| id | url | content |
|---|---|---|
187800 | https://zh.khanacademy.org/math/statistics-probability/random-variables-stats-library/binomial-random-variables/e/calculating-binomial-probability | Calculating binomial probabilities (practice) | Binomial random variables | Khan Academy
© 2025 Khan Academy
Calculating binomial probabilities
You might need: a calculator
Problem
A certain type of tomato has a 70% chance of surviving when transplanted from a pot to the garden. Xiaona transplants 3 plants. Assume each plant's survival is independent. Let X be the number of tomato plants that survive.
What is the probability that exactly 2 of the 3 tomato plants survive?
Round your answer to the nearest hundredth.
P(X = 2) =
Your answer should be an exact decimal, like 0.75
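Worked numerically, P(X = 2) = C(3, 2) · 0.7² · 0.3¹ = 3 · 0.49 · 0.3 = 0.441 ≈ 0.44. A minimal Python check of that arithmetic (the helper name is mine, not part of the exercise):

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# 3 tomato plants, each surviving independently with probability 0.7
prob = binomial_pmf(2, 3, 0.7)
print(round(prob, 2))  # 0.44
```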
|
187801 | https://fiveable.me/discrete-mathematics/unit-8/euler-hamiltonian-paths/study-guide/nxBy2f3LfpTk9HLo | Euler and Hamiltonian Paths | Discrete Mathematics Class Notes | Fiveable | Fiveable
🧩Discrete Mathematics Unit 8 Review
8.3 Euler and Hamiltonian Paths
Written by the Fiveable Content Team • Last updated September 2025
Euler and Hamiltonian paths are key concepts in graph theory. They help us understand how to traverse graphs efficiently, visiting edges or vertices exactly once. These ideas have real-world applications in route planning, puzzle solving, and network design.
Euler paths focus on edges, while Hamiltonian paths deal with vertices. Both have specific conditions for existence and algorithms for finding them. Understanding these paths is crucial for solving complex problems in various fields, from logistics to computer science.
Euler Paths and Circuits
Fundamental Concepts of Euler Paths and Circuits
Euler path traverses every edge in a graph exactly once
Euler circuit forms a closed loop traversing every edge exactly once
Eulerian graph contains an Euler circuit
Semi-Eulerian graph contains an Euler path but not an Euler circuit
Conditions for Eulerian Graphs
Necessary condition for Euler path: at most two vertices have odd degree
Sufficient condition for Euler path: graph is connected and has exactly zero or two odd-degree vertices
Necessary and sufficient condition for Euler circuit: graph is connected and every vertex has even degree
Fleury's algorithm finds Euler circuits in Eulerian graphs
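The degree-parity conditions above are easy to check mechanically. A minimal Python sketch (function name mine) that classifies an undirected multigraph as Eulerian ("circuit"), semi-Eulerian ("path"), or neither, assuming a nonempty edge list:

```python
from collections import defaultdict, deque

def euler_classification(edges):
    """Classify an undirected multigraph given as a list of (u, v) edges:
    'circuit' (Euler circuit exists), 'path' (Euler path only), or 'none'.
    Isolated vertices are ignored, as they carry no edges."""
    degree = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        adj[u].add(v)
        adj[v].add(u)

    # Connectivity check (BFS) over the vertices that carry edges.
    start = next(iter(adj))
    seen = {start}
    queue = deque([start])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                queue.append(y)
    if len(seen) != len(adj):
        return "none"

    odd = sum(1 for d in degree.values() if d % 2)
    if odd == 0:
        return "circuit"   # Eulerian: all degrees even
    if odd == 2:
        return "path"      # semi-Eulerian: exactly two odd-degree vertices
    return "none"

print(euler_classification([("A", "B"), ("B", "C"), ("C", "A")]))  # circuit
```

The Seven Bridges multigraph has four odd-degree vertices, so the function returns "none", matching Euler's impossibility result.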
Practical Applications of Euler Paths
Snow plow route optimization utilizes Euler paths to clear streets efficiently
Mail delivery routes leverage Euler paths to minimize repeated travel
Network design employs Euler circuits for efficient data transmission (token ring networks)
Puzzle solving applies Euler paths in games like "draw without lifting your pencil"
Hamiltonian Paths and Cycles
Core Concepts of Hamiltonian Paths and Cycles
Hamiltonian path visits every vertex in a graph exactly once
Hamiltonian cycle forms a closed loop visiting every vertex exactly once
Hamiltonian graph contains a Hamiltonian cycle
Semi-Hamiltonian graph contains a Hamiltonian path but not a Hamiltonian cycle
Conditions for Hamiltonian Graphs
Necessary conditions for Hamiltonian graphs include connectivity and absence of articulation points
Dirac's theorem provides sufficient condition: graph with n ≥ 3 vertices and minimum degree ≥ n/2 is Hamiltonian
Ore's theorem offers another sufficient condition: sum of degrees of non-adjacent vertices ≥ n for graph with n ≥ 3 vertices
Bondy-Chvátal theorem generalizes sufficient conditions for Hamiltonian graphs
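Dirac's and Ore's conditions can be tested directly from an adjacency structure. A hedged Python sketch (function names mine); note that a False result only means the theorem gives no guarantee, not that the graph is non-Hamiltonian:

```python
from itertools import combinations

def dirac_sufficient(adj):
    """Dirac's theorem: a simple graph on n >= 3 vertices in which every
    vertex has degree >= n/2 is Hamiltonian. adj maps vertex -> set of
    neighbors."""
    n = len(adj)
    return n >= 3 and all(len(nbrs) >= n / 2 for nbrs in adj.values())

def ore_sufficient(adj):
    """Ore's theorem: a simple graph on n >= 3 vertices is Hamiltonian if
    deg(u) + deg(v) >= n for every pair of non-adjacent vertices u, v."""
    n = len(adj)
    return n >= 3 and all(len(adj[u]) + len(adj[v]) >= n
                          for u, v in combinations(adj, 2)
                          if v not in adj[u])
```

Every graph satisfying Dirac's condition also satisfies Ore's, so `ore_sufficient` certifies at least as many graphs.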
Complexity and Algorithms for Hamiltonian Problems
Determining existence of Hamiltonian paths or cycles belongs to NP-complete class of problems
Held-Karp algorithm solves Hamiltonian cycle problem in O(n² · 2ⁿ) time
Nearest neighbor algorithm provides heuristic approach for finding approximate Hamiltonian cycles
Genetic algorithms offer alternative method for tackling Hamiltonian problems in large graphs
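The nearest neighbor heuristic mentioned above greedily extends a tour to the closest unvisited vertex. A minimal sketch, assuming a complete graph given as a symmetric distance matrix (function name mine):

```python
def nearest_neighbor_tour(dist, start=0):
    """Nearest-neighbor heuristic on a complete weighted graph.
    dist is a symmetric matrix (list of lists); returns a closed tour
    visiting every vertex once and returning to start. Fast, but the
    resulting tour is generally not optimal."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: dist[last][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)
    return tour

dist = [[0, 1, 4, 3],
        [1, 0, 2, 5],
        [4, 2, 0, 1],
        [3, 5, 1, 0]]
print(nearest_neighbor_tour(dist))  # [0, 1, 2, 3, 0]
```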
Applications
Optimization Problems in Graph Theory
Traveling Salesman Problem seeks shortest Hamiltonian cycle in weighted graph
Nearest Neighbor and 2-opt heuristics provide approximate solutions for Traveling Salesman Problem
Branch and bound algorithm offers exact solution for smaller instances of Traveling Salesman Problem
Vehicle routing problems extend Traveling Salesman Problem to multiple vehicles and constraints
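The 2-opt heuristic improves a given tour by removing two edges and reconnecting the tour with the segment between them reversed, whenever that shortens it. A minimal sketch (function name mine), taking a closed tour like [0, 1, 2, 3, 0] and a symmetric distance matrix:

```python
def two_opt(tour, dist):
    """Repeatedly apply 2-opt moves until no segment reversal shortens
    the closed tour. Returns a new (locally optimal) tour."""
    improved = True
    tour = tour[:]
    while improved:
        improved = False
        for i in range(1, len(tour) - 2):
            for j in range(i + 1, len(tour) - 1):
                # Compare edges (i-1, i) and (j, j+1) with the crossed
                # pair (i-1, j) and (i, j+1) obtained by reversing.
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[j + 1]
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour

dist = [[0, 1, 4, 3],
        [1, 0, 2, 5],
        [4, 2, 0, 1],
        [3, 5, 1, 0]]
print(two_opt([0, 2, 1, 3, 0], dist))  # [0, 1, 2, 3, 0]
```

In practice 2-opt is run on a starting tour from a constructive heuristic such as nearest neighbor.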
Historical and Practical Graph Traversal Problems
Königsberg Bridge Problem led to development of Euler path concept
Seven Bridges of Königsberg represented as multigraph with land masses as vertices and bridges as edges
Euler proved impossibility of traversing all bridges exactly once, establishing foundations of graph theory
Modern applications include optimizing routes for mail delivery and waste collection
Graph Theory in Scientific and Engineering Applications
Utility graph traversal problem involves connecting houses to utilities without crossing lines
K3,3 graph represents utility problem, proving its impossibility on a plane
Molecular structure analysis uses graph theory to represent chemical bonds and atomic arrangements
Graph isomorphism helps identify structurally equivalent molecules in chemistry
© 2025 Fiveable Inc. All rights reserved.
AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.
|
187802 | https://www.chemistrysteps.com/preparation-and-reaction-mechanism-of-carboxylic-anhydrides/ | Preparation and Reaction Mechanisms of Carboxylic Anhydrides - Chemistry Steps
Carboxylic Acids and Their Derivatives
Preparation and Reaction Mechanisms of Carboxylic Anhydrides
Preparation of Anhydrides
Anhydrides are prepared by dehydration of carboxylic acids either at extremely high temperatures (800 °C) or by using P₂O₅ as a dehydrating agent:
Most carboxylic acids, however, are not compatible with such excessive heating, and even P₂O₅ is rarely a practical option. Therefore, the most common laboratory method for preparing carboxylic anhydrides is to convert the acid to an acid chloride and a carboxylate salt, which react readily at room temperature:
Reactions of Anhydrides
Although slightly less reactive than acid chlorides, anhydrides react with the same nucleophiles by an essentially identical mechanism.
The only difference in these nucleophilic acyl substitutions is the leaving group: a carboxylate instead of a chloride.
Hydrolysis of Anhydrides
Anhydrides can be hydrolyzed to carboxylic acids. Adding a base increases the rate of this conversion:
Esters from Anhydrides
Anhydrides can be converted into esters by using an alcohol with a base or, even better, an alkoxide, which is a stronger nucleophile:
In a similar reaction, thioesters can be prepared from anhydrides:
Amides from Anhydrides
Amines, being good nucleophiles, react readily with anhydrides to form primary, secondary or tertiary amides:
Anhydrides in Grignard Reaction
Just like the acid chlorides and esters, anhydrides react with excess Grignard reagent, forming a tertiary alcohol:
Reduction of Anhydrides
Anhydrides can be reduced to primary alcohols using LiAlH₄:
To stop the reduction at the aldehyde stage, a less powerful reducing agent such as lithium tri(tert-butoxy)aluminum hydride, LiAl(OtBu)₃H, can be used:
Alkyne Synthesis Reactions Practice Problems
Radical Halogenation in Organic Synthesis
Grignard Reaction in Organic Synthesis with Practice Problems
Ortho Para Meta in EAS with Practice Problems
Orientation in Benzene Rings With More Than One Substituent
Carbohydrates
Carbohydrates – Structure and Classification
Erythro and Threo
R and S Configuration on Fischer Projections
D and L Sugars
Aldoses and Ketoses: Classification and Stereochemistry
Epimers and Anomers
Converting Fischer, Haworth, and Chair forms of Carbohydrates
Mutarotation
Glycosides
Isomerization of Carbohydrates
Ether and Ester Derivatives of Carbohydrates
Oxidation of Monosaccharides
Reduction of Monosaccharides
Kiliani–Fischer Synthesis
Wohl Degradation
Carbohydrates Practice Problem Quiz
Chemistry Steps LLC
Copyright © 2016 - 2025 Chemistry Steps |
187803 | https://brilliant.org/wiki/inequalities-with-strange-equality-conditions/ | Inequalities with Strange Equality Conditions
Contributed by Daniel Liu, Calvin Lin, and Jimin Khim.
This page serves to debunk the myth that
An expression attains its maximum or minimum when all (some) of the variables are equal.
If a,b are real numbers such that a+b=4, what is the minimum value of
$[(a-1)(b-1)]^2?$
Clearly, since the expression is a perfect square, the minimum that it can attain is 0. This can indeed be attained, say with $a=1, b=3$. The minimum does not occur when $a=b=2$, even though the expression is symmetric. □
Some inequalities are less obvious, and their equality cases do not occur when all variables are equal.
If $x,y,z \ge 0$ satisfy $x+y+z=3$, find the maximum value of $x^2y+y^2z+z^2x$.
Setting $x=y=z=1$ gives $x^2y+y^2z+z^2x=3$. However, is this the maximum value?
In fact, setting $x=0, y=2, z=1$ gives $x^2y+y^2z+z^2x=4$, which is the actual maximum value. □
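As a numeric sanity check (an illustration, not a proof), a brute-force grid search over the constraint set recovers the maximum of 4 at a boundary-style point rather than at $x=y=z=1$:

```python
# Grid search for the maximum of x^2*y + y^2*z + z^2*x
# subject to x + y + z = 3 with x, y, z >= 0.

def f(x, y, z):
    return x * x * y + y * y * z + z * z * x

n = 300  # grid resolution
best_val, best_pt = -1.0, None
for i in range(n + 1):
    for j in range(n + 1 - i):
        x = 3.0 * i / n
        y = 3.0 * j / n
        z = 3.0 - x - y
        v = f(x, y, z)
        if v > best_val:
            best_val, best_pt = v, (x, y, z)

print(best_val, best_pt)  # 4.0 (0.0, 2.0, 1.0)
```

The grid contains the point $(0, 2, 1)$ exactly, so the search attains the value 4, while the symmetric point $x=y=z=1$ only gives 3.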
Try your hand at the list of problems below. Be warned that these problems are hard to (properly) solve, and could require a lot of ingenuity.
Let $a,b,c,d \in \left[\frac{1}{2}, 2\right]$. Suppose $abcd=1$. Find the maximum possible value of $\left(a+\frac{1}{b}\right)\left(b+\frac{1}{c}\right)\left(c+\frac{1}{d}\right)\left(d+\frac{1}{a}\right)$.
The correct answer is: 25
Let a,b,c,d be positive integers such that a+b+c+d=120, and define S=ab+bd+da+bc+cd. Find the number of ordered quadruples (a,b,c,d) for which S attains its maximum.
The correct answer is: 39
Let x,y,z be non-negative real numbers satisfying the condition x+y+z=1. The maximum possible value of
$x^3y^3+y^3z^3+z^3x^3$
has the form $\frac{a}{b}$, where $a$ and $b$ are positive, coprime integers. What is the value of $a+b$?
The correct answer is: 65
Without loss of generality, we may assume that $x \ge y \ge z \ge 0$. Since $y$ and $z$ are non-negative real numbers, $x = 1-y-z \le 1$.
Furthermore, since squares are non-negative,
$\left(x-\frac{1}{2}\right)^2 \ge 0 \Rightarrow x^2 - x + \frac{1}{4} \ge 0 \Rightarrow \frac{1}{4} \ge x(1-x).$
We can cube both sides of the inequality since $x(1-x) \ge 0$, and both sides are non-negative. This gives $\frac{1}{64} \ge x^3(1-x)^3 = x^3(y+z)^3 = x^3y^3 + 3x^3y^2z + 3x^3yz^2 + x^3z^3.$ Observe that the right hand side is very close to the quantity that we want to maximize. In fact, since $0 \le y, z \le x$, we have $3x^3yz^2 \ge 3(y^2z)yz^2 = 3y^3z^3 \ge y^3z^3.$ Thus, $\frac{1}{64} \ge x^3y^3 + 3x^3y^2z + 3x^3yz^2 + x^3z^3 \ge x^3y^3 + y^3z^3 + z^3x^3.$
In order for equality to hold throughout, we need $x=\frac{1}{2}$ in the first inequality, and $z=0$ in the last inequality. We can check that the maximum $\frac{1}{64}$ is attained at $(x,y,z) = \left(\frac{1}{2}, \frac{1}{2}, 0\right)$. Hence, $a+b = 1+64 = 65$.
Note: This inequality is tricky because the maximum is attained at $\left(\frac{1}{2}, \frac{1}{2}, 0\right)$ and its permutations, hence standard approaches tend to fail.
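The claimed maximum of 1/64 can also be checked numerically (a sanity check under the same constraint, not a proof):

```python
# Grid search for the maximum of x^3*y^3 + y^3*z^3 + z^3*x^3
# subject to x + y + z = 1 with x, y, z >= 0.

def g(x, y, z):
    return x**3 * y**3 + y**3 * z**3 + z**3 * x**3

n = 200
best_val, best_pt = -1.0, None
for i in range(n + 1):
    for j in range(n + 1 - i):
        x, y = i / n, j / n
        z = 1.0 - x - y
        v = g(x, y, z)
        if v > best_val:
            best_val, best_pt = v, (x, y, z)

print(best_val)  # 0.015625, which equals 1/64
```

The maximizer found by the search lies on the boundary: one variable is 0 and the other two are 1/2, in agreement with the solution above.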
Let $x,y,z$ be real numbers such that $x^2+y^2+z^2+(x+y+z)^2 = 9$ and $xyz \le \frac{15}{32}$. To 2 decimal places, what is the greatest possible value of $x$?
The correct answer is: 2.5
Suppose $a$, $b$, and $c$ are non-negative real numbers with $a+b+c=1$. The largest possible value of the expression $ab^2+bc^2+ca^2$ can be written as $\frac{n}{m}$, where $n$ and $m$ are coprime positive integers. What is the value of $n+m$?
The correct answer is: 31
What is the smallest real number k (to 3 decimal places), such that for all ordered triples of non-negative reals (a,b,c) which satisfy a+b+c=1, we have
$\frac{a}{1-c}+\frac{b}{1-a}+\frac{c}{1-b} \le 1+k?$
The correct answer is: 0.250
If a,b,c are non-negative real numbers, what is the maximum value of
$\frac{ab^2+bc^2+ca^2}{(a+b+c)^3}?$
The correct answer is: 0.148148148
Consider all pairs of real numbers such that a+b=3.
What is the minimum value of
$a^2b^2 - 2a^2b - 2ab^2 + a^2 + 4ab + b^2 - 2a - 2b + 1?$
The correct answer is: 0
Let a triangle with sides of length $a,b,c$ have perimeter 2. What is the maximum value of $k$ such that $\frac{1-a}{b}+\frac{1-b}{c}+\frac{1-c}{a} \ge k$ is always true? Prove your claim.
The correct answer is: 1.000
Over all positive triples of real numbers, what is the largest value of k (to 2 decimal places) such that
$\sqrt{\frac{a}{b+c}}+\sqrt{\frac{b}{c+a}}+\sqrt{\frac{c}{a+b}} \ge k?$
The correct answer is: 2.00
For real numbers a and b such that a>b>0, what is the minimum value of
$a+\frac{1}{b(a-b)}?$
Cite as: Inequalities with Strange Equality Conditions. Brilliant.org. Retrieved 17:05, August 23, 2025. |
187804 | https://www.dictionary.com/browse/fanciful |
fanciful
[fan-si-fuhl]
adjective
1. characterized by or showing fancy; capricious or whimsical in appearance.
a fanciful design of butterflies and flowers.
2. suggested by fancy; imaginary; unreal.
fanciful lands of romance.
Synonyms: illusory, baseless, visionary
3. led by fancy rather than by reason and experience; whimsical.
a fanciful mind.
fanciful
/ ˈfænsɪfʊl/
adjective
1. not based on fact; dubious or imaginary
fanciful notions
2. made or designed in a curious, intricate, or imaginative way
3. indulging in or influenced by fancy; whimsical
Other Word Forms
fancifully adverb
fancifulness noun
overfanciful adjective
overfancifully adverb
overfancifulness noun
unfanciful adjective
Word History and Origins
Origin of fanciful
First recorded in 1620–30; fancy + -ful
Example Sentences
Examples are provided to illustrate real-world usage of words in context. Any opinions expressed do not reflect the views of Dictionary.com.
The continent may call, where he could find a set-up that suits him, but the notion of a big Premier League post is fanciful in the extreme.
From BBC
The possibility of having to turn, cap in hand, to the International Monetary Fund for a loan or to require intervention from the European Central Bank, is no longer fanciful.
From BBC
In response to the petition, one Supreme Court judge on the two-judge bench called the allegations "fanciful ideas".
From BBC
To think treasure hunters could be making the journey in the pitch-black sounds fanciful and yet this site has fallen victim to nighthawking multiple times.
From BBC
The all-ages appeal of the museum is a testament to the everlasting approach of the couple’s narratives, which handle difficult life moments with a fanciful nature, but never hold your hand.
From Los Angeles Times
Related Words
absurd
bizarre
extravagant
fantastic
fantastical
fictional
imaginative
offbeat
preposterous
unreal
whimsical
187805 | https://general.chemistrysteps.com/the-effect-of-a-common-ion-on-solubility/ |
Chemistry Steps
General Chemistry
### Acid–Base and Solubility Equilibria: The Effect of a Common Ion on Solubility
In the previous post, we talked about the solubility and solubility product constant (Ksp) of ionic compounds with low solubility.
For example, the dissolution equation and the Ksp for CaF2 are:
CaF2(s) ⇆ Ca2+(aq) + 2F–(aq)
Ksp = [Ca2+][ F–]2 = 3.9 x 10-11
By assigning x mol/L as the concentration of CaF2 dissolved in a saturated solution, we were able to determine the molar solubility of CaF2 from the Ksp.
Setting up an ICE table helps determine the concentrations correctly.
CaF2(s) ⇆ Ca2+(aq) + 2F–(aq)

|         | [Ca2+] | [F–] |
|---------|--------|------|
| Initial | 0      | 0    |
| Change  | +x     | +2x  |
| Equil   | x      | 2x   |
So, the expression for Ksp can be written as:
Ksp = [Ca2+][F–]2 = (x)(2x)2 = 3.9 x 10-11
Therefore,
(x)(2x)2 = 3.9 x 10-11
4x3 = 3.9 x 10-11
x = 2.1 x 10-4
2.1 x 10-4 mol/L is the concentration of dissolved CaF2 because dissolved CaF2 is in a 1:1 ratio with Ca2+, and this is the molar solubility of CaF2.
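The arithmetic above can be reproduced in a few lines (a sketch; the Ksp value is taken from the text):

```python
# Molar solubility of CaF2 in pure water, from Ksp = (x)(2x)^2 = 4x^3.
Ksp = 3.9e-11
x = (Ksp / 4) ** (1 / 3)
print(f"{x:.1e}")  # 2.1e-04  (mol/L)
```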
The Effect of a Common Ion on Solubility
Let’s now assume that the solution of CaF2 contains Ca(NO3)2 with a concentration of 0.20 M. What is the molar solubility of CaF2 in this solution?
Before doing the calculations, we can predict that the solubility will go down because Ca(NO3)2 is a strong electrolyte and completely dissociates into Ca2+ and NO3– ions. Importantly, Ca2+ is also formed when CaF2 dissolves in water, so it is a common ion, and, as we discussed earlier, it shifts the equilibrium in the reverse direction according to Le Châtelier's principle.
And now, let’s calculate the molar solubility of CaF2. It is going to be the same procedure, except the initial concentration of Ca2+ ions is not zero since the dissociation of Ca(NO3)2 produces an equivalent amount of the cation.
Ca(NO3)2(aq) ⇆ Ca2+(aq) + 2NO3–(aq) (0.20 M Ca(NO3)2 gives 0.20 M Ca2+ and 0.40 M NO3–)

CaF2(s) ⇆ Ca2+(aq) + 2F–(aq)

|         | [Ca2+]   | [F–] |
|---------|----------|------|
| Initial | 0.20     | 0    |
| Change  | +x       | +2x  |
| Equil   | 0.20 + x | 2x   |
So, the expression for Ksp is:
Ksp = [Ca2+][ F–]2 = (0.20 + x)(2x)2 = 3.9 x 10-11
Now, because the dissociation of CaF2 is negligible compared to that of Ca(NO3)2, we assume that 0.20 + x ≈ 0.20, and the simplified equation will be:
(0.20)(2x)2 = 3.9 x 10-11
0.80x2 = 3.9 x 10-11
Therefore,
x = 7.0 x 10-6
The x is very small compared to 0.20, and therefore the approximation was valid: 7.0 x 10-6 M is the molar solubility of CaF2 in the presence of 0.20 M Ca(NO3)2. As expected, it is lower than the solubility of CaF2 in pure water (2.1 x 10-4 M).
Let’s do another example, where the solution contains 0.15 M NaF.
In this case, the common ion is F–, and therefore its initial concentration is going to be equal to 0.15 M since NaF is a strong electrolyte and completely dissociates in aqueous solutions.
NaF(aq) ⇆ Na+(aq) + F–(aq) (0.15 M NaF gives 0.15 M Na+ and 0.15 M F–)

CaF2(s) ⇆ Ca2+(aq) + 2F–(aq)

|         | [Ca2+] | [F–]      |
|---------|--------|-----------|
| Initial | 0      | 0.15      |
| Change  | +x     | +2x       |
| Equil   | x      | 0.15 + 2x |
So, the expression for Ksp is:
Ksp = [Ca2+][ F–]2 = (x)(0.15 + 2x)2 = 3.9 x 10-11
Now, because the dissociation of CaF2 is negligible compared to that of NaF, we assume that 0.15 + 2x ≈ 0.15, and the simplified equation will be:
(x)(0.15)2 = 3.9 x 10-11
0.0225x = 3.9 x 10-11
Therefore,
x = 1.7 x 10-9
The x is very small compared to 0.15, and therefore the approximation was valid: 1.7 x 10-9 M is the molar solubility of CaF2 in the presence of 0.15 M NaF. As expected, it is lower than the solubility of CaF2 in pure water (2.1 x 10-4 M).
So, to summarize, remember that in general, the solubility of a slightly soluble ionic compound decreases with the presence of a common ion in the solution.
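As a compact recap, the three solubilities worked out above can be recomputed with the same small-x approximations (a sketch using only the numbers from the text):

```python
Ksp = 3.9e-11

# Pure water: Ksp = (x)(2x)^2 = 4x^3
s_water = (Ksp / 4) ** (1 / 3)

# 0.20 M Ca(NO3)2 (common ion Ca2+): Ksp ~= (0.20)(2x)^2 = 0.80 x^2
s_ca = (Ksp / 0.80) ** 0.5

# 0.15 M NaF (common ion F-): Ksp ~= (x)(0.15)^2 = 0.0225 x
s_naf = Ksp / 0.0225

print(f"{s_water:.1e} {s_ca:.1e} {s_naf:.1e}")  # 2.1e-04 7.0e-06 1.7e-09
```

Both common-ion solutions suppress the solubility, and the fluoride common ion suppresses it more strongly because [F–] enters the Ksp expression squared.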
187806 | https://byjus.com/physics/isothermal-process/ | Physics studies systems and objects to measure their temperatures, motions and other physical characteristics. It can be applied to anything from single-celled organisms to mechanical systems to galaxies, stars and planets and the processes that govern them. In physics, thermodynamics is a branch that deals with the relationships between heat energy and other forms of energy. It describes how thermal energy is converted into other forms of energy and how it affects matter.
An isothermal process is a thermodynamic process in which the temperature of a system remains constant. The transfer of heat into or out of the system happens so slowly that thermal equilibrium is maintained. In other words, an isothermal process is a change of a substance, object or system at a particular constant temperature. Usually, there are two scenarios under which this process can take place. If a system is in contact with an external thermal reservoir, then, to maintain thermal equilibrium, the system slowly adjusts itself to the temperature of the reservoir through heat exchange. In contrast, when no heat transfer occurs between a system and its surroundings, the temperature of the system changes instead; this is known as the adiabatic process.
Difference Between Isothermal and Adiabatic Process
An isothermal process is a process that occurs at constant temperature, while other parameters of the system can change accordingly. In an adiabatic process, on the other hand, no heat is transferred between the system and its surroundings. The main difference between the isothermal and adiabatic processes is that the isothermal process occurs at constant temperature, while in an adiabatic process the temperature generally varies. The work done in an isothermal process is accompanied by a change in the net heat content of the system, while the work done in an adiabatic process comes from the change in its internal energy.
Examples of Isothermal Process
An isothermal process occurs in systems that have some means of regulating the temperature. This process occurs in systems ranging from highly structured machines to living cells. A few examples of an isothermal process are given below.
What is Boyle’s Law?
An isothermal process is of special interest for ideal gases. An ideal gas is a hypothetical gas whose molecules do not interact and collide elastically with each other. Joule's second law states that the internal energy of a fixed amount of an ideal gas depends only on its temperature. Thus, the internal energy of an ideal gas in an isothermal process is constant.
In an isothermal condition, for an ideal gas, the product of Pressure and Volume (PV) is constant. This is known as Boyle’s law. Physicist and chemist Robert Boyle published this law in 1662. Boyle’s law is often termed as Boyle–Mariotte law, or Mariotte’s law because French physicist Edme Mariotte independently discovered the same law in 1679.
Boyle’s Law Equation
The absolute pressure exerted by a given mass of an ideal gas is inversely proportional to the volume it occupies if the temperature and the amount of gas remain unchanged within a closed system.
There are a couple of ways in which the above-stated law can be expressed. The most basic way is given as follows:
PV = k
where P is the pressure, V is the volume and k is a constant.
The law can also be used to find the volume and pressure of a system when the temperature is held constant in the system as follows:
PiVi = PfVf

where Pi and Vi are the initial pressure and volume, and Pf and Vf are the final pressure and volume of the gas.
The way people breathe and exhale air out of their lungs can be explained by Boyle’s Law. When the diaphragm contracts and expands, lung volume decreases and increases respectively, changing the air pressure inside them. The pressure difference between the interior of the lungs and the external air produces either inhalation or exhalation.
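As a small numeric sketch of the relation PiVi = PfVf (the numbers below are made up for illustration; only the relation itself comes from the text):

```python
# Boyle's law at constant temperature: P_i * V_i = P_f * V_f.
P_i = 100.0  # initial pressure in kPa (assumed value)
V_i = 2.0    # initial volume in L (assumed value)
V_f = 0.5    # final volume in L (assumed value)

P_f = P_i * V_i / V_f
print(P_f)  # 400.0 -> compressing to a quarter of the volume quadruples the pressure
```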
When scientists study isothermal processes in systems, they examine heat and energy and their relation and also the mechanical energy it takes to change or maintain the temperature of a system. Such understanding helps biologists study the regulation of temperature in living organisms. It also comes into play in planetary science, space science, engineering, geology, and many other branches of science.
187807 | https://physics.stackexchange.com/questions/446658/regarding-the-mathbfe-%C3%97-mathbfb-drift-in-the-earths-magnetic-field | electromagnetism - Regarding the $\mathbf{E} × \mathbf{B} $ drift in the Earth's magnetic field - Physics Stack Exchange
Regarding the E×B drift in the Earth's magnetic field
Asked 6 years, 9 months ago
Modified 4 months ago
Viewed 891 times
So I have a burning question: The only reason that the E×B drift doesn't generate an electric current is because both the electrons and the positive ions move towards the same direction (towards Earth's ionosphere), therefore a charge separation isn't formed? Are opposite velocities the deal breaker for charge separation or am I missing something else?
electromagnetism
atmospheric-science
plasma-physics
geomagnetism
edited Mar 30, 2024 at 11:54 by Sancol.
asked Dec 11, 2018 at 17:36 by Lysandros Bafaloukos
Can you explain what you mean by ExB drift? I know this as the momentum or equivalently the energy transport of the EM field. – my2cts, Dec 11, 2018 at 18:40
The force that comes from the electrical current of the magnetotail being perpendicular to Earth's magnetic field, which results in the particles drifting towards Earth. – Lysandros Bafaloukos, Dec 11, 2018 at 18:43
2 Answers
The only reason that the E x B drift doesn't generate an electric current is because both the electrons and the positive ions move towards the same direction...
The ExB-drift is independent of charge, so yes, both ions and electrons will undergo the same ExB-drift velocity.
...therefore a charge separation isn't formed?
Counter streaming particles of opposite charge do not a charge separation make. Rather, such a scenario leads to a current given by:
$$\mathbf{j} = \sum_s n_s q_s \mathbf{V}_s$$
where $n_s$ is the number density, $q_s$ is the total charge (including sign), and $\mathbf{V}_s$ is the bulk velocity vector of species $s$.
As an example, if electrons and protons flow in the same direction both at $V_o$, there would be zero net current (assuming quasi-neutrality) because the two species have the opposite charge. If they flow relative to each other, i.e., $V_e \neq V_i$, then there will be a net current.
Are opposite velocities the deal breaker for charge separation or am I missing something else?
Charge separation in a plasma is generally difficult because it results in electric fields. Since a plasma is an ionized gas of freely moving electrons and ions, any electric field will quickly act to eliminate itself by doing work on the charged particles. Most plasmas have incredibly high conductivities, so the lifetimes of charge separations are very short.
I think you are confusing flow with charge separation, which are two different phenomena. Particle species in plasmas can flow relative to each other without generating any charge separation.
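The cancellation described above can be illustrated numerically (a sketch with made-up densities and velocities, assuming quasi-neutrality; values are illustrative only):

```python
e = 1.602e-19  # elementary charge in C
n = 1.0e6      # number density of each species in m^-3 (assumed)

# Both species drift together, as in the E x B drift: the currents cancel.
V = 1.0e4  # common drift speed in m/s along one axis (assumed)
j_together = n * (+e) * V + n * (-e) * V
print(j_together)  # 0.0

# Electrons streaming relative to the ions (V_e != V_i): a net current remains.
V_e = 2.0e4
j_net = n * (+e) * V + n * (-e) * V_e
print(j_net)  # negative: a nonzero net current antiparallel to the flow
```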
answered Dec 15, 2018 at 16:19 by honeste_vivere
In uniform crossed magnetic and electric fields, the motion of a charge can be separated into a gyromotion and a constant drift. A constant drift velocity implies zero net force, so the charge doesn't affect the drift velocity.
The drift velocity is derived in George K. Parks, Physics of Space Plasmas (1991) pages 93-97, and the derivation is a bit too long to reproduce here, so I'll just give the answer:
$$\mathbf{v} = \frac{\mathbf{E}\times\mathbf{B}}{B^2}.$$
Note that charge doesn't appear in this expression. The Lorentz force on a particle with charge $q$ is
$$\mathbf{F} = q\mathbf{E} + q\,\mathbf{v}\times\mathbf{B}.$$
If you substitute the drift velocity in this equation and use $(\mathbf{E}\times\mathbf{B})\times\mathbf{B} = (\mathbf{E}\cdot\mathbf{B})\mathbf{B} - B^2\mathbf{E}$, you can confirm that $\mathbf{F}=0$.
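This cancellation is easy to verify numerically (a sketch with arbitrary field values chosen so that E is perpendicular to B; units are immaterial here):

```python
import numpy as np

B = np.array([0.0, 0.0, 2.0])   # magnetic field along z (arbitrary values)
E = np.array([3.0, -1.0, 0.0])  # electric field chosen perpendicular to B

# Drift velocity v = (E x B) / B^2 -- note it is independent of the charge q.
v = np.cross(E, B) / np.dot(B, B)

# The Lorentz force q*E + q*(v x B) vanishes for either sign of charge.
for q in (+1.0, -1.0):
    F = q * E + q * np.cross(v, B)
    print(np.allclose(F, 0.0))  # True
```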
answered Dec 13, 2018 at 17:16 by A. Newell
But the E x B drift actually acts on the particles and propels them towards the ionosphere; the fact that it doesn't distinguish between the two charges is irrelevant. – Lysandros Bafaloukos, Dec 13, 2018 at 17:24
You wanted to know why there is no charge separation, and the answer is that the opposite charges are drifting in the same direction - on average. The E×B drift applies to the center of the gyromagnetic motion, not the instantaneous motion of each charge. – A. Newell, Dec 14, 2018 at 17:24
187808 | https://math.stackexchange.com/questions/4988288/proof-that-a2b2-2ab
Proof that $a^2+b^2 > 2ab$
I'm going through Cunningham's *A Logical Introduction to Proof*. An exercise asks to prove that $a^2+b^2 > 2ab$ by letting $a$ and $b$ be distinct numbers and using the fact that $x^2 > 0$.
I understand $a^2+b^2 > 2ab$ is related to the expanded form of $(a-b)^2$, but starting my proof from something like $a^2+b^2 - 2ab > 0$ feels like assuming the very thing I'm being asked to prove.
What I know so far tells me that I can take $a^2 > 0$, but if I wanted to get $a^2 + b^2$, wouldn't that turn my inequality into $a^2 + b^2 > b^2$ (by the law that states $a < b \implies a+c < b+c$)? Where does the $2ab$ come from, and how can I even start the proof?
inequality
proof-writing
asked Oct 22, 2024 at 17:31 by andalod (edited Oct 22, 2024 at 17:32)
Comments:

- David G. Stork (Oct 22, 2024 at 17:34): You're making it harder than it is. $(a-b)^2 \geq 0$. And thus...?
- Dietrich Burde (Oct 22, 2024 at 17:36): $a^2+b^2-2ab=(a-b)^2>0$ for $a\neq b$. For a different solution, see this post.
- Thomas Andrews (Oct 22, 2024 at 17:36): You don't know $a\neq 0,$ so you can't say $a^2>0.$ You do know $a-b\neq 0,$ however.
- user2661923 (Oct 22, 2024 at 18:36): "...feels like going back to what I'm being asked to prove": it is not circular reasoning. Instead, it is perfectly valid.
3 Answers
You may be experiencing the difference between discovering a proof and articulating it. This is possibly the result of a long-lived movement in mathematics education: the tendency to present results as though they were handed down from on high, without much motivation.
From that perspective, there's no problem with starting your proof with "$a \neq b$, therefore $(a-b)^2 > 0$". Although it may seem arbitrary to some people (it's not clear to them why you would know to start with that), it's not actually logically prior to your desired conclusion. You're not assuming that which you want to prove.
However, for those people, it does seem epistemologically prior, in the sense that they don't see how to motivate starting that way. All I can offer in the way of guidance there is that this seems to me mostly a matter of experience: you see those terms, and your mind will eventually more or less immediately leap to that approach. There's no particular short cut to that state of mind, I'm afraid.
answered Oct 22, 2024 at 17:40 by Brian Tung (edited Oct 22, 2024 at 18:01)
WARNING: possibly wildly overkill, but I literally don't even care (because I think it will help OP).
Similar to Brian Tung's answer. There is a difference between doing scratch work and writing a logical proof.
Scratch work is stuff you quickly scribble down to see what's going on and get ideas going to try to prove the statement/proposition. Often with scratch work for algebraic proofs like the one here, we start with the statement we are trying to prove and try to end up at a familiar true statement. Note that this does not give us a formal proof, however, since in a formal proof, we cannot start by assuming the thing we are trying to prove is true (as that's cheating!).
However, before we even do scratch work, we need to be crystal clear of the precise statement we are trying to prove.
Statement we are trying to prove (A.K.A. "Proposition"): If $a\neq b,$ then $a^2+b^2 > 2ab.$
Scratch work:
$$a^2 +b^2 > 2ab$$
$$+(-2ab) \text{ to both sides: }\quad a^2 + b^2 -2ab > 0 $$
$$ (a-b)^2 > 0.$$
We know that last statement is true, so we have sort of got somewhere, maybe. However, this is not yet a logical proof. It is a bunch of (maybe) equivalent statements that haven't yet been joined up into a logical argument (which is all that a proof is at the end of the day). In our proof proper, we have to start with facts we know to be true and end up with "Therefore it is true that $a^2 +b^2 > 2ab$ whenever $a\neq b$".
Most people who are new to proofs would do well to write something like the following as a solution:
$$ (a-b)^2 = a^2 + b^2 -2ab $$
$$ (a-b)^2 \text{ is always } > 0, \text{ therefore } a^2 + b^2 -2ab > 0,$$
$$ \text{ therefore, by adding } 2ab \text{ to both sides, } a^2 + b^2 > 2ab. $$
However, this is not perfect as it is missing details, like working/reasoning of some steps, and it's also missing all-important implication signs or logical indications like "therefore" or "because". Better is:
We are trying to prove that, If $a\neq b,$ then $a^2+b^2 > 2ab.$ So suppose that $a,b\in\mathbb{R}$ such that $a\neq b.$ [We aim to show that $a^2+b^2 > 2ab.$]
$$ \text{ By expanding brackets, for all } a\in\mathbb{R},\ b\in\mathbb{R},\quad (a-b)^2 = (a-b)(a-b) = a(a-b) -b(a-b)=a^2 -ab -ba+b^2= a^2 + b^2 -2ab. $$
$$ \text{In particular, for all } a\in\mathbb{R},\ b\in\mathbb{R}, \text{ with } a\neq b,\quad (a-b)^2 = a^2 + b^2 -2ab. $$
$$ \text{ Since } a\neq b,\ \text{ it follows that } a-b\neq 0, \text{ and so either } a-b <0 \text{ or } a-b>0. \text{ Either way, we have: } (a-b)^2=(a-b)(a-b),$$
$$ \text{ which is either two positive numbers multiplied together or two negative numbers multiplied together, and therefore is always } > 0.$$
$$\text{ In summary, } (a-b)^2 >0. \text{ Therefore, } a^2 + b^2 -2ab \text{ is always }> 0.$$
$$ \text{ Therefore, by adding } 2ab \text{ to both sides, } a^2 + b^2 > 2ab. $$
This is overkill, but it's better to be overkill with proofs and then later on reel them in by making them more concise, but only once you are confident that the steps you intend on omitting from the proof are correct.
So you see that formulating algebraic proofs takes several stages.
1. Clearly state the proposition you wish to prove.
2. Jot down ideas and scratch work that hopefully sort-of prove the proposition (albeit not in a logical manner, but maybe it would be logical if the order of the steps were reversed).
3. Try to arrange all your ideas into a logically deductive proof.
4. Check to make sure your proof is valid and sound, and that you have started with statements you already know to be true, like axioms, so that the proof finishes with the proposition being shown to be a true statement.
Keep in mind that proofs are usually a work in progress and can always be improved in clarity and brevity. For example, my Proposition statement itself can be clarified to: "If $a,b\in\mathbb{R}$ and $a\neq b,$ then $a^2+b^2 > 2ab$", since one might wonder from the proposition I originally wrote if $a$ and $b$ could be allowed to be complex numbers or something else.
As you get better at proofs, sometimes you can omit some of the details, although you should not omit the logical implication part of your proof, because, for example, $A\implies B$ is not the same as $B \implies A.$
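As a quick numeric spot-check (evidence, not a proof, and not part of the original answer), the proposition and the identity it rests on can be exercised on random distinct values:

```python
import random

# Spot-check of: a != b  implies  a^2 + b^2 > 2ab.
# It leans on the same identity the proof uses: a^2 + b^2 - 2ab = (a - b)^2.
random.seed(0)
for _ in range(10_000):
    a, b = random.uniform(-100, 100), random.uniform(-100, 100)
    if a == b:
        continue  # the claim only covers distinct a, b
    # the identity, up to floating-point rounding:
    assert abs((a - b) ** 2 - (a * a + b * b - 2 * a * b)) < 1e-6
    # the inequality itself:
    assert a * a + b * b > 2 * a * b
print("all checks passed")
```

Of course this checks only finitely many cases; the logical proof above is what covers all of them.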
answered Oct 22, 2024 at 18:58 by Adam Rubinson (edited Oct 23, 2024 at 12:06)
Work with equivalence signs:
$a^2+b^2>2ab \Leftrightarrow a^2-2ab+b^2>0 \Leftrightarrow (a-b)^2 > 0 \Leftrightarrow a \neq b$
answered Oct 22, 2024 at 18:03 by Vosoni
187809 | https://arxiv.org/pdf/2012.05198 | Published Time: Mon, 23 Jan 2023 07:53:35 GMT
On cyclic and nontransitive probabilities ∗
Pavle Vuksanovic † A.J. Hildebrand ‡
August 30, 2020
Abstract
Motivated by classical nontransitivity paradoxes, we call an $n$-tuple $(x_1, \dots, x_n) \in [0,1]^n$ cyclic if there exist independent random variables $U_1, \dots, U_n$ with $P(U_i = U_j) = 0$ for $i \neq j$ such that $P(U_{i+1} > U_i) = x_i$ for $i = 1, \dots, n-1$ and $P(U_1 > U_n) = x_n$. We call the tuple $(x_1, \dots, x_n)$ nontransitive if it is cyclic and in addition satisfies $x_i > 1/2$ for all $i$. Let $p_n$ (resp. $p_n^*$) denote the probability that a randomly chosen $n$-tuple $(x_1, \dots, x_n) \in [0,1]^n$ is cyclic (resp. nontransitive). We determine $p_3$ and $p_3^*$ exactly, while for $n \geq 4$ we give upper and lower bounds for $p_n$ that show that $p_n$ converges to 1 as $n \to \infty$. We also determine the distribution of the smallest, middle, and largest elements in a cyclic triple.
1 Introduction
A classic example of a nontransitive probability paradox is provided by the Efron dice, a set of four dice invented by Bradley Efron and popularized by Martin Gardner. The Efron dice are six-sided dice with face values given as follows:

$$A = \{0, 0, 4, 4, 4, 4\}, \quad B = \{1, 1, 1, 5, 5, 5\}, \quad C = \{2, 2, 2, 2, 6, 6\}, \quad D = \{3, 3, 3, 3, 3, 3\}. \tag{1.1}$$

One can easily check that, with probability $2/3$ each, $B$ beats $A$, $C$ beats $B$, and $D$ beats $C$ while, at the same time, $A$ beats $D$ with probability $2/3$. In this sense, the dice $A, B, C, D$ form a nontransitive cycle. More formally, if we let $A, B, C, D$ denote independent discrete random variables that are uniformly distributed over the values listed in (1.1), then these variables satisfy

$$P(B > A) = P(C > B) = P(D > C) = P(A > D) = \frac{2}{3}. \tag{1.2}$$
∗ Subject Classification: 60C05, 91A60
† University of Illinois, email pavlev2@illinois.edu
‡ University of Illinois, email ajh@illinois.edu (corresponding author)
arXiv:2012.05198v3 [math.PR] 24 Feb 2021
Another classic example of nontransitivity is provided by the following set of three-sided dice, which seems to have first appeared (in a different, but equivalent, context) in a paper by Moon and Moser:

$$A = \{1, 5, 9\}, \quad B = \{2, 6, 7\}, \quad C = \{3, 4, 8\}. \tag{1.3}$$

The three-sided dice $A, B, C$ defined by (1.3) form a nontransitive cycle with probabilities

$$P(B > A) = P(C > B) = P(A > C) = \frac{5}{9}. \tag{1.4}$$
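Both cycles can be verified by exact enumeration over pairs of faces. The following sketch (not part of the paper; the face lists are taken from (1.1) and (1.3)) does this with exact rational arithmetic:

```python
from itertools import product
from fractions import Fraction

def beats(x, y):
    """P(X > Y) for independent uniform rolls of the face lists x and y."""
    wins = sum(1 for a, b in product(x, y) if a > b)
    return Fraction(wins, len(x) * len(y))

# Efron dice (1.1): each die beats its predecessor in the cycle with probability 2/3.
A4, B4, C4, D4 = [0,0,4,4,4,4], [1,1,1,5,5,5], [2,2,2,2,6,6], [3,3,3,3,3,3]
assert all(beats(w, l) == Fraction(2, 3)
           for w, l in [(B4, A4), (C4, B4), (D4, C4), (A4, D4)])

# Moon-Moser dice (1.3): a three-cycle with probability 5/9, as in (1.4).
A3, B3, C3 = [1, 5, 9], [2, 6, 7], [3, 4, 8]
assert all(beats(w, l) == Fraction(5, 9)
           for w, l in [(B3, A3), (C3, B3), (A3, C3)])
print("both nontransitive cycles verified")
```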
There now exists a large body of research motivated by such nontransitivity phenomena; see [2, 5, 7, 9, 10, 11, 13, 15] for some recent work on this subject. It is natural to ask what the most "extreme" level of nontransitivity is that can be achieved with constructions such as the Efron and Moon-Moser dice. Can one replace the probabilities $2/3$ and $5/9$ in (1.2) and (1.4) by even larger numbers? To formalize this question, one can consider, for each integer $n \geq 3$, the quantity

$$\pi_n = \max \min\bigl(P(U_2 > U_1), \dots, P(U_n > U_{n-1}), P(U_1 > U_n)\bigr), \tag{1.5}$$

where the maximum is taken over all sets of independent random variables $U_1, \dots, U_n$. The Efron and Moon-Moser dice constructions show that $\pi_4 \geq 2/3$ and $\pi_3 \geq 5/9$. The quantities $\pi_n$ were first investigated in the 1960s by Steinhaus and Trybula, Trybula [21, 22], Chang, and Usiskin, who showed that $\lim_{n\to\infty} \pi_n = 3/4$ and determined the first few values of $\pi_n$. In particular, the values of $\pi_3$ and $\pi_4$ are given by

$$\pi_3 = \frac{\sqrt{5}-1}{2}, \quad \pi_4 = \frac{2}{3}. \tag{1.6}$$

Thus the Efron dice construction is best-possible in the sense of achieving the value $\pi_n$ for $n = 4$; meanwhile, the Moon-Moser dice construction for $n = 3$ is not best-possible, as $5/9 < (\sqrt{5}-1)/2$. More recently (see, e.g., [3, 11, 17]) it was shown that, for any $n \geq 3$,

$$\pi_n = 1 - \frac{1}{4\cos^2(\pi/(n+2))}. \tag{1.7}$$
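Formula (1.7) can be checked numerically against the special values in (1.6) and the limit $3/4$ (a quick sketch, not from the paper):

```python
import math

# pi_n as given by formula (1.7)
def pi_n(n):
    return 1 - 1 / (4 * math.cos(math.pi / (n + 2)) ** 2)

assert abs(pi_n(3) - (math.sqrt(5) - 1) / 2) < 1e-12   # matches (1.6)
assert abs(pi_n(4) - 2 / 3) < 1e-12                    # matches (1.6)
assert pi_n(4) > pi_n(3)                               # pi_n is increasing here
assert abs(pi_n(10**6) - 0.75) < 1e-9                  # limit 3/4 as n -> infinity
print("formula (1.7) consistent with (1.6)")
```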
Interestingly, the numbers $\pi_n$ defined by (1.5) have come up independently in very different contexts such as graph theory [4, 14, 17] and theoretical computer science [12, 20]. In this paper, we focus on another aspect of the nontransitivity phenomenon that has received less attention in the literature, namely the question of which $n$-tuples can be realized as a tuple of cyclic probabilities $(P(U_2 > U_1), \dots, P(U_n > U_{n-1}), P(U_1 > U_n))$, and how common such tuples are among all $n$-tuples in $[0,1]^n$. We introduce the following definitions:
Definition 1.1 (Cyclic and nontransitive tuples). Let $n$ be an integer with $n \geq 3$.

(i) An $n$-tuple $(x_1, \dots, x_n) \in [0,1]^n$ is called cyclic if there exist independent random variables $U_1, \dots, U_n$ with $P(U_i = U_j) = 0$ for $i \neq j$ such that

$$P(U_{i+1} > U_i) = x_i \quad (i = 1, \dots, n-1) \quad \text{and} \quad P(U_1 > U_n) = x_n. \tag{1.8}$$

(ii) An $n$-tuple $(x_1, \dots, x_n) \in [0,1]^n$ is called nontransitive if it is cyclic and satisfies $x_i > 1/2$ for all $i$.

The dice examples (1.1) and (1.3) show that the 4-tuple $(2/3, 2/3, 2/3, 2/3)$ and the triple $(5/9, 5/9, 5/9)$ are nontransitive and, hence, also cyclic. The $n$-tuple $(1/2, 1/2, \dots, 1/2)$ is cyclic for any $n \geq 3$, as can be seen by taking $U_1, \dots, U_n$ to be independent random variables with a continuous common distribution. On the other hand, the tuple $(1, 1, \dots, 1)$ is not cyclic. Indeed, otherwise there would exist random variables $U_i$ such that, with probability 1, both $U_1 < U_2 < \cdots < U_n$ and $U_n < U_1$ hold. But the first of these relations implies that $U_1 < U_n$ holds with probability 1, contradicting the second relation. A similar contradiction arises whenever the components of $(x_1, \dots, x_n)$ are sufficiently close to 1.

The first non-trivial case of Definition 1.1 is the case $n = 3$, i.e., the case of cyclic triples. For this case, Trybula and, independently, Suck, gave necessary and sufficient conditions for a triple $(x, y, z)$ to be cyclic (see Lemma 2.3 below). For $n \geq 4$ a complete characterization of cyclic $n$-tuples is not known, though some partial results are known (see, for example, Trybula).

In this paper we consider the question of how common cyclic and nontransitive tuples are among all tuples in the $n$-dimensional unit cube $[0,1]^n$. Let $p_n$ (resp. $p_n^*$) be the probability that an $n$-tuple $(x_1, \dots, x_n)$ chosen randomly and uniformly from $[0,1]^n$ is cyclic (resp. nontransitive). These probabilities are given by the volumes of the regions $C_n$ (resp. $C_n^*$) of cyclic (resp. nontransitive) $n$-tuples inside the unit cube $[0,1]^n$; that is, we have

$$p_n = \mathrm{vol}(C_n), \quad p_n^* = \mathrm{vol}(C_n^*), \tag{1.9}$$

where

$$C_n = \{(x_1, \dots, x_n) \in [0,1]^n : (x_1, \dots, x_n) \text{ is cyclic}\}, \tag{1.10}$$
$$C_n^* = \{(x_1, \dots, x_n) \in [0,1]^n : (x_1, \dots, x_n) \text{ is nontransitive}\}. \tag{1.11}$$

In our first theorem, we determine the probabilities $p_3$ and $p_3^*$ exactly.
Theorem 1.

(i) A random triple $(x_1, x_2, x_3)$ in $[0,1]^3$ is cyclic with probability

$$p_3 = \frac{11\sqrt{5}}{4} - \frac{17}{4} - 6\ln(\sqrt{5}-1) \approx 0.627575\ldots \tag{1.12}$$

(ii) A random triple $(x_1, x_2, x_3)$ in $[0,1]^3$ is nontransitive with probability

$$p_3^* = \frac{11\sqrt{5}}{8} - \frac{43}{16} - 3\ln(\sqrt{5}-1) + \frac{3\ln 2}{8} \approx 0.011217\ldots \tag{1.13}$$

In particular, the theorem shows that a random triple in $[0,1]^3$ is more likely than not to be cyclic, while only about 1% of all such triples are nontransitive.

In our second theorem, we consider the case of general $n \geq 4$. We derive upper and lower bounds for the probabilities $p_n$ that show that $p_n$ converges to 1 as $n \to \infty$.

Theorem 2. For any integer $n \geq 4$ the probability $p_n$ that a random $n$-tuple $(x_1, \dots, x_n) \in [0,1]^n$ is cyclic satisfies

$$1 - 3\left(\frac{2}{\pi}\right)^n \leq p_n \leq 1 - 2\left(\frac{1}{4}\right)^n. \tag{1.14}$$

In particular, $p_n$ converges exponentially to 1 as $n \to \infty$.
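Theorem 1 can be sanity-checked by Monte Carlo sampling, using the Trybula/Suck characterization of cyclic triples stated as Lemma 2.3 below. This is a sketch, not from the paper; the sample size and seed are arbitrary:

```python
import math, random

# (x, y, z) is cyclic iff min(x + yz, y + zx, z + xy) <= 1 holds for the triple
# and also for its complement (1-x, 1-y, 1-z); see Lemma 2.3 below.
def is_cyclic(x, y, z):
    def cond(a, b, c):
        return min(a + b * c, b + c * a, c + a * b) <= 1
    return cond(x, y, z) and cond(1 - x, 1 - y, 1 - z)

random.seed(1)
N = 200_000
cyclic = nontrans = 0
for _ in range(N):
    x, y, z = random.random(), random.random(), random.random()
    if is_cyclic(x, y, z):
        cyclic += 1
        if min(x, y, z) > 0.5:
            nontrans += 1

p3 = 11 * math.sqrt(5) / 4 - 17 / 4 - 6 * math.log(math.sqrt(5) - 1)      # (1.12)
p3s = (11 * math.sqrt(5) / 8 - 43 / 16
       - 3 * math.log(math.sqrt(5) - 1) + 3 * math.log(2) / 8)            # (1.13)
assert abs(cyclic / N - p3) < 0.01
assert abs(nontrans / N - p3s) < 0.002
print(f"cyclic: {cyclic / N:.4f} (exact {p3:.4f}), "
      f"nontransitive: {nontrans / N:.4f} (exact {p3s:.4f})")
```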
In our final result we consider the distribution of the smallest, middle, and largest elements of a cyclic triple. Let $(X_1, X_2, X_3)$ be a random vector that is uniformly distributed over the region $C_3$ of all cyclic triples, and let $(X_1^*, X_2^*, X_3^*)$ be the order statistics of $(X_1, X_2, X_3)$. Thus, $X_i$ is the $i$-th coordinate of a random cyclic triple, while $X_i^*$ is the $i$-th smallest among the three coordinates of a random cyclic triple, where "random" is to be interpreted with respect to the usual Lebesgue measure on $C_3$.

In particular, $X_1^*$ is the smallest element of a random cyclic triple, and the triple $(X_1, X_2, X_3)$ is nontransitive if and only if $X_1^* > 1/2$. Thus it is of interest to determine the precise distribution of the random variable $X_1^*$. More generally, in the following theorem we determine the density function $f_i(x)$ for each of the three random variables $X_i^*$, $i = 1, 2, 3$.

Theorem 3. Let $f_1(x)$, $f_2(x)$, and $f_3(x)$ denote, respectively, the density function of the smallest, middle, and largest element of a random cyclic triple; i.e., $f_i(x)$ is the density of the random variable $X_i^*$ defined above. Then:
$$f_1(x) = \begin{cases} \dfrac{3}{p_3}\left(x^3 - 3x^2 + \dfrac{1-x}{2-x} - (1-x)\ln(1-x)\right) & \text{if } 0 \leq x \leq \dfrac{3-\sqrt{5}}{2}, \\[1ex] \dfrac{3}{p_3}\left(x^2 - 3x + 1 - (1-x)\ln(1-x)\right) & \text{if } \dfrac{3-\sqrt{5}}{2} < x \leq \dfrac{1}{2}, \\[1ex] \dfrac{3}{p_3}\left(x^2 + x - 1 + (1-x)\ln(1-x) - 2(1-x)\ln x\right) & \text{if } \dfrac{1}{2} < x \leq \dfrac{\sqrt{5}-1}{2}, \\[1ex] 0 & \text{otherwise}; \end{cases} \tag{1.15}$$

$$f_2(x) = \begin{cases} \dfrac{3}{p_3}\left(3x^2 - x^3\right) & \text{if } 0 \leq x \leq \dfrac{3-\sqrt{5}}{2}, \\[1ex] \dfrac{6}{p_3}\left(3x - x^2 - \dfrac{1}{2(1-x)}\right) & \text{if } \dfrac{3-\sqrt{5}}{2} < x \leq \dfrac{1}{2}, \\[1ex] f_2(1-x) & \text{if } \dfrac{1}{2} < x \leq 1, \\[1ex] 0 & \text{otherwise}; \end{cases} \tag{1.16}$$

$$f_3(x) = f_1(1-x), \tag{1.17}$$

where $p_3$ is given by (1.12).
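The piecewise formula (1.15) can be sanity-checked numerically: $f_1$ should be continuous at its breakpoints, vanish at $(\sqrt{5}-1)/2 = \pi_3$, and integrate to 1 (a sketch using simple trapezoidal integration, not from the paper):

```python
import math

# Numeric sanity check of (1.15), the density of the smallest element.
SQRT5 = math.sqrt(5)
P3 = 11 * SQRT5 / 4 - 17 / 4 - 6 * math.log(SQRT5 - 1)    # formula (1.12)
B1, B2, B3 = (3 - SQRT5) / 2, 0.5, (SQRT5 - 1) / 2        # breakpoints

def f1(x):
    if 0 <= x <= B1:
        return 3 / P3 * (x**3 - 3 * x**2 + (1 - x) / (2 - x)
                         - (1 - x) * math.log(1 - x))
    if B1 < x <= B2:
        return 3 / P3 * (x**2 - 3 * x + 1 - (1 - x) * math.log(1 - x))
    if B2 < x <= B3:
        return 3 / P3 * (x**2 + x - 1 + (1 - x) * math.log(1 - x)
                         - 2 * (1 - x) * math.log(x))
    return 0.0

for b in (B1, B2):                       # continuity at the breakpoints
    assert abs(f1(b - 1e-9) - f1(b + 1e-9)) < 1e-6
assert abs(f1(B3)) < 1e-9                # the density vanishes at pi_3

N = 200_000                              # trapezoidal rule over [0, B3]
h = B3 / N
total = (f1(0) + f1(B3)) / 2 * h + sum(f1(i * h) for i in range(1, N)) * h
assert abs(total - 1) < 1e-3
print(f"integral of f1 ≈ {total:.5f}")
```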
Figure 1: The density functions $f_1(x)$ and $f_2(x)$ of the smallest, respectively middle, value in a random cyclic triple. The dark-shaded portion of the graph on the left corresponds to nontransitive triples.

The densities $f_1(x)$ and $f_2(x)$ are shown in Figure 1. The dark-shaded portion of the graph of $f_1(x)$ is the portion of this density corresponding to nontransitive triples. As can be seen from the graph, the contribution of such triples to cyclic triples is quite small. This is consistent with the result of Theorem 1, which shows that nontransitive triples make up a proportion of only around $0.011217/0.627575 \approx 1/56$ of all cyclic triples.

Formula (1.15) shows that the density $f_1(x)$ is supported on the interval $[0, (\sqrt{5}-1)/2]$. Note that the right endpoint of this interval, $(\sqrt{5}-1)/2$, is equal to the number $\pi_3$ defined above (see (1.5) and (1.6)). Figure 1 shows that the density $f_1(x)$ is strictly positive on the entire interval $[0, (\sqrt{5}-1)/2)$, and that it has a unique mode (i.e., local maximum) located at around $0.1$.

It is interesting to compare the distribution of $X_1^*$, the smallest element in a random cyclic triple in $[0,1]^3$, to that of $X^*$, the smallest element in an unrestricted random triple in $[0,1]^3$. An easy calculation shows that $X^*$ has density function $f(x) = 3(1-x)^2$, mean $1/4$, and median $1 - 2^{-1/3} = 0.206\ldots$. In contrast to the density function $f_1(x)$, the latter density function is supported on the entire interval $[0,1]$ and is strictly decreasing on this interval. Further statistics on the distributions $f_1(x)$ and $f(x)$ are given in Table 1.

Random Variable | Expected Value | Median   | Mode
$X_1^*$         | 0.211...       | 0.197... | 0.107...
$X^*$           | 0.25           | 0.206... | 0

Table 1: Statistics on the distributions of $X_1^*$, the smallest element in a random cyclic triple, and $X^*$, the smallest element in a random triple in $[0,1]^3$.

The graph on the right of Figure 1 shows the density of the middle value in a random cyclic triple. As can be seen from formula (1.16), this distribution is symmetric with respect to the line $x = 1/2$, and it is supported on the full interval $[0,1]$.

Formula (1.17) shows that the distribution of the largest value in a random cyclic triple, up to a reflection at the line $x = 1/2$, is the same as the distribution of the smallest value. This is a consequence of the symmetry properties of cyclic triples (cf. Lemma 2.1 below).

The remainder of this paper is organized as follows. In Section 2 we prove some elementary properties of cyclic $n$-tuples, we present Trybula's characterization of cyclic triples, and we derive a simplified form of this characterization under additional assumptions on the triples. Sections 3–5 contain the proofs of our main results, Theorems 1–3. We conclude in Section 6 with a discussion of some related questions and open problems suggested by our results.
2 Auxiliary results
We begin by deriving some elementary properties of cyclic $n$-tuples. Here, and in the remainder of the paper, we make the convention that subscripts of $n$-tuples are to be interpreted modulo $n$. Thus, for example, the definition (1.8) of a cyclic $n$-tuple $(x_1, \dots, x_n)$ can be written more concisely as

$$P(U_{i+1} > U_i) = x_i \quad (i = 1, \dots, n).$$

Given a real number $t \in [0,1]$, we write

$$\bar{t} = 1 - t. \tag{2.1}$$
Lemma 2.1. Let $(x_1, \dots, x_n) \in [0,1]^n$ be a cyclic $n$-tuple. Then:

(i) Any cyclic permutation of $(x_1, \dots, x_n)$ is cyclic; that is, for any $i \in \{1, 2, \dots, n\}$, the tuple $(x_i, x_{i+1}, \dots, x_{i+n-1})$ (with subscripts interpreted modulo $n$) is cyclic as well.

(ii) The "reverse" tuple $(x_n, x_{n-1}, \dots, x_1)$ is cyclic.

(iii) The "complementary" tuple $(\bar{x}_1, \dots, \bar{x}_n)$ is cyclic.

Proof. Part (i) of the lemma follows immediately from the definition (1.8) of cyclic $n$-tuples. For parts (ii) and (iii) suppose $(x_1, \dots, x_n) \in [0,1]^n$ is a cyclic $n$-tuple with associated random variables $U_1, \dots, U_n$ satisfying (1.8). Setting $U_i^* = -U_{n+1-i}$ we have

$$P(U_{i+1}^* > U_i^*) = P(-U_{n-i} > -U_{n+1-i}) = P(U_{n+1-i} > U_{n-i}) = x_{n-i} \quad (i = 0, 1, \dots, n-1),$$

which shows that the tuple $(x_n, x_{n-1}, \dots, x_1)$ is cyclic. Similarly, the fact that $(\bar{x}_1, \dots, \bar{x}_n)$ is cyclic follows by letting $U_i^* = -U_i$ and observing that

$$P(U_{i+1}^* > U_i^*) = P(-U_{i+1} > -U_i) = P(U_i > U_{i+1}) = 1 - x_i \quad (i = 1, \dots, n).$$

This completes the proof of the lemma.

Lemma 2.2. If $(x, y, z) \in [0,1]^3$ is a cyclic triple, then so is any permutation of $(x, y, z)$.

Proof. Suppose $(x, y, z) \in [0,1]^3$ is cyclic. By part (i) of Lemma 2.1 the cyclic permutations $(y, z, x)$ and $(z, x, y)$ are also cyclic. By part (ii) the reverse triple $(z, y, x)$ is cyclic. Applying part (i) again to the triple $(z, y, x)$, we obtain that $(y, x, z)$ and $(x, z, y)$ are cyclic as well. Hence all permutations of $(x, y, z)$ are cyclic.

The next result contains Trybula's characterization of cyclic triples. We state this characterization in the slightly different, though equivalent, version given by Suck.
Lemma 2.3 (Trybula [21, Theorem 1]; Suck [19, Theorems 2 and 3]). A triple $(x, y, z) \in [0,1]^3$ is cyclic if and only if it satisfies the following inequalities:

$$\min(x + yz, \; y + zx, \; z + xy) \leq 1, \tag{2.2}$$
$$\min(\bar{x} + \bar{y}\bar{z}, \; \bar{y} + \bar{z}\bar{x}, \; \bar{z} + \bar{x}\bar{y}) \leq 1. \tag{2.3}$$

The conditions (2.2) and (2.3) in this characterization are rather unwieldy to work with directly. However, by imposing additional constraints on the variables $x, y, z$, the conditions simplify significantly, as the next lemma shows.
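As a quick illustration (a sketch with exact rational arithmetic, not from the paper), the characterization classifies the examples discussed after Definition 1.1 correctly:

```python
from fractions import Fraction

# Lemma 2.3, with bar(t) = 1 - t as in (2.1).
def is_cyclic(x, y, z):
    def cond(a, b, c):
        return min(a + b * c, b + c * a, c + a * b) <= 1
    return cond(x, y, z) and cond(1 - x, 1 - y, 1 - z)

# The Moon-Moser triple (5/9, 5/9, 5/9) is cyclic (indeed nontransitive).
assert is_cyclic(*[Fraction(5, 9)] * 3)
# (2/3, 2/3, 2/3) is not: 2/3 exceeds pi_3 = (sqrt(5)-1)/2.
assert not is_cyclic(*[Fraction(2, 3)] * 3)
# (1/2, 1/2, 1/2) is cyclic, as noted after Definition 1.1.
assert is_cyclic(*[Fraction(1, 2)] * 3)
print("Lemma 2.3 agrees with the examples")
```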
Lemma 2.4. Let $(x, y, z) \in [0,1]^3$.

(i) If $x \leq y \leq z$, then the triple $(x, y, z)$ is cyclic if and only if it satisfies the following two inequalities:

$$x + yz \leq 1, \tag{2.4}$$
$$\bar{z} + \bar{x}\bar{y} \leq 1. \tag{2.5}$$

(ii) If $x = \min(x, y, z)$ and $y, z \geq 1/2$, then $(x, y, z)$ is cyclic if and only if (2.4) holds.

Proof. We need to show that the conditions (2.2) and (2.3) reduce to (2.4) and (2.5) under the assumptions of part (i), and to (2.4) under the assumptions of part (ii).

Suppose first that $(x, y, z)$ satisfies the conditions of part (i), i.e., that $x \leq y \leq z$. Then $x(1-y) \leq z(1-y)$ and $x(1-z) \leq y(1-z)$, and therefore $x + yz \leq z + xy$ and $x + yz \leq y + zx$. Thus, the minimum on the left of the inequality (2.2) is equal to $x + yz$, and the inequality therefore holds if and only if $x + yz \leq 1$. Similarly, noting that $x \leq y \leq z$ is equivalent to $\bar{z} \leq \bar{y} \leq \bar{x}$, we see that condition (2.3) is equivalent to $\bar{z} + \bar{x}\bar{y} \leq 1$. This proves part (i) of the lemma.

Next, suppose that $y, z \geq 1/2$ and $x = \min(x, y, z)$. Then either $z = \max(x, y, z)$ or $y = \max(x, y, z)$. In the first case we have $x \leq y \leq z$, so part (i) of the lemma applies and shows that the triple $(x, y, z)$ is cyclic if and only if it satisfies (2.4) and (2.5). But since $y, z \geq 1/2$, we have $\bar{y}, \bar{z} \leq 1/2$ and therefore $\bar{z} + \bar{x}\bar{y} \leq (1/2) + 1 \cdot (1/2) = 1$. Hence condition (2.5) holds trivially, so $(x, y, z)$ is cyclic if and only if (2.4) holds. In the case where we have $x \leq z \leq y$, we note that, by Lemma 2.2, $(x, y, z)$ is cyclic if and only if $(x, z, y)$ is cyclic. Applying the above argument to $(x, z, y)$ then yields the same conclusion. This completes the proof of part (ii).

3 Proof of Theorem 1
By (1.9) we have $p_3 = \mathrm{vol}(C_3)$ and $p_3^* = \mathrm{vol}(C_3^*)$, so computing these probabilities amounts to computing the volumes of the regions $C_3$ and $C_3^*$ of cyclic and nontransitive triples. We begin by using the symmetry properties established in Lemmas 2.1 and 2.2 to reduce this computation to one involving simpler regions. Let

$$C_3^{(I)} = \{(x, y, z) \in C_3 : 1/2 < x \leq 1 \text{ and } x \leq y, z \leq 1\}, \tag{3.1}$$
$$C_3^{(II)} = \{(x, y, z) \in C_3 : 0 \leq x < 1/2 \text{ and } 1/2 < y, z \leq 1\}. \tag{3.2}$$

Lemma 3.1. We have

$$\mathrm{vol}(C_3^*) = 3\,\mathrm{vol}(C_3^{(I)}), \tag{3.3}$$
$$\mathrm{vol}(C_3) = 6\,\mathrm{vol}(C_3^{(I)}) + 6\,\mathrm{vol}(C_3^{(II)}). \tag{3.4}$$
Proof. By definition, the set $C_3^*$ of nontransitive triples consists of those cyclic triples $(x, y, z)$ in which all coordinates are $> 1/2$. Since, by Lemma 2.2, the "cyclic triple" property is invariant with respect to taking permutations, the volume of this set is three times the volume of the set of those triples $(x, y, z)$ in $C_3^*$ which satisfy $x = \min(x, y, z)$, i.e., the set $C_3^{(I)}$. This proves (3.3).

To prove (3.4), note first that we may ignore triples $(x, y, z)$ in which one of the coordinates is equal to $1/2$, as these do not contribute to the volume. We classify the remaining triples $(x, y, z)$ in $C_3$ into 8 mutually disjoint classes, according to their signature $\sigma$, defined as

$$\sigma = (\mathrm{sign}(x - 1/2), \mathrm{sign}(y - 1/2), \mathrm{sign}(z - 1/2)),$$

where $\mathrm{sign}(t) = 1$ if $t > 0$ and $\mathrm{sign}(t) = -1$ if $t < 0$. For example, a triple $(x, y, z)$ with $x < 1/2$, $y > 1/2$, $z > 1/2$ has signature $(-1, 1, 1)$. Letting $C_3^{\sigma}$ denote the set of cyclic triples $(x, y, z)$ with signature $\sigma$, we then have

$$\mathrm{vol}(C_3) = \sum_{\sigma = (\pm 1, \pm 1, \pm 1)} \mathrm{vol}(C_3^{\sigma}), \tag{3.5}$$

where the sum is over all 8 possible values of the signature $\sigma$. Now note that the cyclic triples with signature $(1, 1, 1)$ are exactly the nontransitive triples, and the cyclic triples with signature $(-1, 1, 1)$ are exactly those counted in the set $C_3^{(II)}$. Thus we have

$$\mathrm{vol}(C_3^{(1,1,1)}) = \mathrm{vol}(C_3^*) = 3\,\mathrm{vol}(C_3^{(I)}), \tag{3.6}$$
$$\mathrm{vol}(C_3^{(-1,1,1)}) = \mathrm{vol}(C_3^{(II)}). \tag{3.7}$$

Next, observe that if $(x, y, z)$ has signature $\sigma$, then the complementary triple $(\bar{x}, \bar{y}, \bar{z}) = (1-x, 1-y, 1-z)$ has signature $-\sigma$. Since, by Lemma 2.1(iii), a triple $(x, y, z)$ is cyclic if and only if $(\bar{x}, \bar{y}, \bar{z})$ is cyclic, it follows that

$$\mathrm{vol}(C_3^{(-1,-1,-1)}) = \mathrm{vol}(C_3^{(1,1,1)}) = \mathrm{vol}(C_3^*). \tag{3.8}$$

Finally, using Lemma 2.1(iii) along with Lemma 2.2, we see that

$$\begin{aligned} \mathrm{vol}(C_3^{(-1,-1,1)}) &= \mathrm{vol}(C_3^{(1,-1,-1)}) = \mathrm{vol}(C_3^{(-1,1,-1)}) \\ &= \mathrm{vol}(C_3^{(1,1,-1)}) = \mathrm{vol}(C_3^{(1,-1,1)}) = \mathrm{vol}(C_3^{(-1,1,1)}) = \mathrm{vol}(C_3^{(II)}). \end{aligned} \tag{3.9}$$

Combining (3.5)–(3.9) yields (3.4).

Let

$$\omega = \frac{\sqrt{5}-1}{2} = 0.618034\ldots \tag{3.10}$$

and note that $\omega$ is the positive root of the quadratic equation

$$\omega^2 + \omega = 1. \tag{3.11}$$

We remark that $\omega$ is also equal to the number $\pi_3$ defined in (1.5) and (1.6), i.e., $\omega$ is the largest number for which there exists a cyclic triple $(x, y, z)$ with $\min(x, y, z) \geq \omega$. We will, however, not use this fact in our proof.
Lemma 3.2.
(i) A triple (x, y, z ) belongs to the set C(I)3 if and only if it satisfies
(3.12)
12 < x ≤ ωx ≤ y ≤ 1 − xxx ≤ z ≤ 1 − xy
.
(ii) A triple (x, y, z ) belongs to the set C(II )3 if and only if it satisfies
(3.13)
0 ≤ x < 1212 < y ≤ 1 − x
12 < z ≤ 1
or
0 ≤ x < 121 − x < y ≤ 112 < z ≤ 1 − xy
.
Proof. By definition (see (3.1) and (3.2)) the sets C_3^{(I)} and C_3^{(II)} consist of those cyclic triples (x, y, z) ∈ [0, 1]^3 that satisfy 1/2 < x ≤ y, z ≤ 1 and 0 ≤ x < 1/2 < y, z ≤ 1, respectively. By part (ii) of Lemma 2.4, under either of the latter two conditions, a triple (x, y, z) is cyclic if and only if it satisfies x + yz ≤ 1.

Next, note that any triple (x, y, z) ∈ C_3^{(I)} must satisfy x ≤ y and x ≤ z, so the inequality x + yz ≤ 1 can only hold if x + x^2 ≤ 1, i.e., if x ≤ ω, where ω is defined by (3.10) and (3.11). For each x in the range 1/2 < x ≤ ω, the set of pairs (y, z) with x ≤ y, z ≤ 1 satisfying x + yz ≤ 1 is nonempty and consists of exactly those pairs that satisfy x ≤ y ≤ (1 − x)/x and x ≤ z ≤ (1 − x)/y. It follows that a triple (x, y, z) belongs to C_3^{(I)} if and only if it satisfies (3.12). This proves part (i) of the lemma.

For the proof of part (ii), note that C_3^{(II)} is the set of triples (x, y, z) satisfying 0 ≤ x < 1/2, 1/2 < y, z ≤ 1, and x + yz ≤ 1. For any fixed pair (x, y) with 0 ≤ x < 1/2 and 1/2 < y ≤ 1, the set of values z with 1/2 < z ≤ 1 for which x + yz ≤ 1 holds is exactly the interval (1/2, min((1 − x)/y, 1)]. The latter interval is nonempty, and it reduces to (1/2, 1] if y ≤ 1 − x, and to (1/2, (1 − x)/y] if 1 − x < y ≤ 1. The desired characterization (3.13) now follows.
Lemma 3.3. We have

(3.14)  vol(C_3^{(II)}) = 3/16 − (ln 2)/8.
Proof. Using the characterization (3.13) of the set C_3^{(II)} we obtain

vol(C_3^{(II)}) = ∫_0^{1/2} ∫_{1/2}^{1−x} ∫_{1/2}^{1} dz dy dx + ∫_0^{1/2} ∫_{1−x}^{1} ∫_{1/2}^{(1−x)/y} dz dy dx

  = (1/2) ∫_0^{1/2} (1/2 − x) dx + ∫_0^{1/2} ∫_{1−x}^{1} ( (1 − x)/y − 1/2 ) dy dx

  = (1/2) ∫_0^{1/2} (1/2 − 2x) dx − ∫_0^{1/2} (1 − x) ln(1 − x) dx

  = − ∫_0^{1/2} (1 − x) ln(1 − x) dx

  = − [ ((1 − x)^2/4) (1 − 2 ln(1 − x)) ]_0^{1/2}

  = 3/16 − (ln 2)/8.
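The closed-form evaluation can be double-checked numerically (our check); the whole computation reduces to the single integral in the second-to-last line:

```python
# Midpoint-rule check (ours) that -∫_0^{1/2} (1-x) ln(1-x) dx = 3/16 - (ln 2)/8.
import math

N = 200_000
h = 0.5 / N
integral = -h * sum(
    (1 - (k + 0.5) * h) * math.log(1 - (k + 0.5) * h) for k in range(N)
)
closed_form = 3 / 16 - math.log(2) / 8
print(abs(integral - closed_form) < 1e-9)
```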
Lemma 3.4. We have

(3.15)  vol(C_3^{(I)}) = (ln 2)/8 − ln(2ω) + 11ω/12 − 7/16,
where ω = (√5 − 1)/2 is defined as in (3.10).

Proof. Using the characterization (3.12) of the set C_3^{(I)} we obtain, on noting that 1 > (1 − x)/x ≥ x for 1/2 < x ≤ ω (since ω is the positive root of ω^2 = 1 − ω) and 1 > (1 − x)/y ≥ x for x ≤ y ≤ (1 − x)/x,

(3.16)  vol(C_3^{(I)}) = ∫_{1/2}^{ω} ∫_x^{(1−x)/x} ∫_x^{(1−x)/y} dz dy dx

  = ∫_{1/2}^{ω} ∫_x^{(1−x)/x} ( (1 − x)/y − x ) dy dx

  = ∫_{1/2}^{ω} (1 − x) ln((1 − x)/x^2) dx − ∫_{1/2}^{ω} x ( (1 − x)/x − x ) dx

  = I_1 − I_2,
say. The integrals I_1 and I_2 can be evaluated as follows, using the relations (see (3.11)) ω^2 = 1 − ω and ω^3 = ω − ω^2 = 2ω − 1:

(3.17)  I_1 = [ −((1 − x)^2/2) ln((1 − x)/x^2) ]_{1/2}^{ω} + ∫_{1/2}^{ω} ((1 − x)^2/2) ( −1/(1 − x) − 2/x ) dx

  = −((1 − ω)^2/2) ln((1 − ω)/ω^2) + (ln 2)/8 + ∫_{1/2}^{ω} ( −1/x + 3/2 − x/2 ) dx

  = (ln 2)/8 − ln(2ω) + 3(ω − 1/2)/2 − (ω^2 − 1/4)/4

  = (ln 2)/8 − ln(2ω) + 7ω/4 − 15/16

(note that the term involving ln((1 − ω)/ω^2) vanishes, since ω^2 = 1 − ω), and

(3.18)  I_2 = ∫_{1/2}^{ω} (1 − x − x^2) dx = (ω − 1/2) − (ω^2 − 1/4)/2 − (ω^3 − 1/8)/3 = 5ω/6 − 1/2.

Substituting (3.17) and (3.18) into (3.16) yields (3.15).
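Again the result can be cross-checked numerically (our check, not the paper's), by integrating the one-variable integrand I_1 − I_2 from (3.16) directly:

```python
# Midpoint-rule check (ours) of Lemma 3.4: integrate
#   g(x) = (1-x) ln((1-x)/x^2) - (1 - x - x^2)
# over [1/2, omega] and compare with the closed form (3.15).
import math

omega = (math.sqrt(5) - 1) / 2

def g(x):
    return (1 - x) * math.log((1 - x) / x**2) - (1 - x - x**2)

N = 200_000
a, b = 0.5, omega
h = (b - a) / N
numeric = h * sum(g(a + (k + 0.5) * h) for k in range(N))
closed = math.log(2) / 8 - math.log(2 * omega) + 11 * omega / 12 - 7 / 16
print(abs(numeric - closed) < 1e-9)
```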
Proof of Theorem 1. Combining (1.9) and Lemmas 3.1, 3.3, and 3.4, we obtain

p_3^* = vol(C_3^*) = 3 vol(C_3^{(I)})
  = 3 ( (ln 2)/8 − ln(2ω) + 11ω/12 − 7/16 )
  = 3 ( (ln 2)/8 − ln(√5 − 1) + (11√5 − 11)/24 − 7/16 )
  = (3 ln 2)/8 − 3 ln(√5 − 1) + 11√5/8 − 43/16 = 0.01121759...

and

p_3 = vol(C_3) = 6 vol(C_3^{(I)}) + 6 vol(C_3^{(II)})
  = 6 ( (ln 2)/8 − ln(√5 − 1) + (11√5 − 11)/24 − 7/16 ) + 6 ( 3/16 − (ln 2)/8 )
  = (11√5 − 17)/4 − 6 ln(√5 − 1) = 0.6275748...

These are the desired formulas (1.13) and (1.12).
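These constants can be cross-checked by simulation (our check). Deciding whether a sorted triple is cyclic uses the two inequalities that appear as (5.4) in Section 5:

```python
# Monte Carlo check (ours) of the value p_3 in Theorem 1. A triple with
# sorted coordinates x <= y <= z is cyclic iff x + y*z <= 1 and
# (1 - z) + (1 - x)*(1 - y) <= 1 (the characterization (5.4)).
import math
import random

random.seed(2)
p3_exact = (11 * math.sqrt(5) - 17) / 4 - 6 * math.log(math.sqrt(5) - 1)

trials = 500_000
hits = 0
for _ in range(trials):
    x, y, z = sorted(random.random() for _ in range(3))
    if x + y * z <= 1 and (1 - z) + (1 - x) * (1 - y) <= 1:
        hits += 1
estimate = hits / trials
print(round(p3_exact, 7), round(estimate, 3))
```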
4 Proof of Theorem 2
Proof of Theorem 2, upper bound. For the upper bound in (1.14), note that, by (1.7), a cyclic tuple (x_1, ..., x_n) ∈ [0, 1]^n must satisfy

min(x_1, ..., x_n) ≤ π_n = 1 − 1/(4 cos^2(π/(n + 2))) < 3/4.

Therefore any tuple (x_1, ..., x_n) ∈ [3/4, 1]^n is not cyclic. Moreover, since, by Lemma 2.1(iii), a tuple (x_1, ..., x_n) is cyclic if and only if the complementary tuple (1 − x_1, ..., 1 − x_n) is cyclic, any tuple (x_1, ..., x_n) satisfying x_i ∈ [0, 1/4] for all i is also not cyclic. Thus the set C_n of cyclic tuples lies in the complement of the sets [3/4, 1]^n and [0, 1/4]^n. Hence we have

p_n = vol(C_n) ≤ 1 − vol([3/4, 1]^n) − vol([0, 1/4]^n) = 1 − 2 (1/4)^n,

which is the desired upper bound.

We next turn to the lower bound in (1.14). The argument is based on the following lemma, which gives a sufficient condition for an n-tuple to be cyclic. Recall our convention that subscripts in n-tuples (x_1, ..., x_n) are to be interpreted modulo n.
Lemma 4.1. Let (x_1, ..., x_n) ∈ [0, 1]^n, and suppose that there exists an index i ∈ {1, ..., n} such that

(4.1)  x_i + x_{i+1} ≥ 1 and x_{i+2} + x_{i+3} ≤ 1.

Then (x_1, ..., x_n) is cyclic.

Proof. Since, by Lemma 2.1(i), a cyclic permutation of a cyclic tuple is also cyclic, we may assume without loss of generality that the assumption (4.1) of the lemma holds with i = n − 2, i.e., that

(4.2)  x_{n−2} + x_{n−1} ≥ 1 and x_n + x_1 ≤ 1.
Let (x_1, ..., x_n) ∈ [0, 1]^n be an n-tuple satisfying (4.2). Define independent random variables U_1, ..., U_n with values in {−n + 1, −n + 2, ..., n + 1, n + 2} as follows:

(4.3)
  P(U_{n−1} = −n + 1) = 1 − x_{n−2},
  P(U_i = −i) = 1 − x_{i−1}  (3 ≤ i ≤ n − 2),
  P(U_2 = −2) = 1 − x_1/(1 − x_n),
  P(U_1 = 0) = 1 − x_n,
  P(U_2 = 2) = x_1/(1 − x_n),
  P(U_i = i) = x_{i−1}  (3 ≤ i ≤ n − 2),
  P(U_{n−1} = n − 1) = x_{n−1} + x_{n−2} − 1,
  P(U_n = n) = 1,
  P(U_1 = n + 1) = x_n,
  P(U_{n−1} = n + 2) = 1 − x_{n−1}.

The values of the random variables U_i, listed in increasing order together with the variable attaining each value, are:

  value:     −(n−1)   ...   −i   ...   −2    0    2   ...   i   ...   n−1    n    n+1    n+2
  variable:  U_{n−1}  ...  U_i  ...  U_2   U_1  U_2  ...  U_i  ...  U_{n−1}  U_n  U_1   U_{n−1}

The idea behind this construction is the following: if we let U_i be random variables with values ±i, then U_{i+1} > U_i holds if and only if U_{i+1} = i + 1. Thus, if we require that P(U_{i+1} = i + 1) = x_i (and hence P(U_{i+1} = −i − 1) = 1 − x_i), then U_{i+1} > U_i holds with the desired probability x_i. This is indeed how the variables U_i in (4.3) are defined for the "interior" indices i = 3, ..., n − 2, so the probabilities P(U_{i+1} > U_i) have the desired value x_i if 2 ≤ i ≤ n − 3. For the remaining indices i = 1, 2, n − 1, n, the definition of U_i has to be adjusted to ensure that the "wrap-around" probability P(U_1 > U_n) also has the desired value. The following calculations show that if U_1, U_2, U_{n−1}, U_n are defined as in (4.3), then the remaining probabilities P(U_{i+1} > U_i) also have the desired values:

  P(U_2 > U_1) = P(U_2 = 2) P(U_1 = 0) = (x_1/(1 − x_n)) · (1 − x_n) = x_1,
  P(U_n > U_{n−1}) = 1 − P(U_{n−1} = n + 2) = 1 − (1 − x_{n−1}) = x_{n−1},
  P(U_{n−1} > U_{n−2}) = 1 − P(U_{n−1} = −n + 1) = 1 − (1 − x_{n−2}) = x_{n−2},
  P(U_1 > U_n) = P(U_1 = n + 1) = x_n.

The assumption (4.2) ensures that the numbers x_1/(1 − x_n) and x_{n−1} + x_{n−2} − 1 arising in the definition of U_2 and U_{n−1} are contained in the interval [0, 1] and thus can represent probabilities. Thus the tuple (x_1, ..., x_n) is indeed cyclic.
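The construction (4.3) can be verified exactly for a concrete tuple (our check, with an arbitrarily chosen tuple satisfying (4.2)):

```python
# Exact check (ours) of the construction (4.3) for n = 5 and a tuple
# satisfying (4.2). All probabilities P(U_{i+1} > U_i) are computed by
# enumerating the finite supports.
from itertools import product

x = [0.3, 0.6, 0.7, 0.8, 0.5]            # x_1, ..., x_5 (0-indexed below)
n = len(x)
assert x[n - 3] + x[n - 2] >= 1 and x[n - 1] + x[0] <= 1   # condition (4.2)

U = {}                                    # distribution of U_i: {value: prob}
U[1] = {0: 1 - x[n - 1], n + 1: x[n - 1]}
U[2] = {-2: 1 - x[0] / (1 - x[n - 1]), 2: x[0] / (1 - x[n - 1])}
for i in range(3, n - 1):                 # "interior" indices 3 <= i <= n-2
    U[i] = {-i: 1 - x[i - 2], i: x[i - 2]}
U[n - 1] = {-(n - 1): 1 - x[n - 3],
            n - 1: x[n - 2] + x[n - 3] - 1,
            n + 2: 1 - x[n - 2]}
U[n] = {n: 1.0}

def prob_greater(A, B):
    """P(A > B) for independent finite-support distributions."""
    return sum(pa * pb for (a, pa), (b, pb) in product(A.items(), B.items()) if a > b)

# P(U_{i+1} > U_i) for i = 1, ..., n (indices mod n) should equal x_i.
probs = [prob_greater(U[i % n + 1], U[(i - 1) % n + 1]) for i in range(1, n + 1)]
ok = all(abs(p - xi) < 1e-12 for p, xi in zip(probs, x))
print(ok)
```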
Lemma 4.2. Suppose (x_1, ..., x_n) ∈ [0, 1]^n satisfies neither of the conditions

(4.4)  x_i + x_{i+1} < 1  (i = 1, ..., n)

and

(4.5)  x_i + x_{i+1} > 1  (i = 1, ..., n).

Then (x_1, ..., x_n) is cyclic.

Proof. Let s_i = x_i + x_{i+1}. By Lemma 4.1, it suffices to show that if neither (4.4) nor (4.5) holds, then there exists an index i ∈ {1, ..., n} such that

(4.6)  s_i ≥ 1 and s_{i+2} ≤ 1.
We split the argument into two cases based on the parity of n.

If n is odd, consider the sequence of n numbers s_1, s_3, ..., s_n, s_2, s_4, ..., s_{n−1}. If there is no i for which (4.6) holds, then the numbers in this sequence must be either all strictly greater than 1 or all strictly less than 1. But since this sequence is a permutation of s_1, s_2, ..., s_n, it then follows that either (4.4) or (4.5) holds, contradicting the assumption of the lemma.

If n is even, consider the two sequences A = {s_1, s_3, ..., s_{n−1}} and B = {s_2, s_4, ..., s_n}. If there is no i for which (4.6) holds, we conclude as before that, within each of these two sequences, either all elements are greater than 1 or all elements are less than 1. If the elements of both sequences are all greater than 1 or all less than 1, then (4.4) or (4.5) follows, and we again have a contradiction. In the remaining case the elements of one sequence are all greater than 1 and those of the other sequence are all less than 1. Since each sequence has exactly n/2 elements, one of the two sums ∑_{a∈A} a and ∑_{b∈B} b must be strictly greater than n/2, while the other sum must be strictly less than n/2. On the other hand, the identity

∑_{a∈A} a = ∑_{j=1}^{n/2} (x_{2j−1} + x_{2j}) = ∑_{i=1}^{n} x_i = ∑_{j=1}^{n/2} (x_{2j} + x_{2j+1}) = ∑_{b∈B} b

shows that the two sums ∑_{a∈A} a and ∑_{b∈B} b are in fact equal. Thus, this case cannot occur and the proof of the lemma is complete.

Define the sets
(4.7)  D_n^{(I)} = {(x_1, ..., x_n) ∈ [0, 1]^n : (x_1, ..., x_n) satisfies (4.4)},
(4.8)  D_n^{(II)} = {(x_1, ..., x_n) ∈ [0, 1]^n : (x_1, ..., x_n) satisfies (4.5)},
(4.9)  D_n^* = {(x_1, ..., x_n) ∈ D_n^{(I)} : x_1 = min(x_1, ..., x_n)}.
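The index-chasing argument in the proof of Lemma 4.2 can be spot-checked by sampling (our check, independent of the paper):

```python
# Randomized check (ours) of the core claim in Lemma 4.2: if a tuple violates
# both (4.4) and (4.5), then some i has s_i >= 1 and s_{i+2} <= 1, where
# s_i = x_i + x_{i+1} with indices mod n.
import random

random.seed(3)
ok = True
for _ in range(20_000):
    n = random.randint(3, 9)
    xs = [random.random() for _ in range(n)]
    s = [xs[i] + xs[(i + 1) % n] for i in range(n)]
    if all(v < 1 for v in s) or all(v > 1 for v in s):
        continue  # (4.4) or (4.5) holds; the lemma makes no claim here
    if not any(s[i] >= 1 and s[(i + 2) % n] <= 1 for i in range(n)):
        ok = False
        break
print(ok)
```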
Lemma 4.3. We have

(4.10)  vol(D_n^{(I)}) = vol(D_n^{(II)}),
(4.11)  vol(D_n^{(I)}) = n vol(D_n^*),
(4.12)  vol(C_n) ≥ 1 − 2n vol(D_n^*).
Proof. First note that an n-tuple (x_1, ..., x_n) satisfies condition (4.5) if and only if the complementary n-tuple (1 − x_1, ..., 1 − x_n) satisfies (4.4). The transformation (x_1, ..., x_n) → (1 − x_1, ..., 1 − x_n) then shows that the regions D_n^{(I)} and D_n^{(II)} have the same volume. This yields (4.10).

For the proof of (4.11), note that if a tuple (x_1, ..., x_n) satisfies condition (4.4), then so does any cyclic permutation of this tuple. Therefore the set D_n^{(I)} is invariant with respect to taking cyclic permutations. It follows that the volume of D_n^{(I)} is n times that of D_n^*, the set consisting of those tuples in D_n^{(I)} with smallest component x_1. This proves (4.11).

Finally, (4.12) follows on noting that, by Lemma 4.2, the set C_n of cyclic n-tuples contains the complement of the set D_n^{(I)} ∪ D_n^{(II)}. Therefore,

vol(C_n) ≥ 1 − vol(D_n^{(I)} ∪ D_n^{(II)}) ≥ 1 − vol(D_n^{(I)}) − vol(D_n^{(II)}) = 1 − 2 vol(D_n^{(I)}) = 1 − 2n vol(D_n^*),

where in the last step we have used (4.10) and (4.11).

To obtain the desired lower bound for the volume of C_n, it now remains to obtain an appropriate upper bound for the volume of the region D_n^*. The following lemma expresses this volume in terms of a combinatorial quantity counting alternating permutations. Here a permutation of a set of n distinct real numbers x_1, ..., x_n is called up-down alternating if it satisfies x_1 < x_2 > x_3 < ..., i.e., if x_{i+1} − x_i is positive for odd i and negative for even i; see André [1] for more about such permutations.

Lemma 4.4. We have
(4.13)  vol(D_n^*) = A_{n−1} / ((2n) (n − 1)!),

where A_{n−1} is the number of up-down alternating permutations of length n − 1.

Proof. By definition, D_n^* is the set of n-tuples (x_1, ..., x_n) ∈ [0, 1]^n satisfying the inequalities (4.4) and in addition x_1 = min(x_1, ..., x_n). Under the latter condition the last inequality in (4.4), x_n + x_1 < 1, is implied by the second-to-last inequality, x_{n−1} + x_n < 1, and thus can be omitted. Moreover, the inequalities (4.4) can only hold if 0 ≤ x_1 < 1/2. Thus, D_n^* is the set of tuples (x_1, ..., x_n) satisfying

(4.14)  0 ≤ x_1 < 1/2,  x_1 ≤ x_2 < 1 − x_1,  ...,  x_1 ≤ x_n < 1 − x_{n−1}.
Hence we have

(4.15)  vol(D_n^*) = ∫_0^{1/2} ∫_{x_1}^{1−x_1} ∫_{x_1}^{1−x_2} ··· ∫_{x_1}^{1−x_{n−1}} dx_n ··· dx_3 dx_2 dx_1.

To evaluate the latter integral, we apply the change of variables

(4.16)  y_0 = x_1,  y_i = (x_{i+1} − x_1)/(1 − 2x_1)  (i = 1, ..., n − 1).

It is easily checked that the transformation (4.16) has Jacobian determinant ±(1 − 2y_0)^{n−1} and maps the region (4.14) to the set of n-tuples (y_0, ..., y_{n−1}) satisfying

(4.17)  0 ≤ y_0 < 1/2,  0 ≤ y_1 < 1,  0 ≤ y_i < 1 − y_{i−1}  (i = 2, ..., n − 1).

It follows that

(4.18)  vol(D_n^*) = ∫_0^{1/2} (1 − 2y_0)^{n−1} dy_0 · ∫_0^1 ∫_0^{1−y_1} ··· ∫_0^{1−y_{n−2}} dy_{n−1} ··· dy_2 dy_1
  = (1/(2n)) ∫_0^1 ∫_0^{1−y_1} ··· ∫_0^{1−y_{n−2}} dy_{n−1} ··· dy_2 dy_1 = (1/(2n)) vol(E_{n−1}),

where

E_{n−1} = {(y_1, ..., y_{n−1}) ∈ [0, 1]^{n−1} : 0 ≤ y_1 < 1, 0 ≤ y_i < 1 − y_{i−1} (i = 2, ..., n − 1)}.
Now set, for i = 1, ..., n − 1,

(4.19)  u_i = y_i if i is odd,  u_i = 1 − y_i if i is even.

The transformation (4.19) has Jacobian determinant ±1 and thus is volume-preserving. Moreover, noting that the condition y_i < 1 − y_{i−1} is equivalent to u_i < u_{i−1} when i is odd, and to u_{i−1} < u_i when i is even, we see that this transformation maps the region E_{n−1} to the region

E*_{n−1} = {(u_1, ..., u_{n−1}) ∈ [0, 1]^{n−1} : u_1 < u_2 > u_3 < ...}.

The set E*_{n−1} is, up to a set of volume 0, the set of tuples in the (n − 1)-dimensional unit cube whose coordinates form an up-down alternating permutation of length n − 1. Since there are (n − 1)! permutations of the coordinates and, by symmetry, each such permutation contributes an amount 1/(n − 1)! to the volume of the unit cube, it follows that

(4.20)  vol(E_{n−1}) = vol(E*_{n−1}) = A_{n−1}/(n − 1)!,

where A_{n−1} is the number of up-down alternating permutations of length n − 1. Combining (4.18) with (4.20) we obtain

vol(D_n^*) = (1/(2n)) vol(E_{n−1}) = (1/(2n)) vol(E*_{n−1}) = A_{n−1}/((2n)(n − 1)!).

This is the desired formula (4.13).
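Formula (4.13) can be tested by Monte Carlo for small n (our check); the numbers A_k are generated with the standard boustrophedon recurrence for the zigzag numbers:

```python
# Monte Carlo check (ours, not from the paper) of formula (4.13) for small n.
# The zigzag numbers A_k (counting up-down alternating permutations) are
# generated with the boustrophedon (Seidel) recurrence.
import math
import random

random.seed(4)

def zigzag(kmax):
    """Return [A_0, ..., A_kmax] via the boustrophedon triangle."""
    A = [1]
    row = [1]
    for _ in range(kmax):
        prev = row[::-1]
        row = [0]
        for v in prev:
            row.append(row[-1] + v)
        A.append(row[-1])
    return A

A = zigzag(8)  # [1, 1, 1, 2, 5, 16, 61, 272, 1385]

results = {}
for n in (3, 4, 5):
    exact = A[n - 1] / ((2 * n) * math.factorial(n - 1))
    trials = 300_000
    hits = 0
    for _ in range(trials):
        xs = [random.random() for _ in range(n)]
        # Membership in D_n^*: x_1 is minimal and (4.4) holds.
        if xs[0] == min(xs) and all(xs[i] + xs[(i + 1) % n] < 1 for i in range(n)):
            hits += 1
    results[n] = (exact, hits / trials)
print(results)
```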
Lemma 4.5. The number A_n of alternating permutations satisfies

(4.21)  A_n/n! ≤ 3 (2/π)^{n+1}  (n ≥ 1).

Proof. André [1, p. 23] showed that, for any integer m ≥ 1,

(4.22)  A_{2m}/(2m)! = 2 (2/π)^{2m+1} ( 1 − 1/3^{2m+1} + 1/5^{2m+1} − 1/7^{2m+1} + ... ),
(4.23)  A_{2m−1}/(2m − 1)! = 2 (2/π)^{2m} ( 1 + 1/3^{2m} + 1/5^{2m} + 1/7^{2m} + ... ).

The desired bound (4.21) follows as the series in parentheses in (4.22) is bounded above by 1, while the series in (4.23) is at most

1 + ∑_{n=3}^{∞} 1/n^2 = −1/4 + π^2/6 < 3/2.
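The bound (4.21) is easy to confirm against the first few zigzag numbers A_1, ..., A_9 = 1, 1, 2, 5, 16, 61, 272, 1385, 7936 (our check):

```python
# Check (ours) of the bound (4.21): A_n/n! <= 3 (2/pi)^(n+1) for n = 1, ..., 9.
import math

A = {1: 1, 2: 1, 3: 2, 4: 5, 5: 16, 6: 61, 7: 272, 8: 1385, 9: 7936}
ok = all(A[n] / math.factorial(n) <= 3 * (2 / math.pi) ** (n + 1) for n in A)
print(ok)
```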
Proof of Theorem 2, lower bound. Combining Lemmas 4.3, 4.4, and 4.5 we obtain

p_n = vol(C_n) ≥ 1 − 2n vol(D_n^*) = 1 − 2n · A_{n−1}/((2n)(n − 1)!) = 1 − A_{n−1}/(n − 1)! ≥ 1 − 3 (2/π)^n,

which is the desired lower bound for p_n.

5 Proof of Theorem 3
Let

(5.1)  C_3^o = {(x, y, z) ∈ [0, 1]^3 : (x, y, z) is cyclic and x ≤ y ≤ z},

i.e., C_3^o is the set of triples in C_3 with nondecreasing coordinates. The following lemma characterizes the set C_3^o in terms of inequalities on x, y, and z. Recall (see (3.10) and (3.11)) that ω = (√5 − 1)/2 is the positive root of ω^2 + ω = 1.
Lemma 5.1. We have:

(i) A triple (x, y, z) belongs to the set C_3^o if and only if it satisfies

(5.2)  0 ≤ x ≤ ω,  x ≤ y ≤ √(1 − x),  max(y, (1 − x)(1 − y)) ≤ z ≤ min(1, (1 − x)/y).

(ii) A triple (x, y, z) belongs to the set C_3^o if and only if it satisfies

(5.3)  0 ≤ y ≤ 1,  0 ≤ x ≤ min(y, 1 − y^2),  max(y, (1 − x)(1 − y)) ≤ z ≤ min(1, (1 − x)/y).

Moreover, the intervals for the variables x, y, and z in (5.2) and (5.3) are always nonempty, i.e., the upper bounds on these variables are always greater than or equal to the corresponding lower bounds.

Proof. By Lemma 2.4(i), a triple (x, y, z) ∈ [0, 1]^3 with x ≤ y ≤ z is cyclic if and only if it satisfies the inequalities

(5.4)  x + yz ≤ 1 and (1 − z) + (1 − y)(1 − x) ≤ 1.
Hence C_3^o is the set of triples (x, y, z) ∈ [0, 1]^3 satisfying x ≤ y ≤ z and (5.4). Consider now a triple (x, y, z) ∈ [0, 1]^3 satisfying these conditions. Then x + x^2 ≤ x + yz ≤ 1, and hence x ≤ ω, with ω = (√5 − 1)/2 defined as above. Thus, we must have

(5.5)  0 ≤ x ≤ ω.

Next, note that x + y^2 ≤ x + yz ≤ 1 and hence y ≤ √(1 − x). Since we also have x ≤ y ≤ z, it follows that

(5.6)  x ≤ y ≤ √(1 − x).

Note that since x ≤ ω, with ω being the positive root of ω^2 + ω = 1, we have x^2 + x ≤ 1 and hence x ≤ √(1 − x). Thus, the interval for y in (5.6) is nonempty.

Next, we have trivially

(5.7)  0 ≤ y ≤ 1,

while the inequalities x ≤ y ≤ z and x + y^2 ≤ x + yz ≤ 1 imply

(5.8)  0 ≤ x ≤ min(y, 1 − y^2).

Since 0 ≤ y ≤ 1, the interval in (5.8) is clearly nonempty. Finally, the inequalities (5.4) imply z ≤ (1 − x)/y and z ≥ (1 − x)(1 − y), which combined with 0 ≤ z ≤ 1 and x ≤ y ≤ z yields

(5.9)  max(y, (1 − x)(1 − y)) ≤ z ≤ min(1, (1 − x)/y).

Again the interval for z here is nonempty since, by (5.6), (1 − x)/y ≥ y, while we also have (1 − x)/y ≥ 1 − x ≥ (1 − x)(1 − y), 1 ≥ y, and 1 ≥ (1 − x)(1 − y), which follow from 0 ≤ x, y ≤ 1.

Note that the inequalities (5.5), (5.6), and (5.9) are exactly those in the desired characterization (5.2) of C_3^o, while (5.7), (5.8), and (5.9) are those in (5.3). Thus, we have shown that any triple (x, y, z) ∈ C_3^o must satisfy (5.2) and (5.3) and that the intervals for x, y, and z in (5.2) and (5.3) are all nontrivial.

Conversely, assume that (x, y, z) satisfies (5.2) or (5.3). Then 0 ≤ x ≤ y ≤ z ≤ 1, and z must satisfy (5.9). The upper bound for z in (5.9) implies z ≤ (1 − x)/y and hence x + yz ≤ 1, i.e., the first inequality in (5.4). The lower bound for z in (5.9) implies (1 − x)(1 − y) ≤ z and hence (1 − z) + (1 − x)(1 − y) ≤ 1, i.e., the second inequality in (5.4). Thus, (x, y, z) must be an element of C_3^o. This completes the proof of the lemma.
Proof of Theorem 3. Since, by Lemma 2.2, the "cyclic triple" property is invariant with respect to taking permutations, we have

(5.10)  vol(C_3^o) = (1/3!) vol(C_3) = (1/6) p_3.

Moreover, the distribution of the order statistics (X*_1, X*_2, X*_3) of a random triple in C_3 is the same as the distribution of a random vector (X, Y, Z) that is distributed uniformly over the region C_3^o. It follows that the density functions f_1, f_2, and f_3 in Theorem 3 are the marginal densities of the latter vector and thus can be obtained by integrating over appropriate slices of the region C_3^o and dividing by the volume of this region. That is, we have

(5.11)  f_1(x) = (1/vol(C_3^o)) ∫∫_{A_1(x)} dz dy = (6/p_3) ∫∫_{A_1(x)} dz dy,

where

(5.12)  A_1(x) = {(y, z) ∈ [0, 1]^2 : (x, y, z) ∈ C_3^o},

along with analogous formulas for f_2 and f_3.

Using the first characterization of C_3^o in Lemma 5.1, we see that f_1(x) is supported on the interval [0, ω], and that for each x with 0 ≤ x ≤ ω we have
f_1(x) = (6/p_3) ∫_x^{√(1−x)} ∫_{max(y, (1−x)(1−y))}^{min(1, (1−x)/y)} dz dy

  = (6/p_3) ∫_x^{√(1−x)} ( min(1, (1 − x)/y) − max(y, (1 − x)(1 − y)) ) dy

  = (6/p_3) · (1/2) ( x^3 − 3x^2 + (1 − x)/(2 − x) − (1 − x) ln(1 − x) )   if 0 ≤ x ≤ 1 − ω,
    (6/p_3) · (1/2) ( x^2 − 3x + 1 − (1 − x) ln(1 − x) )                   if 1 − ω ≤ x ≤ 1/2,
    (6/p_3) · (1/2) ( x^2 + x − 1 + (1 − x) ln(1 − x) − 2(1 − x) ln x )    if 1/2 ≤ x ≤ ω.

This yields the desired formula (1.15) on noting that ω = (√5 − 1)/2 and

(5.13)  1 − ω = 1 − (√5 − 1)/2 = (3 − √5)/2.
Similarly, using the second characterization of Lemma 5.1, we obtain

f_2(y) = (6/p_3) ∫_0^{min(y, 1−y^2)} ∫_{max(y, (1−x)(1−y))}^{min(1, (1−x)/y)} dz dx

  = (6/p_3) ∫_0^{min(y, 1−y^2)} ( min(1, (1 − x)/y) − max(y, (1 − x)(1 − y)) ) dx

  = (6/p_3) · (1/2) (3y^2 − y^3)            if 0 ≤ y ≤ 1 − ω,
    (6/p_3) · ( 3y − y^2 − 1/(2(1 − y)) )   if 1 − ω ≤ y ≤ 1/2,
    (6/p_3) · ( 2 − y − y^2 − 1/(2y) )      if 1/2 ≤ y ≤ ω,
    (6/p_3) · (1/2) (y^3 − 3y + 2)          if ω ≤ y ≤ 1

  = (3/p_3) (3y^2 − y^3)                    if 0 ≤ y ≤ 1 − ω,
    (6/p_3) ( 3y − y^2 − 1/(2(1 − y)) )     if 1 − ω < y ≤ 1/2,
    f_2(1 − y)                              if 1/2 < y ≤ 1.

This yields the desired formula (1.16) for f_2(y).

Finally, since, by Lemma 2.1(iii), a triple (x_1, x_2, x_3) is cyclic if and only if (1 − x_1, 1 − x_2, 1 − x_3) is cyclic, the distribution of X*_3, the largest element in a cyclic triple, is the same as that of 1 − X*_1. Thus, we have f_3(x) = f_1(1 − x). This completes the proof of Theorem 3.
6 Concluding Remarks
In this section we mention some related questions and conjectures suggested by our results.

Precise asymptotic behavior of p_n. Theorem 2 implies that

e^{−(1+o(1)) c_1 n} ≤ 1 − p_n ≤ e^{−(1+o(1)) c_2 n}  (n → ∞)

holds with c_1 = ln 4 and c_2 = ln(π/2). It seems reasonable to expect that there exists a single constant c, with c_2 ≤ c ≤ c_1, such that

1 − p_n = e^{−(1+o(1)) c n}  (n → ∞),

i.e., that the limit

c = lim_{n→∞} (− ln(1 − p_n))/n

exists. If true, the value of c must lie between the constants c_2 = ln(π/2) = 0.451... and c_1 = ln 4 = 1.386...
Prescribing all pairwise probabilities P(U_i > U_j). By definition, a cyclic n-tuple is an n-tuple of real numbers in [0, 1] that can represent the n probabilities P(U_{i+1} > U_i), i = 1, ..., n, with a suitable choice of independent random variables U_i, i = 1, ..., n, satisfying P(U_i = U_j) = 0 for i ≠ j.

As a natural extension of this concept, one can ask which arrays x_{ij}, i, j = 1, ..., n, can represent probabilities of the form

(6.1)  x_{ij} = P(U_j > U_i)  (i, j = 1, ..., n),

under the same assumptions on the random variables U_i. This problem arises in the theory of social choice, where it is known under the term utility representations; see Suck [19] for background and references.

Obviously, in order for (6.1) to hold, it is necessary that x_{ii} = 0 for all i. Moreover, the assumption that P(U_i = U_j) = 0 for i ≠ j implies x_{ij} = P(U_j > U_i) = 1 − P(U_i > U_j) = 1 − x_{ji} for i > j. Thus the array x_{ij} is determined by the binomial(n, 2) values x_{ij}, 1 ≤ i < j ≤ n. Considering these values as tuples in the binomial(n, 2)-dimensional unit cube, one can then ask for the probability that a random tuple in this cube has a representation of the form (6.1).

When n = 3, the values x_{ij} are determined by x_{12} = P(U_2 > U_1), x_{23} = P(U_3 > U_2), and x_{13} = P(U_3 > U_1) = 1 − P(U_1 > U_3). Thus we see that a representation (6.1) is possible if and only if the triple (x_{12}, x_{23}, 1 − x_{13}) is cyclic, so the problem is equivalent to the problem considered in Theorem 1. In the general case, however, the two problems are different. Indeed, the question of characterizing arrays x_{ij} that have representations of the form (6.1) seems to be an extraordinarily hard problem that remains largely unsolved; we refer to Suck [19] for further discussion and some partial results.
Representations with dependent random variables U_i. In our definition of cyclic tuples we assumed the random variables U_i to be independent. This is a natural assumption that is made in most of the mathematical literature on nontransitivity phenomena. In the context of dice rolls the assumption reflects the fact that if n dice are rolled, then the values showing on the faces of these dice are indeed independent.

However, there also exist well-known nontransitivity paradoxes, especially in voting theory and social choice theory, in which the underlying random variables are dependent. Thus, it would be of interest to study the analogs of cyclic and nontransitive tuples if the independence assumption on the U_i is dropped. We refer to Marengo et al. [15] for some recent results in this direction, and to Suck [19] for an in-depth discussion of the differences between dependent and independent representations of the form (6.1).
References

[1] Désiré André, Sur les permutations alternées, J. Math. Pures Appl. 7 (1881), 167–184.
[2] Levi Angel and Matt Davis, Properties of sets of nontransitive dice with few sides, Involve 11 (2018), no. 4, 643–659. MR 3778917
[3] I. I. Bogdanov, Nontransitive roulette, Mat. Prosveshch. 3 (2010), 240–255.
[4] Adrian Bondy, Jian Shen, Stéphan Thomassé, and Carsten Thomassen, Density conditions for triangles in multipartite graphs, Combinatorica 26 (2006), no. 2, 121–131. MR 2223630
[5] Joe Buhler, Ron Graham, and Al Hales, Maximally nontransitive dice, Amer. Math. Monthly 125 (2018), no. 5, 387–399. MR 3785874
[6] Li-Chien Chang, On the maximin probability of cyclic random inequalities, Scientia Sinica 10 (1961), no. 5, 499.
[7] Brian Conrey, James Gabbard, Katie Grant, Andrew Liu, and Kent E. Morrison, Intransitive dice, Math. Mag. 89 (2016), no. 2, 133–143. MR 3510988
[8] Martin Gardner, The paradox of nontransitive dice and the elusive principle of indifference, Scientific American 223 (1970), no. 6, 110–115.
[9] E. Gilson, C. Cooley, W. Ella, M. Follett, and L. Traldi, The Efron dice voting system, Soc. Choice Welf. 39 (2012), no. 4, 931–959. MR 2983388
[10] Artem Hulko and Mark Whitmeyer, A game of nontransitive dice, Math. Mag. 92 (2019), no. 5, 368–373. MR 4044971
[11] Andrzej Komisarski, Nontransitive random variables and nontransitive dice, Amer. Math. Monthly (2021), to appear.
[12] Daniel Král, Locally satisfiable formulas, Proceedings of the Fifteenth Annual ACM-SIAM Symposium on Discrete Algorithms, ACM, New York, 2004, pp. 330–339. MR 2291069
[13] A. V. Lebedev, The nontransitivity problem for three continuous random variables, Autom. Remote Control 80 (2019), no. 6, 1058–1068.
[14] L. Lovász and J. Pelikán, On the eigenvalues of trees, Period. Math. Hungar. 3 (1973), 175–182. MR 416964
[15] James E. Marengo, Quinn T. Kolt, and David L. Farnsworth, An upper bound for a cyclic sum of probabilities, Statist. Probab. Lett. 165 (2020), 108861. MR 4118940
[16] J. W. Moon and L. Moser, Generating oriented graphs by means of team comparisons, Pacific J. Math. 21 (1967), 531–535. MR 216984
[17] Zoltán Lóránt Nagy, A multipartite version of the Turán problem—density conditions and eigenvalues, Electron. J. Combin. 18 (2011), no. 1, Paper 46, 15 pp. MR 2776822
[18] Hugo Steinhaus and S. Trybula, On a paradox in applied probabilities, Bull. Acad. Polon. Sci. 7 (1959), 67–69.
[19] Reinhard Suck, Independent random utility representations, Math. Social Sci. 43 (2002), no. 3, 371–389. MR 2072963
[20] Luca Trevisan, On local versus global satisfiability, SIAM J. Discrete Math. 17 (2004), no. 4, 541–547. MR 2085212
[21] S. Trybula, On the paradox of three random variables, Zastos. Mat. 5 (1960/61), 321–332. MR 126865
[22] S. Trybula, On the paradox of n random variables, Zastos. Mat. 8 (1965), 143–156. MR 190970
[23] Zalman Usiskin, Max-min probabilities in the voting paradox, Ann. Math. Statist. 35 (1964), 857–862. MR 161353
See Important information in the instructions.
Your refund

80 If line 79 is more than line 62, subtract line 62 from line 79 and indicate how you want your refund. Mark one refund choice: direct deposit (fill in lines 82 through 82c) - or - paper check .... 80 .00

Payments and refundable credits

63 Empire State child credit .... 63 .00
64 NYS/NYC child and dependent care credit .... 64 .00
65 NYS earned income credit (EIC) .... 65 .00
66 NYS noncustodial parent EIC .... 66 .00
67 Real property tax credit .... 67 .00
68 College tuition credit .... 68 .00
69 NYC school tax credit (fixed amount) (also complete F on page 1) .... 69 .00
69a NYC school tax credit (rate reduction amount) .... 69a .00
70 NYC earned income credit .... 70 .00
70a This line intentionally left blank .... 70a
71 Other refundable credits (Form IT-201-ATT, line 18) .... 71 .00
72 Total New York State tax withheld .... 72 .00
73 Total New York City tax withheld .... 73 .00
74 Total Yonkers tax withheld .... 74 .00
75 Total estimated tax payments / Amount paid with Form IT-370 .... 75 .00
76 Amount paid with original return, plus additional tax paid after your original return was filed (see instructions) .... 76 .00
77 Total payments (add lines 63 through 76) .... 77 .00
78 Overpayment, if any, as shown on original return or previously adjusted by NY State (see instructions) .... 78 .00
78a Amount from original Form IT-201, line 79 (see instructions) .... 78a .00
79 Subtract line 78 from line 77 .... 79 .00

Account information

82a Account type: Personal checking - or - Personal savings - or - Business checking - or - Business savings
82b Routing number
82c Account number
82d Electronic funds withdrawal (see instructions): Date / Amount .00

To pay by electronic funds withdrawal, mark an X in the box and fill in lines 82 through 82d. If you pay by check or money order you must complete Form IT-201-V and mail it with your return.
You must submit all required forms. Failure to do so will result in an adjustment to your return.
83 Reason(s) for amending your return (mark an X in all applicable boxes; see instructions):
83a Federal audit change (complete lines 84 through 91 below)
83b Worthless stock/securities
83c Claim of right
83d Wages
83e Military
83f Court ruling
83g Workers’ compensation
83h Treaties/visa
83i Tax shelter transaction
83j Credit claim
83k Protective claim (see instructions)
83l Net operating loss (see instructions). Mark an X in the box and enter the year of the loss.
83m Report Social Security number (SSN): prior identification number and date SSN was issued
83n Other. Mark an X in the box and explain.
83o To report adjustments to partnership or S corporation income, gain, loss or deduction, mark Partnership or S corporation and provide the following information: name of partnership or S corporation, identifying number, principal business activity, and address of partnership or S corporation.

If you marked an X in box 83a above, you must complete lines 84 through 91 below. All others may skip lines 84 through 91 and go directly to the Third-party designee question. You must sign your amended return below.

84 Enter the date (mmddyyyy) of the final federal determination
85 Do you concede the federal audit changes? Yes / No (if No, explain)
86 List federal changes: 86a .00 / 86b .00 / 86c .00 / 86d .00 / 86e .00
87 Net federal changes (increase or decrease) .... 87 .00
88 Federal taxable income (mark an X in one box: Per return - or - Previously adjusted) .... 88 .00
89 Corrected federal taxable income .... 89 .00
90 Federal credits disallowed: Earned income credit (amount disallowed); Child care credit (amount disallowed)
91 Federal penalties assessed: 91a Fraud / 91b Negligence / 91c Other (explain)

Third-party designee? Yes / No — print designee’s name, designee’s phone number, personal identification number (PIN), and email.

▼ Taxpayer(s) must sign here ▼
Your signature / Your occupation / Spouse’s signature and occupation (if joint return) / Date / Daytime phone number / Email

▼ Paid preparer must complete ▼ (see instructions)
Preparer’s NYTPRIN / NYTPRIN excl. code / Preparer’s signature / Preparer’s printed name / Firm’s name (or yours, if self-employed) / Preparer’s PTIN or SSN / Address / Employer identification number / Date / Email

See instructions for where to mail your return.

Department of Taxation and Finance
Instructions for Form IT-201-X, Amended Resident Income Tax Return
New York State • New York City • Yonkers • MCTMT
IT-201-X-I (2020)

Important information

Follow these steps to complete your amended Form IT-201-X:
• Complete your Form IT-201-X as if you are filing your return for the first time.
• Carefully review and follow the instructions below. You must enter the same amount of sales and use tax and voluntary contributions from your original return; you cannot change these amounts.
• Do not submit a copy of your original Form IT-201, IT-203, or IT-195 with your amended Form IT-201-X.
• Submit with your amended Form IT-201-X any:
– amended Form IT-196;
– amended credit claim form or other amended form (do not submit the original version);
– new credit claim form or any other form that you are filing for the first time with your amended Form IT-201-X; and
– original credit claim form(s) (for example, Forms IT-213, IT-215, and IT-216), withholding form(s) (for example, Form IT-2), and all other form(s) that you submitted with your original return and are not amending (for example, Forms IT-196, IT-201-ATT, and IT-227).
If you do not submit all the necessary forms with your amended return, we will adjust your return and disallow the amounts claimed on the missing forms.
General information

You must file an amended 2020 New York State return if:
• You made an error when you filed your original 2020 New York State income tax return.
• The Internal Revenue Service (IRS) made changes to your 2020 federal return.
• You need to file a protective claim for 2020.
• You need to report an NOL carryback for 2020.
See the instructions for 2020 Form IT-201 to determine which amended return to file (Form IT-201-X or IT-203-X).
Do not file an amended return on Form IT-201-X to protest a paid assessment that was based on a statement of audit changes. If you receive an assessment from the Tax Department, do not file an amended return strictly to protest the assessment. Follow the instructions you receive with the assessment.
To file an amended return, complete all six pages of Form IT-201-X, using your original return as a guide, and make any necessary changes to income, deductions, and credits. Use 2020 Form IT-201-I, Instructions for Form IT-201, and the specific instructions below to complete Form IT-201-X.
Generally, Form IT-201-X must be filed within three years of the date the original return was filed or within two years of the date the tax was paid, whichever is later. (A return filed early is considered filed on the due date.) Do not file Form IT-201-X unless you have already filed your original return.
If you file an amended federal return to make changes to your federal income, total taxable amount, capital gain or ordinary income portion of a lump-sum distribution, the amount of your earned income credit or credit for child and dependent care expenses, or the amount of your foreign tax credit affecting the computation of the resident credit for taxes paid to a province of Canada, you must also file an amended New York State return within 90 days of the date you amend your federal return. If the IRS changes any of these items, report these changes to the New York State Tax Department on an amended return within 90 days of the IRS final determination. If you do not agree with the IRS determination, you must still file an amended state return indicating your disagreement. To report changes for a tax year prior to 1988, use Form IT-115, Report of Federal Changes.
If you file an amended return to report an NOL carryback, you must generally file Form IT-201-X within three years from the date the loss year return was due (including any extensions).
Specific instructions

Use the 2020 Form IT-201 instructions when completing Form IT-201-X, along with the following specific line instructions. If you are amending any credit claim form or other form, or are using any credit claim form or other form for the first time, write Amended across the top of that form and submit it with your amended return. Any other credit claim form or other form that you submitted with your original return (including Form IT-558, Form IT-196, or Form IT-227) must also be submitted with your amended return.
Entering whole dollar amounts

When entering amounts on this form, enter whole dollar amounts only (zeros have been preprinted). Do not write in dollar signs or commas when making entries. Use the following rounding rules when entering your amounts: drop amounts below 50 cents and increase amounts from 50 to 99 cents to the next dollar. For example, $1.39 becomes $1 and $2.50 becomes $3.
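The rounding rule above is ordinary half-up rounding to whole dollars. As an illustration only (not part of the official instructions, and the function name is my own), it can be sketched as:

```python
def round_whole_dollars(amount: float) -> int:
    """Apply the IT-201-X whole-dollar rule: drop amounts below 50 cents,
    round 50 to 99 cents up to the next dollar."""
    cents = round(amount * 100)            # work in cents to avoid float error
    dollars, remainder = divmod(cents, 100)
    return int(dollars + (1 if remainder >= 50 else 0))

print(round_whole_dollars(1.39))  # 1, matching the $1.39 -> $1 example
print(round_whole_dollars(2.50))  # 3, matching the $2.50 -> $3 example
```

Working in integer cents keeps amounts like $1.39 from being distorted by binary floating-point representation.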
Item D1 – Amended federal return
You must mark an X in the Yes or No box.
Item G – Special condition code
If you entered a special condition code(s) on your original return, enter the same code(s).
In addition, if you qualify for one or more of the special conditions below, enter the 2-character code(s).
Code A6 Enter this code if you are filing Form IT-201-X to reduce your NYAGI for Build America Bond interest included in your recomputed federal AGI.
Code C7 Enter this code if you now qualify for an extension of time to file and pay your tax due under the combat zone or contingency operation relief provisions. See Publication 361, New York State Income Tax Information for Military Personnel and Veterans.
Code 56 Enter this code if you are filing Form IT-201-X to report a theft loss for a Ponzi-type fraudulent investment.
Code P2 Enter this code if you are filing Form IT-201-X to file a protective claim. Also, be sure to mark an X in the line 83k box.
Code N3 Enter this code if you are filing Form IT-201-X to report an NOL. Also, be sure to mark an X in the line 83l box and complete the information requested for the loss year. For more information on claiming an NOL carryback, see the instructions for Form IT-201.

Code M4 Enter this code if, as a civilian spouse of a military servicemember, you are making an election to use the same state of legal residence as the servicemember for state income tax purposes. For additional information, see TSB-M-19(3)I, Veterans Benefits and Transition Act of 2018, available on our website.
Line 34 – Standard or itemized deduction
Standard deduction: If you are claiming the standard deduction on your amended return, enter the appropriate amount for your filing status from the table on page 3 of Form IT-201-X.
Itemized deduction: If you are claiming the New York itemized deduction on your original and amended return and you meet all three of the following conditions, submit a copy of your original Form IT-196: • You are not amending your New York State itemized deductions.
• Your NYAGI on your original and amended returns is $100,000 or less.
• You are not claiming the college tuition itemized deduction.
If you do not meet all of the above conditions, you must recalculate your New York State itemized deduction using Form IT-196.
If you are reporting an NOL carryback and you were subject to the New York itemized deduction adjustment on your original 2020 Form IT-196, you should recompute your New York itemized deduction adjustment to reflect the decrease in your NYAGI.

Line 59 – Sales or use tax
Enter the amount of New York State and local sales or use tax you reported on your original return. You cannot change the amount of sales or use tax you owe using Form IT-201-X. If you need to increase the amount of sales or use tax paid with your original return, you must file Form ST-140, Individual Purchaser’s Annual Report of Sales and Use Tax. If you are entitled to a refund of any amount you originally paid, you must file Form AU-11, Application for Credit or Refund of Sales or Use Tax.
Line 60 – Voluntary contributions
Enter the total amount of voluntary contributions you reported on your original return. This amount should be the same as the total reported on your original Form IT-227, New York State Voluntary Contributions. If the voluntary contributions you reported on your original Form IT-227 were previously adjusted by the Tax Department, enter the total adjusted amount on this line. You cannot change the amount of your contributions as reported (or adjusted) on your original return or original Form IT-227. You must submit your original Form IT-227 with your amended Form IT-201-X.

Line 76 – Amount paid with original return, plus additional tax paid after your original return was filed
From your original Form IT-201, line 80 (or Form IT-203, line 70). If you paid additional amounts since your original return was filed, also include these payments on line 76. If you did not pay the entire balance due shown on your original return, enter the actual amount that was paid. Do not include payments of interest or penalties.
Line 78 – Overpayment, if any, as shown on original return
From your original Form IT-201, line 77 (or Form IT-203, line 67). If the overpayment claimed on your original return was previously adjusted by the Tax Department, enter the adjusted overpayment on this line. Do not include interest you received on any refund.
Line 78a – Amount from original return
If you filed Form IT-203, enter the amount from Form IT-203, line 69.
Line 80 – Refund
If line 79 is more than line 62, subtract line 62 from line 79; this is your refund amount. You have two ways to receive your refund. You can choose direct deposit to have the funds deposited directly into your bank account (the fastest option for most filers), or you can choose to have a paper check mailed to you. Mark an X in one box to indicate your choice.
Refund options
Direct deposit – If you choose direct deposit, enter your account information on line 82 for a fast and secure direct deposit of your refund. If you do not enter complete and correct account information at line 82, we will mail you a paper check.
Paper check refunds – We will mail your refund check to the mailing address on your return. Paper checks for joint filers will be issued with both names and must be signed by both spouses. Paper checks take weeks to be processed, printed, and mailed. If you do not have a bank account, you will likely be charged a fee to cash your check.
Line 81 – Amount you owe
Enter on line 81 the amount of tax you owe.
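The relationship between lines 80 and 81 is a single comparison of total payments (line 79) against total tax (line 62). A sketch of the arithmetic, with an illustrative function name of my own (not an official tool):

```python
def settle(line_62_total_tax: float, line_79_payments: float):
    """Return which line applies (80 refund or 81 amount owed) and the amount."""
    if line_79_payments > line_62_total_tax:
        # Line 80: payments exceed tax, so the difference is refunded.
        return ("line 80 refund", line_79_payments - line_62_total_tax)
    if line_79_payments < line_62_total_tax:
        # Line 81: tax exceeds payments, so the difference is owed.
        return ("line 81 amount owed", line_62_total_tax - line_79_payments)
    return ("settled", 0.0)

print(settle(1200.0, 1500.0))  # ('line 80 refund', 300.0)
print(settle(1500.0, 1200.0))  # ('line 81 amount owed', 300.0)
```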
Payment options

By automatic bank withdrawal
You may authorize the Tax Department to make an electronic funds withdrawal from your bank account either by completing line 82, or on our website.
This payment option is not available if the funds for your payment would come from an account outside the U.S. (see Note below).
If you choose to complete line 82 to pay by electronic funds withdrawal, mark an X in the box, enter your account information on lines 82a through 82c, and enter your electronic funds withdrawal information on line 82d.
By check or money order
If you owe more than one dollar, submit Form IT-201-V, Payment Voucher for Income Tax Returns, and full payment with your return. Make your check or money order payable in U.S. funds to New York State Income Tax, and write the last four digits of your Social Security number and 2020 Income Tax on it. Do not send cash.
Interest – If a balance due is shown on your amended return, include the interest amount on line 81. Compute the interest by accessing our website, or call 518-457-5181 and we will compute the interest for you. Include any interest computed with your payment.
Fee for payments returned by banks – The law allows the Tax Department to charge a $50 fee when a check, money order, or electronic payment is returned by a bank for nonpayment. However, if an electronic payment is returned as a result of an error by the bank or the department, the department will not charge the fee.
If your payment is returned, we will send a separate bill for $50 for each return or other tax document associated with the returned payment.
Line 82 – Account information
If you marked the box that indicates your payment (or refund) would come from (or go to) an account outside the U.S., stop. Do not complete lines 82a through 82d (see Note below). All others, supply the information requested.
Note: Banking rules prohibit us from honoring requests for electronic funds withdrawal or direct deposit when the funds for your payment (or refund) would come from (or go to) an account outside the U.S. Therefore, if you marked this box, you must pay any amount you owe by check or money order (see above); or if you are requesting a refund, we will send your refund to the mailing address on your return.
The following requirements apply to both direct deposit and electronic funds withdrawal: Use the sample image as a guide; enter your own information exactly as it appears on your own check or bank records. Do not enter the information from the sample check below.
On line 82a, mark an X in the box for the type of account. On line 82b, enter your bank’s 9-digit routing number (refer to your check or contact your bank). The first two digits always fall within 01 through 12 or 21 through 32. On the sample check below, the routing number is 111111111.
Note: If your check states that it is payable through a bank different from the one where you have your checking account, do not use the routing number on that check. Instead, contact your bank for the correct routing number to enter on line 82b.
On line 82c, enter your account number.
• If you marked personal or business checking on line 82a, enter the account number shown on your checks. • If you marked personal or business savings on line 82a, enter your savings account number from a preprinted savings account deposit slip, your passbook or other bank records, or from your bank.
The account number can be up to 17 characters (both numbers and letters). Include hyphens (-) but omit spaces and special symbols. Enter the number from left to right. On the sample check below, the account number is 9999999999.
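As an illustration only (my own sketch, not a Tax Department tool), the format rules above — a 9-digit routing number whose first two digits fall in 01–12 or 21–32, and an account number of up to 17 letters, digits, and hyphens with no spaces or special symbols — can be checked before entry:

```python
import re

def routing_number_ok(rn: str) -> bool:
    # Exactly 9 digits, with the first two digits in 01-12 or 21-32.
    if not re.fullmatch(r"\d{9}", rn):
        return False
    prefix = int(rn[:2])
    return 1 <= prefix <= 12 or 21 <= prefix <= 32

def account_number_ok(acct: str) -> bool:
    # 1 to 17 characters: letters, digits, and hyphens only (no spaces).
    return bool(re.fullmatch(r"[A-Za-z0-9-]{1,17}", acct))

print(routing_number_ok("111111111"))   # True  (sample routing number)
print(account_number_ok("9999999999"))  # True  (sample account number)
```

Note that this checks format only; it cannot confirm that an account actually exists, which is why the instructions tell you to verify the numbers with your bank.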
[Sample check image: a personal check for JOHN SMITH, 999 Maple Street, Someplace, NY 10000, drawn on SOME BANK. The sample routing number is 111111111 and the sample account number is 9999999999. Do not include the check number.]
Note: The routing and account numbers may appear in different places on your check.
Contact your bank if you need to verify routing and account numbers or confirm that it will accept your direct deposit or process your electronic funds withdrawal. If you encounter any problem with direct deposit to, or electronic withdrawal from, your account, call 518-457-5181. Allow six to eight weeks for processing your return.
Line 82d – Electronic funds withdrawal
Enter the date you want the Tax Department to make an electronic funds withdrawal from your bank account and the amount from line 81 you want electronically withdrawn. If you are amending your return prior to the original due date (generally April 15), enter a date that is on or before the due date of your return. If we receive your amended return after the original return due date or you do not enter a date, we will withdraw the funds on the day we accept your return.
Your confirmation will be your bank statement that includes a NYS Tax Payment line item.
We will only withdraw the amount that you authorize. If we determine that the amount you owe is different from the amount claimed on your return, we will issue you a refund for any amount overpaid or send you a bill for any additional amount owed, which may include penalty and interest.
You may revoke your electronic funds withdrawal authorization only by contacting the Tax Department at least 5 business days before the payment date.
If you complete the entries for electronic funds withdrawal, do not send a check or money order for the same amount due unless you receive a notice.
Line 83k – Protective claim
If you marked the Protective claim box, be sure you have entered code P2 at item G on the front of your Form IT-201-X. Complete your amended return in full assuming that the item(s) that is the subject of the protective claim is eligible for refund. A protective claim is a refund claim that is based on an unresolved issue(s) that involves the Tax Department or another taxing jurisdiction that may affect your New York tax(es). The purpose of filing a protective claim is to protect any potential overpayment for a tax year for which the statute of limitations is due to expire.

Line 83l – Net operating loss
For New York State income tax purposes, your NOL carryback is limited to the federal NOL carryback that would have been allowed using the rules in place prior to any changes made to the IRC after March 1, 2020. Therefore, there is no carryback of NOLs for New York State purposes, except for certain farming losses.
If you marked the Net operating loss box, you must enter the year of the loss at line 83l and enter code N3 at item G on the front of your Form IT-201-X. You must file Form IT-201-X to claim an NOL carryback within three years from the date the loss year return was due (including any extensions).
Submit all of the following with your Form IT-201-X: • A copy of your federal Form 1040 and Schedule A, if applicable, for the loss year. In addition, provide any schedules or statements that are related to your loss. If your NOL will have an effect on more than one tax year, this federal information must only be submitted with the amended return for the first carryback year.
• A copy of your federal NOL computation, including federal Form 1045 and all related schedules. You do not have to include the alternative minimum tax NOL computation.
Need help?

Visit our website at www.tax.ny.gov:
• get information and manage your taxes online
• check for new online services and features

Telephone assistance:
• Automated income tax refund status: 518-457-5149
• Personal Income Tax Information Center: 518-457-5181
• To order forms and publications: 518-457-5431
• Text Telephone (TTY) or TDD equipment users: dial 7-1-1 for the New York Relay Service
• A copy of your original federal Form 1040 and Schedule A, if applicable, for the carryback year. No additional schedules/statements are required.
• A copy of any federal documentation (if available) showing the IRS has accepted your NOL carryback claim.
Line 83m – Report Social Security number
If you filed your original return using either an individual taxpayer identification number (ITIN) or a New York State temporary identification number (with a TF prefix) and have received a Social Security number (SSN), then mark the box, enter the identification number used on your original return, and enter the date when the SSN was issued.
If you received notification (Form TR-298) from the Tax Department that you were assigned a temporary identification number, follow the instructions in that notice to report your valid identification number (SSN or ITIN) to us. Do not file Form IT-201-X to report only your new identification number.

Line 83n – Other
If you marked the Other box, include an explanation of the change on the explanation line at line 83n (for example, you are changing your New York State dependent exemption amount). If you need additional room, submit a separate sheet with your explanation. Include your name and SSN on the additional sheet.
Line 83o – Partnership or S corporation
If you marked a box at line 83o, give the partnership’s or S corporation’s name, identifying number, principal business activity, and address.
Lines 84 through 91
If you marked the line 83a box and are reporting changes made by the IRS, complete lines 84 through 91 by entering the information requested as it appears on your final federal report of examination changes. Use a minus sign to show any decreases.
Important: Fully explain the changes you are making on Form IT-201-X. Submit any schedules or forms that apply, along with any available federal documentation. Documentation may include, but is not limited to, copies of: your federal Form 1040X; federal acceptance of your amended federal return (include copies of the refund check, if applicable); amended federal Schedule B, Schedule C, or Schedule D; and revised federal Schedule K-1. Failure to include this information when filing Form IT-201-X may delay the processing of your return or the issuance of your refund.
Where to file

If enclosing a payment (check or money order), mail your return and Form IT-201-V to:
STATE PROCESSING CENTER
PO BOX 15555
ALBANY NY 12212-5555

If not enclosing a payment, mail your return to:
STATE PROCESSING CENTER
PO BOX 61000
ALBANY NY 12261-0001

Private delivery services – If you are not submitting your form by U.S. Mail, be sure to consult Publication 55, Designated Private Delivery Services, for the address and other information.
Paid preparer’s signature
If you pay someone to prepare your return, the paid preparer must also sign it and fill in the other blanks in the paid preparer’s area of your return. A person who prepares your return and does not charge you should not fill in the paid preparer’s area.
Paid preparer’s responsibilities – Under the law, all paid preparers must sign and complete the paid preparer section of the return. Paid preparers may be subject to civil and/or criminal sanctions if they fail to complete this section in full.
When completing this section, enter your New York tax preparer registration identification number (NYTPRIN) if you are required to have one. If you are not required to have a NYTPRIN, enter in the NYTPRIN excl. code box one of the specified 2-digit codes listed below that indicates why you are exempt from the registration requirement. You must enter a NYTPRIN or an exclusion code. Also, you must enter your federal preparer tax identification number (PTIN) if you have one; if not, you must enter your Social Security number.
Code  Exemption type
01    Attorney
02    Employee of attorney
03    CPA
04    Employee of CPA
05    PA (Public Accountant)
06    Employee of PA
07    Enrolled agent
08    Employee of enrolled agent
09    Volunteer tax preparer
10    Employee of business preparing that business’ return

See our website for more information about the tax preparer registration requirements.
Privacy notification

New York State Law requires all government agencies that maintain a system of records to provide notification of the legal authority for any request for personal information, the principal purpose(s) for which the information is to be collected, and where it will be maintained. To view this information, visit our website, or, if you do not have Internet access, call and request Publication 54, Privacy Notification. See Need help? for the Web address and telephone number.
r/learnmath
Hutchie2306 • 3 yr. ago
Possible 5 Digit codes using 0,1,2,3,4,5,6,7,8,9
RESOLVED
Hey everyone, I am running into some problems with this problem I invented for myself.
Note: Just for simplicity's sake, as an example 00140 is a 5 digit code even though it really is just 140
Possible code options without repeat = 10P5 = 30,240
Possible code options with repeat : 10^5 = 100,000
So that means there are 69,760 codes where a digit is repeated at least twice.
As there are 10 digits to choose from, that means 6,976 where 9 is represented at least twice within the code
Codes with 5 9s and 0 other numbers:
9,9,9,9,9
So total codes = 1
Codes with 4 9s and 1 other number:
x,9,9,9,9
9,x,9,9,9
9,9,x,9,9
9,9,9,x,9
9,9,9,9,x
where x can be 0,1,2,3,4,5,6,7,8 (9 options)
So total codes = 5 x 9 = 45
Codes with 3 9s and 2 other numbers:
x,x,9,9,9
x,9,x,9,9
x,9,9,x,9
x,9,9,9,x
9,x,x,9,9
9,x,9,x,9
9,x,9,9,x
9,9,x,x,9
9,9,x,9,x
9,9,9,x,x
where x,x can be any 2 numbers from 0,1,2,3,4,5,6,7,8 (9 options each)
so x,x has 9^2 options as repetitions are allowed
Hence total codes = 10 x 81 = 810
Codes with 2 9s and 3 other numbers:
9,9,x,x,x
9,x,9,x,x
9,x,x,9,x
9,x,x,x,9
x,9,9,x,x
x,9,x,9,x
x,9,x,x,9
x,x,9,9,x
x,x,9,x,9
x,x,x,9,9
where x,x,x can be any 3 numbers from 0,1,2,3,4,5,6,7,8 (9 options each)
so x,x,x has 9^3 options as repetitions are allowed
Hence total codes = 10 x 729 = 7290
But when I add together the total codes where 9 appears at least twice, I run into a problem:
7290 + 810 + 45 + 1 = 8146
but 8146 != 6976
Any help is greatly appreciated. Thanks :)
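Not part of the original post: a quick brute-force check of the two counts involved, which confirms the case-by-case total of 8,146.

```python
# Brute-force check over all 100,000 codes 00000-99999.
codes = [f"{n:05d}" for n in range(100000)]

# Codes in which the digit 9 appears at least twice.
two_plus_nines = sum(1 for c in codes if c.count("9") >= 2)

# Codes containing at least one repeated digit (any digit).
any_repeat = sum(1 for c in codes if len(set(c)) < 5)

print(two_plus_nines, any_repeat)  # 8146 69760
```

So the case-by-case sum 7290 + 810 + 45 + 1 = 8146 is correct; the step that fails is 69,760 / 10 = 6,976. A code such as 11299 contains repeats of two different digits and so belongs to more than one digit's "repeated at least twice" count, which is why 10 × 8,146 = 81,460 is larger than 69,760.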
|
187812 | https://k12.libretexts.org/Bookshelves/Mathematics/Precalculus/05%3A_Trigonometric_Functions/5.04%3A_Vertical_Shift_of_Sinusoidal_Functions | 5.4: Vertical Shift of Sinusoidal Functions - K12 LibreTexts
5.4: Vertical Shift of Sinusoidal Functions
Last updated Feb 13, 2022
Your knowledge of transformations, specifically vertical shift, applies directly to sinusoidal functions. In practice, sketching shifted sine and cosine functions requires greater attention to detail and more careful labeling than other functions. Can you describe the following transformation in words?
f(x)=sinx→g(x)=−3sinx−4
In what order do the reflection, stretch and shift occur? Is there a difference?
Vertical Shift of Sinusoidal Functions
The general form of a sinusoidal function is:
f(x)=±a⋅sin(b(x+c))+d
Recall that a controls amplitude and the ± controls reflection. Here you will see how d controls the vertical shift.
The most straightforward way to think about vertical shift of sinusoidal functions is to focus on the sinusoidal axis, the horizontal line running through the middle of the sine or cosine wave. At the start of the problem identify the vertical shift and immediately draw the new sinusoidal axis. Then proceed to graph amplitude and reflection about that axis as opposed to the x axis.
The graphs of the following three functions are shown below:
f(x)=sinx+3
g(x)=sinx−2
h(x)=sinx+1/2
To draw these graphs, the new sinusoidal axis for each graph is drawn first. Then, a complete sine wave for each one is drawn. Note the five important points that separate each quadrant to help to get a clear sense of the graph. There are no reflections in these graphs and they all have an amplitude of 1. Right now every cycle starts at 0 and ends at 2π but this will not always be the case.
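To make the five-point pattern concrete, here is a small sketch (mine, not from the text) that tabulates the five quadrant-boundary points of one cycle for each of the three shifted sine functions above.

```python
import math

# Five key points (quadrant boundaries) of one cycle of f(x) = sin x + d,
# for the three vertical shifts used above: d = 3, -2, 1/2.
key_x = [0, math.pi / 2, math.pi, 3 * math.pi / 2, 2 * math.pi]

for d in (3, -2, 0.5):
    points = [(round(x, 2), round(math.sin(x) + d, 2)) for x in key_x]
    print(d, points)
```

Each row shows the sinusoidal axis value d at the start, middle, and end of the cycle, with the peak d + 1 and trough d − 1 in between, which is exactly the pattern to plot after drawing the new sinusoidal axis.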
Watch the portions of the following video focused on vertical translations:
Examples
Example 1
Earlier, you were asked which order vertical shift and reflection should be performed in and if it matters. The following transformation can be described as follows.
f(x)=sinx→g(x)=−3sinx−4
Describe the stretching and reflecting first and then the vertical shift. This is the most logical way to discuss the transformation verbally because then the numbers like 3 and -4 can be explicitly identified in the graph.
The order in describing the transformation matters. When describing vertical transformations it is most intuitive to simply describe the transformations in the same order as the order of operations.
Example 2
Identify the equation of the following transformed cosine graph.
Since there is no sinusoidal axis given, you must determine the vertical shift, stretch and reflection. The peak occurs at (π,3) and the trough occurs at (0,−1), so the horizontal line directly between +3 and −1 is y=1. Since the sinusoidal axis has been shifted up by one unit, d=1. From this height, the graph goes two above and two below, which means that the amplitude is 2. Since this cosine graph starts its cycle at (0,−1), which is a lowest point, it is a negative cosine. The function is f(x)=−2cosx+1.
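The arithmetic in Example 2 generalizes: the midline is the average of the peak and trough heights, and the amplitude is half their difference. A minimal sketch (mine, with the example's values hard-coded):

```python
# Vertical shift d and amplitude a recovered from a peak and a trough
# (values from Example 2: peak at (pi, 3), trough at (0, -1)).
peak_y, trough_y = 3, -1

d = (peak_y + trough_y) / 2   # sinusoidal axis: y = 1
a = (peak_y - trough_y) / 2   # amplitude: 2

print(d, a)  # 1.0 2.0
```

Because the cycle starts at a lowest point at x = 0, the graph is a reflected cosine, giving f(x) = −2cos x + 1 as in the example.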
Example 3
Transform the following sine graph in two ways. First, transform the sine graph by shifting it vertically up 1 unit and then stretching it vertically by a factor of 2 units. Second, transform the sine graph by stretching it vertically by a factor of 2 units and then shifting it vertically up 1 unit.
When doing ordered transformations it is good to show where you start and where you end up so that you can effectively compare and contrast the outcomes. See how both transformations start with a regular sine wave. The two columns represent the sequence of transformations that produce different outcomes.
Example 4
What equation models the following graph?
f(x)=3⋅sinx−1
Example 5
Graph the following function: f(x)=−2⋅cosx+1
First draw the horizontal sinusoidal axis and identify the five main points for the cosine wave. Be careful to note that the amplitude is 2 and the cosine wave starts and ends at a low point because of the negative sign.
Review
Graph each of the following functions that have undergone a vertical stretch, reflection, and/or a vertical shift.
f(x)=−2sinx+4
g(x)=(1/2)cosx−1
h(x)=3sinx+2
j(x)=−1.5cosx+1/2
k(x)=(2/3)sinx−3
Find the minimum and maximum values of each of the following functions.
f(x)=−3sinx+1
g(x)=2cosx−4
h(x)=(1/2)sinx+1
j(x)=−cosx+5
k(x)=sin(x)−1
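For the minimum/maximum problems above, the answers follow directly from the sinusoidal axis and amplitude: a function a·sin x + d (or the cosine analogue) ranges over [d − |a|, d + |a|]. A small numeric sanity check (my sketch, not part of the text), using the first function f(x) = −3sin x + 1:

```python
import math

# The range of f(x) = a sin x + d is [d - |a|, d + |a|];
# check numerically for f(x) = -3 sin x + 1 by sampling one full cycle.
a, d = -3, 1
ys = [a * math.sin(2 * math.pi * k / 1000) + d for k in range(1001)]

print(max(ys), min(ys))  # approximately 4.0 and -2.0
```

Here d + |a| = 1 + 3 = 4 and d − |a| = 1 − 3 = −2, matching the sampled values; the same shortcut answers the remaining problems without graphing.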
Give the equation of each function graphed below. (The graphs for Problems 11–15 are not reproduced here.)
This page titled 5.4: Vertical Shift of Sinusoidal Functions is shared under a CK-12 license and was authored, remixed, and/or curated by CK-12 Foundation via source content that was edited to the style and standards of the LibreTexts platform.
|
187813 | https://docs.oracle.com/cd/E19205-01/819-5267/bkakn/index.html | 14.3.1 Output Using iostream (Sun Studio 12: C++ User's Guide)
14.3.1 Output Using iostream
Output using iostream usually relies on the overloaded left-shift operator (<<) which, in the context of iostream, is called the insertion operator. To output a value to standard output, you insert the value in the predefined output stream cout. For example, given a value someValue, you send it to standard output with a statement like:
cout << someValue;
The insertion operator is overloaded for all built-in types, and the value represented by someValue is converted to its proper output representation. If, for example, someValue is a float value, the << operator converts the value to the proper sequence of digits with a decimal point. Where it inserts float values on the output stream, << is called the float inserter. In general, given a type X, << is called the X inserter. The format of output and how you can control it is discussed in the ios(3CC4) man page.
The iostream library does not support user-defined types. If you define types that you want to output in your own way, you must define an inserter (that is, overload the << operator) to handle them correctly.
The << operator can be applied repetitively. To insert two values on cout, you can use a statement like the one in the following example:
cout << someValue << anotherValue;
The output from the above example will show no space between the two values. So you may want to write the code this way:
cout << someValue << " " << anotherValue;
The << operator has the precedence of the left shift operator (its built-in meaning). As with other operators, you can always use parentheses to specify the order of action. It is often a good idea to use parentheses to avoid problems of precedence. Of the following four statements, the first two are equivalent, but the last two are not.
cout << a+b; // + has higher precedence than <<
cout << (a+b);
cout << (a&y); // << has precedence higher than &
cout << a&y; // probably an error: (cout << a) & y
14.3.1.1 Defining Your Own Insertion Operator
The following example defines a string class:
#include <iostream.h>  // classic iostream header
#include <stddef.h>    // for size_t
class string {
private:
    char* data;
    size_t size;
public:
    // (functions not relevant here)
    friend ostream& operator<<(ostream&, const string&);
    friend istream& operator>>(istream&, string&);
};
The insertion and extraction operators must in this case be defined as friends because the data part of the string class is private.
Here is the definition of operator<< overloaded for use with strings:
ostream& operator<< (ostream& ostr, const string& output)
{ return ostr << output.data; }
With this definition, a statement such as:
cout << string1 << string2;
works because operator<< takes ostream& (that is, a reference to an ostream) as its first argument and returns the same ostream, making it possible to combine insertions in one statement.
14.3.1.2 Handling Output Errors
Generally, you don’t have to check for errors when you overload operator<< because the iostream library is arranged to propagate errors.
When an error occurs, the iostream where it occurred enters an error state. Bits in the iostream’s state are set according to the general category of the error. The inserters defined in iostream ignore attempts to insert data into any stream that is in an error state, so such attempts do not change the iostream’s state.
In general, the recommended way to handle errors is to periodically check the state of the output stream in some central place. If there is an error, you should handle it in some way. This chapter assumes that you define a function error, which takes a string and aborts the program. error is not a predefined function. See 14.3.9 Handling Input Errors for an example of an error function. You can examine the state of an iostream with the operator !,which returns a nonzero value if the iostream is in an error state. For example:
if (!cout) error("output error");
There is another way to test for errors. The ios class defines operator void*(), which converts the stream to a NULL pointer when there is an error. You can use a statement like:
if (cout << x) return; // return if successful
You can also use the function good, a member of ios:
if (cout.good()) return; // return if successful
The error bits are declared in the enum:
enum io_state {goodbit=0, eofbit=1, failbit=2,
badbit=4, hardfail=0x80};
For details on the error functions, see the iostream man pages.
14.3.1.3 Flushing
As with most I/O libraries, iostream often accumulates output and sends it on in larger and generally more efficient chunks. If you want to flush the buffer, you simply insert the special value flush. For example:
cout << "This needs to get out immediately." << flush;
flush is an example of a kind of object known as a manipulator, which is a value that can be inserted into an iostream to have some effect other than causing output of its value. It is really a function that takes an ostream& or istream& argument and returns its argument after performing some actions on it (see 14.7 Manipulators).
14.3.1.4 Binary Output
To obtain output in the raw binary form of a value, use the member function write as shown in the following example. This example shows the output in the raw binary form of x.
cout.write((char*)&x, sizeof(x));
The previous example violates type discipline by converting &x to char*. Doing so is normally harmless, but if the type of x is a class with pointers, virtual member functions, or one that requires nontrivial constructor actions, the value written by the above example cannot be read back in properly.
© 2010, Oracle Corporation and/or its affiliates |
187814 | https://www.youtube.com/watch?v=VjQm6eHPJ-A | Thomas Calculus || Exercise 2.7 || Question 05 || Slopes and Tangent Lines || Allah Dad
Allah Dad
6650 subscribers
76 likes
Description
4508 views
Posted: 23 Feb 2024
Thomas Calculus, 11th edition, Chapter 2 (Limits and Continuity), Exercise 2.7 (Section 2.7), Questions 5 to 10: slopes and tangent lines.
(00:00) Introduction
(01:26) Tangent to a Circle
(02:40) Slope and Tangent Line
(04:04) Method of finding the tangent line
(04:55) Difference Quotient
(05:20) Derivative
(08:36) Question NO 05
Book: Thomas Calculus
Writer: Thomas Finney
Edition: 11th Edition
Chapter: No. 02, "Limits and Continuity" (Tangents and Derivatives)
Exercise: 2.7, Slopes and Tangent Lines
Question: 05
thomascalculus
slopes
tangents
allahdad
Please like, share, comment, subscribe and press the bell icon on the YouTube app. Thank you very much! 😊🙂
13 comments
Transcript:
[Music] Bismillah ir-Rahman ir-Rahim. See, students, this is Thomas Calculus, 11th edition, Chapter 2, Exercise 2.7, "Slopes and Tangent Lines." The instructions say: "In Exercises 5 to 10, find an equation for the tangent to the curve at the given point. Then sketch the curve and tangent together." So for each of Questions 5 to 10 we have to find an equation of the tangent line; then, since the graph of the given function is a curve, we have to sketch that curve and draw the tangent line on it. The tangent is taken at the point given in the question. Before solving, let us first go through the theory, what a slope and a tangent line actually are, and then use that to work the exercise.

Tangent to a circle. A line L is tangent to a circle at a point P if it passes through P and is perpendicular to the radius OP. In the figure, the circle has center O; the segment from O to P is a radius; the line drawn perpendicular to that radius at P just touches the circle as it passes, so that line L is the tangent to the circle at the point P.

Slope and tangent line. Take any function y = f(x); plotting it in the xy-plane produces a curve. The slope of the curve y = f(x) at the point P(x0, f(x0)) is the number

m = lim(h -> 0) [f(x0 + h) - f(x0)] / h,

provided the limit exists. (This is really the definition of the derivative, as we will see in a moment.) The tangent line to the curve at P is the line through P with slope m; a tangent line is basically just a line on the curve whose slope is m.

Method of finding the tangent line. Follow these steps. First, calculate f(x0) and f(x0 + h): substitute x0 into the given function, then substitute x0 + h. Second, calculate the slope with the formula m = lim(h -> 0) [f(x0 + h) - f(x0)] / h; put the values in and simplify, and if the limit exists its value is the slope m. Third, find the tangent line with the formula y = y0 + m(x - x0).

Difference quotient and derivative. The expression [f(x0 + h) - f(x0)] / h, without the limit in front, is called the difference quotient of f at the point x0 with increment h. If the limit of the difference quotient exists as h -> 0, that limit is called the derivative of the function f(x) at x = x0; whatever answer it gives is the derivative, and that answer is also the slope. (Later, in Chapter 3, we will compute derivatives directly with rules such as the power rule, which gives the same answer, but here we must use this limit definition.)

Now Question 5, step by step. We need the equation of the tangent line, and its formula is y = y0 + m(x - x0); x0 and y0 are taken from the given point, and m is the slope, which has its own formula above. The given function is f(x) = 4 - x^2, and the given point is (-1, 3), so x0 = -1 and y0 = 3.

First, f(x0) = 4 - x0^2. Next, replace x0 with x0 + h; note that h is added only to x0, not to the 4, so f(x0 + h) = 4 - (x0 + h)^2. Now apply the slope formula:

m = lim(h -> 0) [4 - (x0 + h)^2 - (4 - x0^2)] / h.

If we put h = 0 directly we get the indeterminate form 0/0, so the limit cannot be evaluated in that form; to make it work we must eliminate the h in the denominator, which means opening the square on top: (x0 + h)^2 = x0^2 + 2x0h + h^2. Then

m = lim(h -> 0) [4 - x0^2 - 2x0h - h^2 - 4 + x0^2] / h.

The +4 cancels the -4, and the -x0^2 cancels the +x0^2. Taking h common from what remains, -2x0h - h^2 = h(-2x0 - h), and this h cancels the h in the denominator, so

m = lim(h -> 0) (-2x0 - h) = -2x0.

Now put in x0 = -1 from the given point: m = -2(-1) = 2. (As a check, taking the derivative directly with the power rule gives f'(x) = -2x and f'(-1) = 2, the same slope; that direct rule comes in Chapter 3, and here we are supposed to use the definition.) So the slope is m = 2, and with this we can now easily find the equation of the tangent line...
सकते हैं ठीक है तो ये देखें इस इक्वेशन में पुट कर इक्वेशन ऑफ टेंज लाइन हमारा फाइनल काम कौन सा रह गया इक्वेशन ऑ टेंज लाइन को फाइंड आउट करना तो मैं आप लिखता हूं नो वी नो दैट फॉर टेंज तो मैं लिखता हूं टेंज के लिए स्टूडेंट हमें पता है हमारे पास इक्वेशन होती है फार्मूले के अकॉर्डिंग y = होता है y नॉ + m x - x0 स्टूडेंट अगर आप गौर से देखें ना आपके पास पॉइंट गिवन है कौन सा पॉइंट गिवन है स्टूडेंट -1 और 3 तो अगर आप गौर से देखें इस -1 को आप कहते x नॉट ठीक है और इस थी को आप कहते y नॉट इस पॉइंट के इक्वल करके यहां पर रख दें तो क्या बनेगा देखें y नॉ की वैल्यू कौन सी है स्टूडेंट थ्री के इक्वल है तो थी पुट करें m की वैल्यू आपने निकाली है स्टूडेंट टू ये देखें टू है तो आपने टू पुट करें x को आपने एज रखना है - x नॉ की वैल्यू आपने कौन सी लिखी है -1 तो माइनस और माइनस मिलक पॉजिटिव वन हो जाएंगे इसे आपने सॉल्व करना है स्टूडेंट 3 + 2x + 2 तो आंसर क्या बनेगा 2x + 5 तो टेंज लाइन की कौन सी इक्वेशन बनी है हमारे पास टेंज लाइन कौन सी है y = 2x + 5 इज द टेंज तो मैं लिखता हूं इज दी टेंज टू दी पॉइंट p -1 और 3 ये इक्वेशन किस पॉइंट के ऊपर टेंज एट होगी p -1 और 3 पॉइंट के ऊपर टेंज होगी बताऊंगा आगे मैं ग्राफ में जैसे जनरेट करूंगा ना कुछ चीजें आपको ग्राफ में क्लियर होंगी स्टूडेंट ठीक है अब हम इसके ग्राफ को जनरेट करेंगे तो ग्राफ को जनरेट करने के लिए सबसे पहले हमने उसमें इंटी जर रियल नंबर वैल्यूज पुट करेंगे ठीक है पहले फंक्शन का कर्व जनरेट करेंगे फंक्शन का कर्व जो आ जाएगा ना फिर उस फंक्शन के कर्व के ऊपर इस टेंज लाइन को ड्रा कर देंगे फाइनल हमारा काम हो जाएगा बस खत्म ठीक है तो अब हम इसको सॉल्व करें हैं देखें तो स्टूडेंट अब यहां पर आप देखें तो देखें स्टूडेंट आपके पास मैं आपको सिखा रहा हूं कि आपने फंक्शन का ग्राफ किस तरह से जनरेट करना है x प्लेन के ऊपर ठीक है तो आपके पास फंक्शन कौन सा गिवन था y = 4 - x स् था ना तो सबसे पहले आपने ना चूज करना है एट तो मैं लिखता हूं सबसे पहले ट x = 0 के ऊपर y का कौन सा आंसर बनेगा यहां से देखें y इ x को आपने रो पुट किया तो रो का स्क्वा 0 तो y का आंसर क्या बनता है स्टूडेंट फोर बनता है इट मीन आपके पास एक पॉइंट मिल गया हमें कौन सा पॉइंट मिल गया है जब x 0 है तब y का आंसर क्या है फोर है 
So one point is (0, 4). Similarly, students, put in one more: increase x from 0 to 1, and y = 4 − 1² = 3, giving the point (1, 3). Likewise at x = 2: 2 times 2 is 4, and 4 − 4 = 0, so y is 0, giving the point (2, 0). In the same way you would get many more points; I have written these few for illustration. Now look at the points for the equation of the tangent line, y = 2x + 5: when x is 0, y comes out 5, giving the point (0, 5) on the tangent line. Next, increase x to 1: y = 2 + 5 = 7, giving the point (1, 7). Having taken a positive value, also take a negative one: at x = −1, y = −2 + 5 = 3, giving the point (−1, 3). If we take x = 2: 2 times 2 is 4, plus 5 is 9, giving the point (2, 9). And with the negative one, x = −2: y = −4 + 5 = 1, giving the point (−2, 1).
Now we generate the graph. To explain, I draw an xy-plane here; this is the last part of the question, students. Mark 0, then 1, −1, 2, −2, then 3, 4, and so on along the axis, and likewise −3 on the other side. Note which points belong to which equation: the first set came from the function y = 4 − x², and the second set from the equation of the tangent, y = 2x + 5. I have written them again below so you can see them together, since they were not visible above: these are the tangent's points, and those are the points of the given function. After 0 we put 1, then 2; you can put negatives as well: positive one, negative one, then two, then negative two, and so on, just as I did here.
Now let me explain the graph. First, what was asked: the question statement told us to draw the curve of the function and the tangent line together. So first we generate the curve of the function, then draw the tangent line on it. This line is the x-axis and that one is the y-axis. When I set x to 0, y came out 4: count up 1, 2, 3, 4, so a point lies at (0, 4); the first point is drawn. Now the next point: when x = 1, y was 3; count 1 across and 1, 2, 3 up, and the point lies there. The second point is drawn: when x is 1 and y is 3, it lies there. Now when x = 2, y is 0, which means a point lies on the axis at (2, 0). Next you put in the negative points. Since I took positive values, the positive side gives this part of the graph; now put negatives too. Put x = −1 into the function: (−1)² = +1, so y = 4 − 1 = 3, and when x is −1 and y is 3, count 1 across and 1, 2, 3 up on the negative side: the point (−1, 3) lies there. Likewise when you set x = −2 you get (−2, 0), and the point lies there.
If you look carefully now, the graph is taking shape like this: you join those points like so. Mine is not coming out very neatly; you should draw it properly, and yours will come out quite beautiful. Join the points like this, and that is how the graph forms. Now that work is finished, and the question statement said that on the graph that forms, on that curve, you must draw the equation of the tangent line together with it. For the tangent line: when you set x = 0, y came out 5, so first plot its point: when x is 0 and y is 5, count 1, 2, 3, 4, 5 up, and a point of the tangent is there. Next, when you set x to 1, y came out 7: across 1, then up 1, 2, 3, 4, 5, 6, 7, and a point is there. Understand the idea. Next, when we set x to −1, y came out 3: x is −1 here, and y is 3, count 1, 2, 3 up, so the point is there. One point here, one there, one there: keep those three points in mind, students. I mark them in red to make the tangent clear to you; those are the tangent's points.
After that, when I set x to 2, y came out 9: x is 2, then count y up 1 through 9, and the point forms there. The next points continue upward the same way, here, then here. Now look at the negative one: when I set x to −2, y came out 1. When x is −2 and y is 1, the point forms there, and in the same way further points keep coming, one after another. So it means that from here to here a tangent line is forming; you join its points from end to end. So which equation did the tangent line turn out to be? This one. As I was saying nearby: the question statement gave the function; we found the equation of the tangent line; the graph of that given function was drawn; and on that graph we have drawn this tangent line. So the curve and the tangent line have been sketched together. This is the curve, and on the curve the tangent line is forming. For example, if you have a circle, the tangent line sits against it like this; for any curve, the tangent comes in, touches it, and passes on. That is the concept of the tangent, and that is how you are to mark it. Also write in the remaining labels: 2, 3, 4 and so on here, and −1, −2 there, and so on.
Now, students, the next question I am not going to solve in full; I will just give you the concept, and you must solve it yourselves. This is Question 6: the function y = (x − 1)² + 1 is given, with a point. You call it f(x), because you know the slope formula. For it we first need the same pieces: replace x by x₀ to get f(x₀), then add h to x₀ to get f(x₀ + h), then substitute into the formula and the value of m comes out. See, I have the values substituted and simplified; you may take a screenshot from here, and if you like I also have this as a PDF. Solving it, the answer for the slope comes out zero. With the slope found, you find the equation of the tangent line the same way. Look, the tangent formula is y = y₀ + m(x − x₀): substitute the given point for x₀ and y₀ and the value of m that came out, and you get y = 1 as the tangent line. Then first put x = 0, then −1, then 1, then −2, then 2, and so on, into y = 1 and into the function, and the graphs build up from those points. That is how you generate its graph, students. Here I did not substitute the complete set of points, so you should put in the complete set, the negative ones as well as the positive ones. And see here how its tangent line comes out: it touches this curve and passes on.
Similarly, let me show you Question 7: y = 2√x, with a given point. First you call it f(x), then replace x by x₀, add h to x₀, then apply the slope formula and simplify; you must eliminate the h in the denominator, students. How do you eliminate that h? You rationalize. When you rationalize, the square meets the root: the square cancels the square root, 2 squared gives 4, and the root goes away (sorry, 2 squared is 4, as it should be). Then, to kill the h below, factor h out of the numerator; h cancels against h, and 4 remains above. In simple terms it is just calculation here, nothing more. So the answer for m comes out 1. Likewise, the tangent formula is y = y₀ + m(x − x₀): put in the value of m you found, and put in the x₀ and y₀ from the point given in the question statement, and you get y = 1 + x. That is your tangent line. As before, substitute the negative points as well as the positive points. This was the function given in the question statement, and this is the tangent line you newly found. See which tangent line forms: this one, y = x + 1, and since it is a tangent line you should also label, alongside it, the curve that is forming and which function it belongs to, y = 2√x. I did not write that in the first question, so add it there too.
Now look at Question 8, students; we solve it step by step. This is Question 8: y = 1/x², with the point (−1, 1) given. If anything here is still unclear, be sure to tell me in the comments box: I will send you the PDF, or I will explain the related exercise questions to you myself on a separate page, the way I explained Question 5, if it is not clear. These are solved by exactly the same method; you can watch it here step by step, and I will also generate and show you its graph: this is its graph. Similarly look at Question 9: y = x³, with the point (−2, −8) given; those points will be given to you. So first you must find the slope. I am showing it to you; write it down for yourself. After that, see what is given: you must find the tangent line, then the points. Find the function's points and the tangent's points separately. Looking further, these are the function's points and those are the tangent's, and with their help you generate the graph; it will come out something like this. See, the tangent line touches the graph and passes on; you can see it here. Likewise look at Question 10: y = 1/x³, with the point (−2, −1/8) given. You can see it here; I have it solved step by step so that anyone seeing it for the first time will understand it easily. From here you apply the tangent-line formula, substitute the value of m, and simplify; that gives you the tangent line. Likewise the points: the points on the function form separately and the tangent's points form separately. See, these are the tangent's points and those the function's; simplify, substitute, and that answer gives you the graph.
So in today's video, students, we solved Questions 5 to 10 step by step. If you still have any problem in Questions 6, 7, 8, 9, or 10, be sure to tell me your confusion in the comments box so that I can clear it up for you in the next video. And, students, if you are new to our channel, do subscribe. See you in the next video; until then, Allah Hafiz.
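The same limit-definition recipe covers the remaining exercises. Here is a sketch, not from the video, that checks the slopes for Questions 5, 6, 7, and 9 numerically. The points of tangency for Q6 and Q7 are not stated explicitly in the transcript; they are inferred from the tangent lines it arrives at (y = 1 and y = x + 1), so treat those two as assumptions.

```python
import math

def slope(f, x0, h=1e-6):
    # central-difference approximation of lim_{h->0} (f(x0+h) - f(x0)) / h
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

# Each entry: (function, assumed point of tangency (x0, y0)).
problems = {
    "Q5": (lambda x: 4 - x**2,         (-1.0, 3.0)),
    "Q6": (lambda x: (x - 1)**2 + 1,   (1.0, 1.0)),   # point inferred from y = 1
    "Q7": (lambda x: 2 * math.sqrt(x), (1.0, 2.0)),   # point inferred from y = x + 1
    "Q9": (lambda x: x**3,             (-2.0, -8.0)),
}

for name, (f, (x0, y0)) in problems.items():
    m = slope(f, x0)
    b = y0 - m * x0
    print(f"{name}: tangent y = {m:.3f} x {b:+.3f}")
```

Q5 reproduces y = 2x + 5, Q6 the horizontal tangent y = 1, and Q7 the line y = x + 1, matching the transcript.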
187815 | https://www.gauthmath.com/solution/1812326852679813/Length-of-a-chord-2rsin-frac-2 | Solved: Length of a chord =2rsin ( θ /2 ) [Calculus]
Question
Length of a chord = 2r sin(θ/2)
Answer
The length of chord AB is 2r sin(θ/2).
Explanation
Draw a circle with center O and radius r.
Draw a chord AB with central angle θ.
Draw a perpendicular from O to AB, intersecting AB at point C.
Triangle OAC is a right triangle with hypotenuse OA = r and angle AOC = θ/2.
Using the sine function in triangle OAC, we have sin(θ/2) = AC/OA = AC/r.
Therefore, AC = r sin(θ/2).
Since AC is half the length of chord AB, the length of chord AB is 2AC = 2r sin(θ/2).
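A quick numerical check of this result, independent of the derivation above: place A and B on a circle of radius r separated by central angle θ, and compare the direct distance |AB| with 2r sin(θ/2).

```python
import math

def chord_length(r, theta):
    """Chord subtending central angle theta (radians) in a circle of
    radius r, via the formula 2 r sin(theta/2)."""
    return 2 * r * math.sin(theta / 2)

def chord_by_distance(r, theta):
    # Place A at angle 0 and B at angle theta on the circle of radius r,
    # then measure |AB| directly with the distance formula.
    ax, ay = r, 0.0
    bx, by = r * math.cos(theta), r * math.sin(theta)
    return math.hypot(bx - ax, by - ay)

for r, theta in [(1.0, math.pi / 3), (5.0, 1.2), (2.5, 2.0)]:
    assert abs(chord_length(r, theta) - chord_by_distance(r, theta)) < 1e-12
print(chord_length(5.0, 1.2))  # chord for r = 5, theta = 1.2 rad
```

The agreement for arbitrary (r, θ) pairs confirms the identity; θ = π gives the diameter 2r, as expected.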
187816 | http://math.uchicago.edu/~may/REU2017/REUPapers/Zhang,Alec.pdf | POLYA’S ENUMERATION ALEC ZHANG Abstract. We explore Polya’s theory of counting from first principles, first building up the necessary algebra and group theory before proving Polya’s Enumeration Theorem (PET), a fundamental result in enumerative combi-natorics. We then discuss generalizations of PET, including the work of de Bruijn, and its broad applicability.
Contents

1. Introduction
2. Basic Definitions and Properties
3. Supporting Theorems
3.1. Orbit-Stabilizer Theorem
3.2. Burnside's Lemma
4. Polya's Enumeration
4.1. Prerequisites
4.2. Theorem
5. Extensions
5.1. De Bruijn's Theorem
6. Further Work
Acknowledgments
References

1. Introduction

A common mathematical puzzle is finding the number of ways to arrange a necklace with n differently colored beads. Yet (n−1)! and (n−1)!/2 are both valid answers, since the question has not defined what it means for necklaces to be distinct. The former counts the number of distinct necklaces up to rotation, while the latter counts the number of distinct necklaces up to rotation and reflection. Questions like these become more complex when we consider "distinctness" up to arbitrary transformations and with objects of more elements and non-standard symmetries.
The search for a general answer leads us to concepts in group theory and symmetry, and ultimately towards Polya’s enumeration, which we will explore below.
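The two answers in the necklace puzzle can be confirmed by brute force. This is an illustrative sketch, not from the paper: enumerate all arrangements of n distinct beads and collapse each one to a canonical representative of its rotation (and optionally reflection) class.

```python
from itertools import permutations

def necklace_count(n, use_reflections):
    """Count arrangements of n distinct beads on a necklace, where two
    arrangements are the same if one is a rotation (and, optionally,
    a reflection) of the other."""
    seen = set()
    for p in permutations(range(n)):
        variants = [p[i:] + p[:i] for i in range(n)]    # all rotations
        if use_reflections:
            variants += [v[::-1] for v in variants]     # plus reflections
        seen.add(min(variants))                         # canonical form
    return len(seen)

# For n = 4: (4-1)! = 6 necklaces up to rotation, (4-1)!/2 = 3 up to
# rotation and reflection.
print(necklace_count(4, False), necklace_count(4, True))  # -> 6 3
```

Because the beads are all distinct, the symmetry group acts freely on the arrangements, which is exactly why dividing n! by the group order gives the two closed forms.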
2. Basic Definitions and Properties

We start with one of the most basic algebraic structures:

Definition 2.1. Group. A group is a set G equipped with an operation ∗ satisfying the properties of associativity, identity, and inverse:
• Associativity: ∀a, b, c ∈ G, (a ∗ b) ∗ c = a ∗ (b ∗ c).
• Identity: ∃e ∈ G | ∀a ∈ G, e ∗ a = a ∗ e = a.
• Inverse: ∀a ∈ G, ∃a^{-1} ∈ G | a ∗ a^{-1} = a^{-1} ∗ a = e.
Given group elements g, h in group G, we denote g ∗ h as gh.

Definition 2.2. Subgroup. A subgroup of a group G is a group under the same operation of G whose elements are all contained in G. If H is a subgroup of G, we write H ≤ G.

One of the most important groups is the symmetric group S_n, whose elements are all permutations of the set {1, ..., n}, and whose operation is composition. Indeed, permutations are associative under composition, there is an identity permutation, and all permutations have an inverse. Every finite set X also has an implied symmetric group Sym(X), which simply involves all permutations of its elements.
Groups can also "act" on sets in the following manner:

Definition 2.3. Group action. Given a group G and a set X, a left group action is a function φ : G × X → X satisfying the properties of left identity and compatibility:
• Left Identity: For the identity element e ∈ G, for all x ∈ X, φ(e, x) = x.
• Left Compatibility: For all g, h ∈ G, for all x ∈ X, φ(gh, x) = φ(g, φ(h, x)).
A right group action is similarly defined as a function φ : X × G → X satisfying right identity and compatibility:
• Right Identity: For the identity element e ∈ G, for all x ∈ X, φ(x, e) = x.
• Right Compatibility: For all g, h ∈ G, for all x ∈ X, φ(x, gh) = φ(φ(x, g), h).

Group actions will be left group actions unless specified otherwise, but the definitions and properties below apply analogously to right group actions as well. Given a group element g in a group G, an element x in a set X, and a group action φ, we denote φ(g, x) as gx if φ is a left group action and φ(x, g) as xg if φ is a right group action.
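As an illustration (not from the paper), the left action of Z_n on length-n tuples by cyclic rotation can be written out and the two axioms checked exhaustively for a small n:

```python
from itertools import product

n = 4
group = list(range(n))                 # Z_4 under addition mod 4

def act(g, x):
    """Left action of Z_n on length-n tuples by cyclic rotation."""
    return x[g % n:] + x[:g % n]

X = list(product("ab", repeat=n))      # all length-4 words over {a, b}

# Left identity: act(0, x) == x for every x.
assert all(act(0, x) == x for x in X)
# Left compatibility: act(g + h, x) == act(g, act(h, x)).
assert all(act((g + h) % n, x) == act(g, act(h, x))
           for g in group for h in group for x in X)
print("group action axioms verified")
```

Rotating by h and then by g is the same as rotating by g + h, which is exactly the compatibility condition for this (abelian) group.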
In any group action, the group also acts in a bijective manner on the set:

Proposition 2.4. Given a group action φ of group G on a set X, the function f_φ : x ↦ φ(g, x) is bijective for all g ∈ G.

Proof. It suffices to find an inverse function. We see that h_φ : x ↦ g^{-1}x is such an inverse, since
f_φ(h_φ(x)) = f_φ(g^{-1}x) = g(g^{-1}x) = (gg^{-1})x = x,
h_φ(f_φ(x)) = h_φ(gx) = g^{-1}(gx) = (g^{-1}g)x = x
by compatibility of the group action. □

Thus, one may alternatively view the group action as associating a permutation p_g ∈ Sym(X) with every g ∈ G, where gx for g ∈ G and x ∈ X is determined by p_g(x), the image of x in p_g. Formally, a group action φ is a homomorphism from G to Sym(X). If we actually consider G = Sym(X) as our group acting on X, then G naturally acts on X; that is, for p_g ∈ G, φ(p_g, x) = p_g(x) is the natural group action associated with G and X.
Definition 2.6. Stabilizer. Given a group action φ of a group G on a set X, the stabilizer of a set element x ∈X is stab(x) = Sxx = {g ∈G|gx = x} .
Using the stabilizer notation, we can similarly define the transformer: Definition 2.7. Transformer. Given a group action φ of a group G on a set X, the transformer of two set elements x, y ∈X is trans(x, y) = Sxy = {g ∈G|gx = y} .
Associated with a group action is the set of orbits, called the quotient: Definition 2.8. Quotient. Given a group action φ of a group G on a set X, the quotient of φ is defined as X/G = {Ox : x ∈X} .
As it turns out, the orbits of a set partition it: Proposition 2.9. For any group action φ of a group G on a set X, X/G is a partition of X.
Proof. It is well-known that equivalence classes of a set partition it. Then it suffices to show that the relation x∼y ⇐ ⇒x, y ∈Ox is an equivalence relation. We check the reflexive, symmetric, and transitive properties: • Reflexive: For all x ∈X, x∼x since ex = x ∈Ox for the identity element e ∈G.
• Symmetric: For all x, y ∈X, x∼y clearly implies y∼x.
• Transitive: For all x, y, z ∈X, if x∼y and y∼z, then x, y, z ∈Ox, so x∼z as well. □ It is also worth noting that the stabilizer of any element x ∈X forms a subgroup of G: Proposition 2.10. For any group action φ of a group G on a set X, Sxx ≤G for all x ∈X.
Proof. Associativity is inherited from the group structure of G.
We check the closure, identity, and inverse properties. For $g_i, g_j \in S_{xx}$ and $x \in X$: • Closed: Clearly $g_i(g_jx) = g_ix = x$. But by the compatibility property of $\varphi$, we must also have $(g_ig_j)x = x$, so $g_ig_j \in S_{xx}$.
• Identity: The identity $e \in G$ is in $S_{xx}$ since $ex = x$.
• Inverse: Consider arbitrary $g_i \in S_{xx}$. Since $g_ix = x$, we also have $g_i^{-1}(g_ix) = g_i^{-1}x$, and $g_i^{-1}(g_ix) = (g_i^{-1}g_i)x = ex = x$ by compatibility of $\varphi$, so $g_i^{-1} \in S_{xx}$. □ 3. Supporting Theorems 3.1. Orbit-Stabilizer Theorem. With our notions of orbits and stabilizers in hand, we prove the fundamental orbit-stabilizer theorem: Theorem 3.1. Orbit-Stabilizer Theorem. Given any group action $\varphi$ of a group $G$ on a set $X$, for all $x \in X$, $|G| = |S_{xx}||O_x|$.
Proof. Let $g \in G$ and $x \in X$ be arbitrary. We first prove the following lemma: Lemma 1. For all $y \in O_x$, $|S_{xx}| = |S_{xy}|$.
Proof. It suffices to exhibit a bijection between $S_{xx}$ and $S_{xy}$. Let $g_{xx} \in S_{xx}$ and $g_{xy} \in S_{xy}$. Clearly $g_{xy}g_{xx}x = g_{xy}x = y$, so $g_{xy}g_{xx} \in S_{xy}$. In addition, by definition of $S_{xy}$ we have $g_{xy}x = y$, so multiplying by $g_{xy}^{-1}$ gives $g_{xy}^{-1}g_{xy}x = g_{xy}^{-1}y$, i.e., $ex = x = g_{xy}^{-1}y$ by compatibility; thus $g_{xy}^{-1} \in S_{yx}$, and so products of the form $g_{xy}^{-1}g'_{xy}$ with $g'_{xy} \in S_{xy}$ lie in $S_{xx}$.
Now fix any $h \in S_{xy}$. Since $hg_{xx} \in S_{xy}$ for every $g_{xx} \in S_{xx}$, we may define $\chi : S_{xx} \to S_{xy} : g_{xx} \mapsto hg_{xx}$. Since $h^{-1}g_{xy} \in S_{xx}$ for every $g_{xy} \in S_{xy}$, we may define $\psi : S_{xy} \to S_{xx} : g_{xy} \mapsto h^{-1}g_{xy}$, which is an inverse for $\chi$: $\chi(\psi(g_{xy})) = \chi(h^{-1}g_{xy}) = hh^{-1}g_{xy} = g_{xy}$ and $\psi(\chi(g_{xx})) = \psi(hg_{xx}) = h^{-1}hg_{xx} = g_{xx}$.
Thus $\chi$ is a bijection and $|S_{xx}| = |S_{xy}|$. □ By Lemma 1, we have $|S_{xy}| = |S_{xx}|$ for all $y \in O_x$.
Now note that the sets $S_{xy}$, $y \in O_x$, must partition $G$; this follows from the definition of the orbit and the fact that the group action is a function.1 Thus $|G| = |S_{xx}||O_x|$, as desired. ■ 3.2. Burnside's Lemma. We can now calculate any one of the order of the group, the size of the stabilizer of an arbitrary set element, or the size of that element's orbit, given the other two quantities. However, one quantity of interest, the number of orbits, is still unaccounted for. The following theorem, now attributed to Cauchy in 1845, determines the number of orbits in terms of the order of the group and the number of elements fixed under the group action: Theorem 3.2. Burnside's Lemma. Given a finite group $G$, a finite set $X$, and a group action $\varphi$ of $G$ on $X$, the number of distinct orbits is
$$|X/G| = \frac{1}{|G|}\sum_{g \in G} |X^g|,$$
where $X^g = \{x \in X \mid gx = x\}$ is the set of elements of $X$ fixed by the action of $g$.
1It is important to note the difference between this statement and Proposition 2.9. The proposition states that the different orbits partition the set, whereas here we state that, given any one of those orbits, every group element acting on $x$ gives exactly one element in $O_x$, and all elements in $O_x$ are covered.
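A numerical illustration of the lemma (a standard toy instance, not taken from the text): let $C_4$ act on the 16 two-colorings of the square's vertices by rotating positions. The fixed-point counts are 16, 2, 4, 2, and $(16 + 2 + 4 + 2)/4 = 6$ matches the number of orbits found by direct enumeration:

```python
from itertools import product

C4 = [(0, 1, 2, 3), (1, 2, 3, 0), (2, 3, 0, 1), (3, 0, 1, 2)]  # e, r, r^2, r^3

def act(g, f):                       # rotate a coloring: (g.f)[g(i)] = f[i]
    y = [None] * 4
    for i in range(4):
        y[g[i]] = f[i]
    return tuple(y)

X = list(product((0, 1), repeat=4))  # all 16 two-colorings of the square

fixed = [sum(1 for f in X if act(g, f) == f) for g in C4]   # |X^g| per g
print(fixed)                          # [16, 2, 4, 2]

burnside = sum(fixed) // len(C4)      # (1/|G|) * sum over g of |X^g|
orbits = {frozenset(act(g, f) for g in C4) for f in X}
assert burnside == len(orbits) == 6
```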
Proof. We first note that
$$\sum_{g \in G} |X^g| = |\{(g, x) \in G \times X : gx = x\}| = \sum_{x \in X} |S_{xx}|,$$
so it suffices to show $|X/G| = \frac{1}{|G|}\sum_{x \in X} |S_{xx}|$.
By the Orbit-Stabilizer Theorem, we have $|S_{xx}| = \frac{|G|}{|O_x|}$, so
$$\frac{1}{|G|}\sum_{x \in X} |S_{xx}| = \frac{1}{|G|}\sum_{x \in X} \frac{|G|}{|O_x|} = \sum_{x \in X} \frac{1}{|O_x|}.$$
Since orbits partition $X$ by Proposition 2.9, we can split $X$ into the disjoint orbits of $X/G$. Thus we can rewrite our sum, where $A$ ranges over the orbits in $X/G$:
$$\sum_{x \in X} \frac{1}{|O_x|} = \sum_{A \in X/G}\sum_{x \in A} \frac{1}{|A|} = \sum_{A \in X/G} 1 = |X/G|,$$
so $|X/G| = \frac{1}{|G|}\sum_{g \in G} |X^g|$, as desired. ■ 4. Polya's Enumeration 4.1. Prerequisites. Polya's enumeration introduces functions $f$ from a finite set $X$ to a new finite set $Y$. Notation-wise, $Y^X$ is the set of all functions $f : X \to Y$, each represented as a set of ordered pairs $(x_i, f(x_i))$ for $x_i \in X$. For instance, if $Y$ is a set of colors, then $f \in Y^X$ is a coloring of the elements of $X$, and $|Y^X/G|$ is the number of distinct colorings of $X$ under some group action of $G$ on $Y^X$.
From this perspective, an action $\varphi$ of $G$ on $X$ induces a natural group action $\varphi'$ of $G$ on $Y^X$, namely
$$\varphi' : (g, f) \mapsto f' = f \circ p_g^{-1} = \{(\varphi(g, x), f(x)) \mid x \in X\}$$
for $f \in Y^X$. Indeed, $\varphi'$ satisfies identity and compatibility:
$$ef = \{(ex, f(x)) \mid x \in X\} = \{(x, f(x)) \mid x \in X\} = f,$$
$$g_1(g_2f) = g_1f' = g_1(\{(g_2x, f(x)) \mid x \in X\}) = \{(g_1(g_2x), f'(g_2x)) \mid x \in X\} = \{((g_1g_2)x, f(x)) \mid x \in X\} = (g_1g_2)f.$$
Throughout this section, we assume an implicit group action $\varphi$ of a group $G$ on a finite set $X$ of size $n$, where $\varphi$ is arbitrary. In addition, we assume $Y$ is another finite set. To state Polya's Enumeration Theorem, we introduce some more machinery: Definition 4.1. Type. Let $p$ be a permutation on $X$. Then the type of $p$ is $\{b_1, \dots, b_n\}$, where $b_i$ is the number of cycles of length $i$ in the cycle decomposition of $p$.
Definition 4.2. Cycle index polynomial. The cycle index polynomial $Z_\varphi$ of the group action $\varphi$ is defined as2
$$Z_\varphi(x_1, \dots, x_n) = \frac{1}{|G|}\sum_{g \in G}\prod_{i=1}^{n} x_i^{b_i(g)},$$
where $b_i(g)$ is the $i$th element of the type of the implied permutation $p_g \in \mathrm{Sym}(X)$.
Definition 4.3. Function equivalence. Two functions $f_1, f_2 \in Y^X$ are said to be equivalent under the action of $G$ ($f_1 \sim_G f_2$) if they are in the same orbit of $\varphi'$, i.e.,
there exists $g \in G$ such that $f_2 = gf_1$.
By the proof of Proposition 2.9, function equivalence is an equivalence relation, so $Y^X$ has equivalence classes under function equivalence: Definition 4.4. Configuration. A configuration is an equivalence class of the equivalence relation $\sim_G$ on $Y^X$.
Analogously, we have that every configuration $c$ is just an orbit of $\varphi'$, and that the set of configurations $C$ is just $Y^X/G$ under $\varphi'$.
To weight our functions differently, we can assign weights to the elements of $Y$: Definition 4.5. Weight. Let $w : Y \to \mathbb{R}$ be a weight assignment to each element of $Y$.3 Then the weight of a function $f \in Y^X$ is defined as $W(f) = \prod_{x \in X} w(f(x))$.
It follows that all functions in a configuration $c$ have the same common weight, which we call the weight of the configuration, $W(c)$: Proposition 4.6. All functions in a configuration have the same common weight.
Proof. Consider arbitrary functions $f_1, f_2 \in Y^X$ in a configuration $c$. Since $f_1 \sim_G f_2$, there exists $g \in G$ such that $f_1(gx) = f_2(x)$ for all $x \in X$.
In addition, from Proposition 2.4 we know every group element acting on a set permutes it, so $W(f) = \prod_{x \in X} w(f(x)) = \prod_{x \in X} w(f(gx))$ for any $g \in G$. Thus
$$W(f_1) = \prod_{x \in X} w(f_1(x)) = \prod_{x \in X} w(f_1(gx)) = \prod_{x \in X} w(f_2(x)) = W(f_2). \square$$
Definition 4.7. Configuration Generating Function (CGF). Let $C$ be the set of all configurations $c$. Then the CGF is defined as $F(C) = \sum_{c \in C} W(c)$.
2The standard notation is $Z_G$, but here we use $Z_\varphi$ to show explicitly that the cycle index polynomial depends not only on the algebraic structure of the group $G$, but also on its induced permutations of $X$ through the group action $\varphi$.
3In general, we may replace $\mathbb{R}$ with any commutative ring.
4.2. Theorem. We are now equipped to tackle Polya's Enumeration Theorem (PET). We cover both the unweighted and weighted versions of the theorem; the first can be proved directly from Burnside's Lemma: Theorem 4.8. Polya's Enumeration Theorem (Unweighted).
Let $G$ be a group and $X, Y$ be finite sets, where $|X| = n$. Then for any group action $\varphi$ of $G$ on $X$, the number of distinct configurations in $Y^X$ is
$$|C| = \frac{1}{|G|}\sum_{g \in G} |Y|^{c(g)},$$
where $c(g)$ denotes the number of cycles in the cycle decomposition of $p_g \in \mathrm{Sym}(X)$, the permutation of $X$ associated with the action of $g$ on $X$.
Proof. Since configurations are orbits of $\varphi'$, we have $|C| = |Y^X/G|$ under $\varphi'$. We apply Burnside's Lemma to the finite set $Y^X$ with group action $\varphi'$, which states that
$$|Y^X/G| = \frac{1}{|G|}\sum_{g \in G} |(Y^X)^g|.$$
It remains to show that $|(Y^X)^g| = |Y|^{c(g)}$. But a function $f \in Y^X$ remains constant under the action of $g$ if and only if all elements of $X$ in each cycle are assigned the same element of $Y$. There are thus $|Y|$ choices of an element of $Y$ for each of the $c(g)$ cycles in the cycle decomposition, and the result follows. ■ We now state the weighted version of PET: Theorem 4.9. Polya's Enumeration Theorem (Weighted). Let $G$ be a group and $X, Y$ be finite sets, where $|X| = n$. Let $w$ be a weight function on $Y$. Then for any group action $\varphi$ of $G$ on $X$, the CGF is given by
$$Z_\varphi\Big(\sum_{y \in Y} w(y),\ \sum_{y \in Y} w(y)^2,\ \dots,\ \sum_{y \in Y} w(y)^n\Big).$$
Proof. We first prove the following lemma: Lemma 1. $|C| = \frac{1}{|G|}\sum_{g \in G} \big|\{f \in Y^X \mid (\forall x \in X)(f(gx) = f(x))\}\big|$.
Proof. Let $\varphi'_R$ be the right group action on $Y^X$ induced by $\varphi$:
$$\varphi'_R : (f, g) \mapsto f'_R = f \circ p_g = \{(x, f(\varphi(g, x))) \mid x \in X\},$$
where $f \in Y^X$ and $g \in G$. The result follows by applying Burnside's Lemma to $Y^X$ under $\varphi'_R$, as in Theorem 4.8. □ We now take $\varphi'_R$ to be our group action on $Y^X$. Let $A(\omega) = \{c \in C \mid W(c) = \omega\}$ be the set of all configurations with common weight $\omega$. $S_{gg} = \{f \in Y^X \mid f = fg\}$ is the set of all functions stabilized by $g$; let $S_{gg}(\omega) = \{f \in Y^X \mid f = fg,\ W(f) = \omega\}$ be the set of all functions stabilized by $g$ with common weight $\omega$. Then by Lemma 1, we have
$$|A(\omega)| = \frac{1}{|G|}\sum_{g \in G} |S_{gg}(\omega)|.$$
We can also group our CGF by weights:
$$\mathrm{CGF} = \sum_{c \in C} W(c) = \sum_{\omega} \omega|A(\omega)| = \frac{1}{|G|}\sum_{\omega}\sum_{g \in G} \omega|S_{gg}(\omega)|$$
by the above equality. Since our sum is finite, we can switch the order of summation:
$$\mathrm{CGF} = \frac{1}{|G|}\sum_{g \in G}\sum_{\omega} \omega|S_{gg}(\omega)| = \frac{1}{|G|}\sum_{g \in G}\sum_{f \in S_{gg}} W(f).$$
$G$ permutes $X$ through the group action, so the corresponding permutation $p_g$ for $g \in G$ has a cycle decomposition $C_1, \dots, C_k$, where $k \le n$. It follows that if $f \in S_{gg}$, then $f(x) = f(gx) = f(g^2x) = \dots$ for all $x \in X$, so $f$ is constant on each cycle $C_i$ of the cycle decomposition. Then we have
$$\sum_{f \in S_{gg}} W(f) = \sum_{f \in S_{gg}}\prod_{x \in X} w(f(x)) = \sum_{f \in S_{gg}}\prod_{i=1}^{k}\prod_{x \in C_i} w(f(x)) = \sum_{f \in S_{gg}}\prod_{i=1}^{k} w(f(x_i))^{|C_i|},$$
where $x_i \in C_i$. Let $|Y| = m$. Since we are summing over all $f \in S_{gg}$, we need to cover all possible assignments of $y \in Y$ to the cycles $C_i$, so our expression becomes
$$\sum_{f \in S_{gg}} W(f) = \prod_{i=1}^{k}\big(w(y_1)^{|C_i|} + \dots + w(y_m)^{|C_i|}\big) = \prod_{i=1}^{k}\sum_{y \in Y} w(y)^{|C_i|},$$
and plugging this into the CGF expression gives us
$$\mathrm{CGF} = \frac{1}{|G|}\sum_{g \in G}\prod_{i=1}^{k}\sum_{y \in Y} w(y)^{|C_i|}.$$
Grouping cycles by length, by definition of the type there are $b_j(g)$ cycles of length $j$, so our expression is
$$\mathrm{CGF} = \frac{1}{|G|}\sum_{g \in G}\prod_{j=1}^{n}\Big(\sum_{y \in Y} w(y)^j\Big)^{b_j(g)} = Z_\varphi\Big(\sum_{y \in Y} w(y), \sum_{y \in Y} w(y)^2, \dots, \sum_{y \in Y} w(y)^n\Big). \blacksquare$$
Note that setting $w(y) = 1$ for all $y \in Y$ makes $W(f) = 1$ for all $f \in Y^X$ and gives a CGF of $Z_\varphi(|Y|, \dots, |Y|)$, so the unweighted version of PET immediately follows.
We summarize the concepts in PET with a concrete example: Example 4.10. Classify the non-isomorphic multigraphs with n = 4 vertices and with up to m = 2 separate edges between two vertices allowed.
Solution. We first clarify our sets, groups and actions.
• Sets: Let $V$ be the set of vertices $\{V_1, \dots, V_n\}$, let $X$ be the set of edges $\{E_{12}, E_{13}, \dots, E_{(n-1)n}\}$ of $K_n$, indicating all possible distinct edges, and let $Y$ be the set $\{y_0, \dots, y_m\}$, indicating the number of possible edges between two vertices; let the weight of $y_i$ be $w(y_i) = w_i$.
• Groups: We have $S_V = \mathrm{Sym}(V)$ associated with $V$; let $S_{X|V}$ be the group of permutations on $X$ induced by $S_V$. Note that this is not the same as $S_X = \mathrm{Sym}(X)$, since $|S_{X|V}| = n!$ but $|S_X| = \binom{n}{2}!$.
• Actions: Let the group $S_V$ act on the set $V$ with the natural group action $\varphi_V$.
Then the group $S_{X|V}$ acts on the set $X$ through an induced action $\varphi$, and acts on the set $Y^X$ through an induced (right) action $\varphi'_R$ as shown in the proof of PET.
[Diagram: $S_V$ acts on $V$ via $\varphi_V$; $S_{X|V}$ acts on $X$ via $\varphi$ and on $Y^X$ via $\varphi'_R$.] Then $Y^X$ represents all possible multigraphs, and $|Y^X/S_{X|V}|$ is the number of multigraphs up to isomorphism. For instance, the multigraph on $V_1, V_2, V_3, V_4$ shown in the figure is represented by the function $f : (E_{12}, E_{13}, E_{14}, E_{23}, E_{24}, E_{34}) \mapsto (0, 2, 1, 1, 0, 0)$.
We first compute the cycle index polynomial $Z_{\varphi'_R}$. To do so, we need to determine the types corresponding to the elements of $S_{X|V}$. $S_V$ acting on $K_4$ leads to the following elements of $S_{X|V}$: • The identity $(V_1)(V_2)(V_3)(V_4) \in S_V$ leads to the corresponding identity $(E_{12})(E_{13})(E_{14})(E_{23})(E_{24})(E_{34}) \in S_{X|V}$ with type $\{6, 0, 0, 0, 0, 0\}$.
This contributes an $x_1^6$ term to the cycle index.
• There are $\binom{4}{2} = 6$ elements of the form $(V_aV_b)(V_c)(V_d) \in S_V$, each leading to the corresponding element $(E_{ab})(E_{cd})(E_{ac}E_{bc})(E_{ad}E_{bd}) \in S_{X|V}$ with type $\{2, 2, 0, 0, 0, 0\}$. Each of the 6 elements contributes an $x_1^2x_2^2$ term to the cycle index.
• There are $\binom{4}{2}/2 = 3$ elements of the form $(V_aV_b)(V_cV_d) \in S_V$, each leading to the corresponding element $(E_{ab})(E_{cd})(E_{ac}E_{bd})(E_{ad}E_{bc}) \in S_{X|V}$ with type $\{2, 2, 0, 0, 0, 0\}$. Each of the 3 elements contributes an $x_1^2x_2^2$ term to the cycle index.
• There are $\binom{4}{3}\cdot 2 = 8$ elements of the form $(V_aV_bV_c)(V_d) \in S_V$ (note that $(123)(4)$ is different from $(132)(4)$), each leading to the corresponding element $(E_{ab}E_{bc}E_{ac})(E_{ad}E_{bd}E_{cd}) \in S_{X|V}$ with type $\{0, 0, 2, 0, 0, 0\}$. Each of the 8 elements contributes an $x_3^2$ term to the cycle index.
• There are $3! = 6$ elements of the form $(V_aV_bV_cV_d) \in S_V$, each leading to the corresponding element $(E_{ab}E_{bc}E_{cd}E_{ad})(E_{ac}E_{bd}) \in S_{X|V}$ with type $\{0, 1, 0, 1, 0, 0\}$.
Each of the 6 elements contributes an $x_2x_4$ term to the cycle index.
Thus, the cycle index is
$$Z_{\varphi'_R}(x_1, x_2, x_3, x_4, x_5, x_6) = \tfrac{1}{24}(x_1^6 + 9x_1^2x_2^2 + 8x_3^2 + 6x_2x_4).$$
Now the weighted version of PET tells us that the CGF of $Y^X/G$ is
$$Z_{\varphi'_R}\Big(\sum_{y \in Y} w(y), \dots, \sum_{y \in Y} w(y)^6\Big) = Z_{\varphi'_R}\big((w_0 + w_1 + w_2), \dots, (w_0^6 + w_1^6 + w_2^6)\big)$$
$$= \tfrac{1}{24}\big((w_0 + w_1 + w_2)^6 + 9(w_0 + w_1 + w_2)^2(w_0^2 + w_1^2 + w_2^2)^2 + 8(w_0^3 + w_1^3 + w_2^3)^2 + 6(w_0^2 + w_1^2 + w_2^2)(w_0^4 + w_1^4 + w_2^4)\big)$$
$$= w_0^6 + w_0^5w_1 + 2w_0^4w_1^2 + 3w_0^3w_1^3 + 2w_0^2w_1^4 + w_0w_1^5 + w_1^6 + w_0^5w_2 + 2w_0^4w_1w_2 + 4w_0^3w_1^2w_2 + 4w_0^2w_1^3w_2 + 2w_0w_1^4w_2 + w_1^5w_2 + 2w_0^4w_2^2 + 4w_0^3w_1w_2^2 + 6w_0^2w_1^2w_2^2 + 4w_0w_1^3w_2^2 + 2w_1^4w_2^2 + 3w_0^3w_2^3 + 4w_0^2w_1w_2^3 + 4w_0w_1^2w_2^3 + 3w_1^3w_2^3 + 2w_0^2w_2^4 + 2w_0w_1w_2^4 + 2w_1^2w_2^4 + w_0w_2^5 + w_1w_2^5 + w_2^6.$$
The CGF then completely classifies the non-isomorphic multigraphs on $n = 4$ vertices with up to $m = 2$ separate edges between two vertices allowed. For instance, the term $4w_0^3w_1^2w_2$ indicates that there are four non-isomorphic multigraphs with 3 absent edges, 2 single edges, and 1 double edge, namely the multigraphs shown below. If we set $w_0 = w_1 = 1$, $w_2 = 0$, we just get the number of non-isomorphic graphs on 4 vertices: $Z_{\varphi'_R}(2, 2, 2, 2, 2, 2) = 11$.
If we set $w_0 = w_1 = w_2 = 1$, we get the total number of non-isomorphic multigraphs: $Z_{\varphi'_R}(3, 3, 3, 3, 3, 3) = 66$.
If we set $w_0 = 0$, $w_1 = 1$, $w_2 = 2$, we get the total number of edges among all non-isomorphic multigraphs: $Z_{\varphi'_R}(1 + 2, 1 + 2^2, \dots, 1 + 2^6) = 163$.
Other quantities of interest may be found by substituting different values for $w_i$. □ 5. Extensions Up until now, we have considered a group action $\varphi$ with group $G$ acting on set $X$, inducing an action $\varphi'$ on the set $Y^X$. However, recall that $\varphi' : (g, f) \mapsto f' = \{(\varphi(g, x), f(x))\}$ permutes both $X$ and $f(X)$ through $\varphi$. More generally, we can permute the set $Y$ independently of $\varphi$; that is, we can consider an action $\psi$ on the set $Y$ with another group $H$. We now define a more general equivalence between functions: Definition 5.1. Generalized function equivalence. Two functions $f_1, f_2 \in Y^X$ are equivalent ($f_1 \sim_{gh} f_2$) if there exist $g \in G$, $h \in H$ such that for all $x \in X$, $f_1(gx) = hf_2(x)$.
For this definition to be compatible with our definitions of configuration, CGF, etc., ∼gh must be an equivalence relation. Proving this is left as an exercise to the reader.
Note that our previous proposition that equivalent functions have the same weight does not necessarily hold with generalized function equivalence; it is a requirement on $\psi$. However, assuming that generalized-equivalent functions have the same weight, we then have results analogous to those of Section 4: Theorem 5.2. Generalized PET (Weighted). Let groups $G, H$ act on the sets $X, Y$ through the group actions $\varphi$ and $\psi$, respectively. Let $w : Y \to \mathbb{R}$ be a weight function for $Y$. Using generalized function equivalence, the CGF is
$$\mathrm{CGF} = \frac{1}{|G||H|}\sum_{(g,h) \in G \times H}\sum_{f \in S_{(g,h)}} W(f),$$
where $S_{(g,h)}$ is the set of functions $f \in Y^X$ stabilized by $(g, h)$, i.e.,
$$S_{(g,h)} = \{f \in Y^X \mid (\forall x \in X)(f(gx) = hf(x))\}.$$
Proof. The proof follows exactly as in the proof of PET (Weighted). Note that function equivalence can also be written in the following form:
$$f_1 \sim_{gh} f_2 \iff (\exists (g, h) \in G \times H)(\forall x \in X)(h^{-1}f_1(gx) = f_2(x)).$$
Then we can define a right group action $\chi$ of the group $G \times H$ on the set $Y^X$ in the following manner: $\chi(f, (g, h)) = h^{-1}fg$, where $g \in G$ acts on $f \in Y^X$ through the induced right group action $\varphi'_R : (f, g) \mapsto f'_{\varphi_R} = \{(x, f(\varphi(g, x)))\}$ and $h \in H$ acts on $f \in Y^X$ through the induced left group action $\psi' : (h, f) \mapsto f'_\psi = \{(x, \psi(h, f(x)))\}$.
We then have that configurations are exactly the orbits of $f \in Y^X$ under $\chi$:
$$f_2 = \{(x, f_2(x))\} = f_1(g, h) = h^{-1}f_1g = h^{-1}(\{(x, f_1(gx))\}) = \{(x, h^{-1}f_1(gx))\} \iff f_1 \sim_{gh} f_2.$$
Taking $\chi$ as our group action on $Y^X$, we again let $A(\omega) = \{c \in C \mid W(c) = \omega\}$ and $S_{(g,h)}(\omega) = \{f \in Y^X \mid f(g, h) = f,\ W(f) = \omega\}$.
By Burnside's Lemma with the group action $\chi$, we have
$$|A(\omega)| = \frac{1}{|G \times H|}\sum_{(g,h) \in G \times H} |S_{(g,h)}(\omega)|.$$
Finally, recall that the CGF can be grouped by weights:
$$\mathrm{CGF} = \sum_{c \in C} W(c) = \sum_{\omega} \omega|A(\omega)| = \frac{1}{|G \times H|}\sum_{\omega}\sum_{(g,h) \in G \times H} \omega|S_{(g,h)}(\omega)| = \frac{1}{|G||H|}\sum_{(g,h) \in G \times H}\sum_{f \in S_{(g,h)}} W(f). \blacksquare$$
5.1. De Bruijn's Theorem. We finally arrive at the problem of counting the number of orbits, which amounts to weighting each function (up to generalized equivalence) with the weight 1. Before, we could simply substitute $w(y) = 1$ to get an answer of $Z_\varphi(|Y|, \dots, |Y|)$. Here, however, we no longer have a concise expression for the answer in terms of the CGF; we are looking for the quantity
$$\frac{1}{|G||H|}\sum_{(g,h) \in G \times H} |\{f \in Y^X : f(g, h) = f\}|,$$
which follows directly from Burnside's Lemma. We turn to de Bruijn's theorem: Theorem 5.3. (de Bruijn.) Let a group $G$ act on a finite set $X$ through a group action $\varphi$, and let a group $H$ act on a finite set $Y$ through a group action $\psi$. Then the number of functions up to generalized function equivalence is
$$Z_\varphi\Big(\frac{\partial}{\partial z_1}, \frac{\partial}{\partial z_2}, \frac{\partial}{\partial z_3}, \dots\Big)\, Z_\psi\Big(e^{\sum_k z_k}, e^{2\sum_k z_{2k}}, e^{3\sum_k z_{3k}}, \dots\Big)\Big|_{\{z_i\} = 0}.$$
Proof. Let $b_i(g)$, $c_j(h)$ be the types of $g \in G$, $h \in H$, respectively, and let $|X| = n$, $|Y| = m$. For ease of notation, set $b_i(g) = 0$ and $c_j(h) = 0$ for all $g \in G$, $h \in H$ whenever $i > n$ or $j > m$, respectively, where $i, j$ range over the positive integers $\mathbb{Z}^+$.
We first prove the following lemmas: Lemma 1. If $f \in S_{(g,h)}$, then $f(x) = y$ implies $f(g^ix) = h^iy$ for all $i$.
Proof. Since $fg = hf$, we have $fg^2 = (fg)g = (hf)g = h(fg) = h(hf) = h^2f$.
The result follows by induction on $i$: $fg^{i-1} = h^{i-1}f \Rightarrow fg^i = (fg^{i-1})g = h^{i-1}fg = h^{i-1}hf = h^if$. □ Lemma 2. If $f \in S_{(g,h)}$, then each cycle $C_X$ in $X$ is mapped by $f$ into a cycle $C_Y$ in $Y$ such that $|C_Y|$ divides $|C_X|$.
Proof. Consider any $f \in S_{(g,h)}$, so that $fg = hf$ for some $g \in G$, $h \in H$. Let $x \in X$ belong to the cycle $C_{g,x}$ of length $j$. Then $C_{g,x} = \{x, gx, \dots, g^{j-1}x\}$, where $g^jx = x$. By Lemma 1, we have $f(g^ix) = h^if(x)$ for all positive integers $i$.
Thus the image of $C_{g,x}$ under $f$ is $f(C_{g,x}) = \{f(x), hf(x), h^2f(x), \dots, h^{j-1}f(x)\}$, and we have $h^jf(x) = f(g^jx) = f(x)$, so the cycle $C_Y$ in $Y$ containing $f(x)$ must have a length that divides $j$. □ Lemma 3. The total number of functions stabilized by $(g, h)$ is
$$\sum_{f \in S_{(g,h)}} W(f) = \prod_i \Big(\sum_{j \mid i} j\,c_j(h)\Big)^{b_i(g)}.$$
(Recall that we are setting $W(f) = 1$.) Proof. We count the number of functions using the condition in Lemma 2. Let $f \in S_{(g,h)}$. For each cycle $C_{g,i}$ in the cycle decomposition of $p_g \in \mathrm{Sym}(X)$, pick an arbitrary element $x_i \in C_{g,i}$. Since there are $c_j(h)$ cycles of length $j$ in the cycle decomposition of $p_h \in \mathrm{Sym}(Y)$, and $C_{g,i}$ can only map into cycles whose length divides its own by Lemma 2, $x_i$ can map to an element $y \in Y$ under $f$ in $\sum_{j \mid i} j\,c_j(h)$ ways.
But note that after the mapping $x_i \mapsto f(x_i)$ has been determined, the rest of the mappings on $C_{g,i}$ are determined as well, by the condition in Lemma 1. Thus the number of functions is the number of ways to choose mappings for each cycle of $p_g$; since $p_g$ has $b_i(g)$ cycles of length $i$, the result follows. □ By Lemma 3, we have
$$\sum_{f \in S_{(g,h)}} W(f) = (c_1(h))^{b_1(g)} \cdot (c_1(h) + 2c_2(h))^{b_2(g)} \cdot (c_1(h) + 3c_3(h))^{b_3(g)} \cdots$$
But note that each term of the form $a^b$ in this product can be written as the partial derivative expression $\frac{\partial^b}{\partial z^b} e^{az}\big|_{z=0}$:
$$\Big(\sum_{j \mid i} j\,c_j(h)\Big)^{b_i(g)} = \frac{\partial^{b_i(g)}}{\partial z_i^{b_i(g)}}\, e^{(\sum_{j \mid i} j c_j(h))z_i}\Big|_{z_i=0}.$$
More generally, $a^b = \frac{\partial^b}{\partial z^b} e^{a(\sum_i z_i)}\big|_{\{z_i\}=0}$, so we can write our expression as
$$\sum_{f \in S_{(g,h)}} W(f) = \Big(\prod_i \big(\tfrac{\partial}{\partial z_i}\big)^{b_i(g)}\Big)\, e^{\sum_i (\sum_{j \mid i} j c_j(h)) z_i}\Big|_{\{z_i\}=0}.$$
We can also express the exponent solely in terms of $j$:
$$\sum_i \Big(\sum_{j \mid i} j\,c_j(h)\Big) z_i = \sum_j j\,c_j(h) \sum_{k=1}^{\infty} z_{kj}.$$
Let $r_i = \frac{\partial}{\partial z_i}$ and $s_j = e^{j\sum_k z_{kj}}$. Then we have
$$\frac{1}{|G|}\sum_{g \in G}\prod_i \big(\tfrac{\partial}{\partial z_i}\big)^{b_i(g)} = Z_\varphi(r_1, r_2, \dots),$$
$$\frac{1}{|H|}\sum_{h \in H} e^{\sum_j j c_j(h)\sum_k z_{kj}} = \frac{1}{|H|}\sum_{h \in H}\prod_j e^{jc_j(h)\sum_k z_{kj}} = \frac{1}{|H|}\sum_{h \in H}\prod_j \big(e^{j\sum_k z_{kj}}\big)^{c_j(h)} = Z_\psi(s_1, s_2, \dots).$$
Finally, by Generalized PET, we have
$$\mathrm{CGF} = \frac{1}{|G||H|}\sum_{(g,h) \in G \times H}\sum_{f \in S_{(g,h)}} W(f) = \Big(\frac{1}{|G|}\sum_{g \in G}\prod_i \big(\tfrac{\partial}{\partial z_i}\big)^{b_i(g)}\Big)\Big(\frac{1}{|H|}\sum_{h \in H} e^{\sum_j jc_j(h)\sum_k z_{kj}}\Big)\Big|_{\{z_i\}=0}$$
$$= Z_\varphi\Big(\frac{\partial}{\partial z_1}, \frac{\partial}{\partial z_2}, \frac{\partial}{\partial z_3}, \dots\Big)\, Z_\psi\Big(e^{\sum_k z_k}, e^{2\sum_k z_{2k}}, e^{3\sum_k z_{3k}}, \dots\Big)\Big|_{\{z_i\}=0}. \blacksquare$$
Here is a simple example that highlights the differences between PET and its generalizations: Example 5.4. Compute the number of distinct colorings of the vertices of a square with 3 colors, under the following equivalencies: • 1. Rotations are not distinct.
• 2. Rotations and reflections are not distinct.
• 3. Rotations, reflections, and color permutations are not distinct.
Solution. The first two cases can be solved using Burnside's Lemma and PET; the third case involves generalized PET and de Bruijn's Theorem. Our sets are $X = \{V_1, V_2, V_3, V_4\}$, the set of vertices, and $Y = \{R, G, B\}$, our three colors.
Let $r = (V_1V_2V_3V_4)$ be a clockwise 90-degree rotation and $s = (V_1V_2)(V_3V_4)$ be a reflection across the vertical axis. The corresponding groups for each case are • 1. $G = C_4 = \{e, r, r^2, r^3\}$, the cyclic group on the vertices, • 2. $G = D_4 = \{e, r, r^2, r^3, s, sr, sr^2, sr^3\}$, the dihedral group on the vertices, • 3. $G = D_4$ and $H = S_3$, the symmetric group on the colors, where the group actions are the natural group actions.
For the sake of demonstration, we solve cases 1 and 2 in two different ways. By Burnside's Lemma, the answer to the first case is
$$|Y^X/G| = \frac{1}{|G|}\sum_{g \in G} |(Y^X)^g| = \tfrac{1}{4}(3^4 + 3^1 + 3^2 + 3^1) = 24.$$
For the second case, we compute the cycle index for the natural group action $\varphi_2 : D_4 \times X \to X$:
$$Z_{\varphi_2}(x_1, x_2, x_3, x_4) = \frac{1}{|D_4|}\sum_{g \in D_4}\prod_{i=1}^{4} x_i^{b_i(g)} = \tfrac{1}{8}(x_1^4 + x_4 + x_2^2 + x_4 + x_2^2 + x_1^2x_2 + x_2^2 + x_1^2x_2) = \tfrac{1}{8}(x_1^4 + 2x_1^2x_2 + 3x_2^2 + 2x_4).$$
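All three answers in this example (24 for case 1, 21 for case 2, and the value 6 obtained for case 3 below via de Bruijn's theorem) can be cross-checked by direct orbit enumeration over the $3^4 = 81$ colorings:

```python
from itertools import permutations, product

def compose(p, q):
    return tuple(p[q[i]] for i in range(4))

r, s = (1, 2, 3, 0), (1, 0, 3, 2)                 # rotation and reflection
C4 = [(0, 1, 2, 3), r, compose(r, r), compose(r, compose(r, r))]
D4 = C4 + [compose(s, g) for g in C4]             # the 8 elements of D4

def act(g, f):                                     # permute vertex positions
    y = [None] * 4
    for i in range(4):
        y[g[i]] = f[i]
    return tuple(y)

colorings = list(product(range(3), repeat=4))
case1 = {frozenset(act(g, f) for g in C4) for f in colorings}
case2 = {frozenset(act(g, f) for g in D4) for f in colorings}
# case 3: also permute the colors themselves (the S3 action on Y)
case3 = {frozenset(tuple(h[c] for c in act(g, f))
                   for g in D4 for h in permutations(range(3)))
         for f in colorings}
assert (len(case1), len(case2), len(case3)) == (24, 21, 6)
```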
The three coloring-pairs distinct under rotation but not under rotation and reflection are shown below. Finally, for the last case, we use de Bruijn's theorem. We have
$$Z_\varphi(x_1, x_2, x_3, x_4) = \tfrac{1}{4}(x_1^4 + x_2^2 + 2x_4), \qquad Z_\psi(x_1, x_2, x_3) = \tfrac{1}{6}(x_1^3 + 3x_1x_2 + 2x_3),$$
so the number of distinct colorings is equal to
$$Z_\varphi\Big(\frac{\partial}{\partial z_1}, \frac{\partial}{\partial z_2}, \frac{\partial}{\partial z_3}, \dots\Big)\, Z_\psi\Big(e^{\sum_i z_i}, e^{2\sum_i z_{2i}}, e^{3\sum_i z_{3i}}, \dots\Big)\Big|_{\{z_i\}=0}$$
$$= \tfrac{1}{24}\Big(\frac{\partial^4}{\partial z_1^4} + \frac{\partial^2}{\partial z_2^2} + 2\frac{\partial}{\partial z_4}\Big)\big(e^{3(z_1+z_2+z_3+z_4)} + 3e^{z_1+z_2+z_3+z_4}e^{2z_2+2z_4} + 2e^{3z_3}\big)\Big|_{\{z_i\}=0}$$
$$= \tfrac{1}{24}\big(3^4 + 3 + 0 + 3^2 + (3)(3^2) + 0 + (2)(3) + (2)(3)(3) + 0\big) = 6. \square$$
The 6 distinct colorings are shown below, where, for example, a rotation, a reflection, and the color permutation $(RGB)$ each carry a coloring to an equivalent one. [Figure omitted.] 6. Further Work Multiple generalizations of Polya's Enumeration Theorem exist, most coming from the work of de Bruijn, that are not fully addressed in this paper. As more generalized theorems of PET are developed, the most meaningful work on the subject may well come from clever substitutions or special cases of these theorems.
More modern applications of the theorem can be found in the fields of analytic combinatorics and random permutation statistics, among others.
Acknowledgments. It is a pleasure to thank my mentor Nat Mayer, who patiently guided me through unfamiliar concepts and sat through oft-disorganized presentations. Great thanks also go to Professor Laci Babai for a rigorous and engaging apprentice class, and to Peter May for hosting this REU and giving me the opportunity to meet many bright minds.
References
G. Polya. Kombinatorische Anzahlbestimmungen für Gruppen, Graphen und chemische Verbindungen. Acta Math., 68, 1937, pages 145-254.
Matias von Bell. Polya's Enumeration Theorem and its Applications. 2015, pages 17-23.
N. G. de Bruijn. Polya's Theory of Counting. In: Applied Combinatorial Mathematics. Wiley, New York, 1964, pages 144-184.
Optimization of Quadratic Forms and t-norm Forms on Interval Domain and Computational Complexity. Milan Hladík, Charles University, Faculty of Mathematics and Physics, Department of Applied Mathematics, Malostranské nám. 25, 11800 Prague, Czech Republic, milan.hladik@matfyz.cz. Michal Černý, Department of Econometrics, University of Economics, Prague, W. Churchill's sq. 4, 13067 Prague, Czech Republic, cernym@vse.cz. Vladik Kreinovich, Department of Computer Science, University of Texas at El Paso, 500 W. University, El Paso, Texas 79968, USA, vladik@utep.edu. Abstract—We consider the problem of maximization of a quadratic form over a box. We identify the NP-hardness boundary for sparse quadratic forms: the problem is polynomially solvable for O(log n) nonzero entries, but it is NP-hard if the number of nonzero entries is of the order n^ε for an arbitrarily small ε > 0. Then we inspect further polynomially solvable cases. We define a sunflower graph over the quadratic form and study efficiently solvable cases according to the shape of this graph (e.g. the case with small sunflower leaves or the case with a restricted number of negative entries). Finally, we define a generalized quadratic form, called a t-norm form, where the quadratic terms are replaced by t-norms. We prove that the optimization problem remains NP-hard with an arbitrary Lipschitz continuous t-norm.
I. INTRODUCTION In this paper we elaborate in more detail on the problems outlined in . In that work we studied the processing of imprecise data from multiple sources which interact together. The interaction among inputs x_1, . . . , x_n is formalized by a function f(x_1, . . . , x_n) which cannot be written in separable form as ∑_{i=1}^n f_i(x_i) (for some functions f_i). An example is a quadratic form x^T Ax with nonzero off-diagonal entries, which is studied in this paper. Then we consider a more general form of pairwise interactions: formally, we replace the bilinear terms x_i x_j (i ≠ j) from x^T Ax by so-called t-norms (which can be regarded as generalizations of the "AND" logical connective).
The general question is: when the inputs x_1, . . . , x_n are imprecise but are known to lie in given compact intervals 𝐱_1 = [x̲_1, x̄_1], . . . , 𝐱_n = [x̲_n, x̄_n], and we are given a function f : R^n → R, can we find tight bounds for f(x_1, . . . , x_n)? Formally, denoting 𝐱 = 𝐱_1 × · · · × 𝐱_n, (1) the problem reduces to the computation of sup{f(x) | x ∈ 𝐱} and inf{f(x) | x ∈ 𝐱}. Here, the expression "to find the bounds" refers to computational complexity: we are to determine under which conditions the bounds can be evaluated in polynomial time and when the computation is NP-hard.
Recall that, in general, finding the tight bounds for an arbitrary function f need not even be recursive (algorithmically solvable). This is why the various classes of functions of interest in data processing need to be studied separately.
In this text, bold symbols—such as 𝐱—refer to n-dimensional intervals (boxes) of the form (1). The real n-vectors of lower and upper bounds are denoted by x̲ and x̄, respectively, and we write 𝐱 = [x̲, x̄] for short.
Basics of computational complexity and interval computation can be found, e.g., in .
II. QUADRATIC FORMS ON INTERVAL DOMAIN Consider a quadratic form f : R^n → R, f(x) = b^T x + x^T Ax = ∑_{i=1}^n b_i x_i + ∑_{i,j=1}^n a_ij x_i x_j, restricted to a given interval domain 𝐱 = [x̲, x̄]. It is known that computing the range of f on 𝐱, i.e., f̲ := min f(x) subject to x ∈ 𝐱 and f̄ := max f(x) subject to x ∈ 𝐱, is an NP-hard problem. This is true even for A positive definite, in which case computing f̲ is polynomial whereas computing f̄ is NP-hard.
Assumption. For simplicity of exposition, we focus only on the computation of f̄ in the sequel. We will also assume for the remainder of the paper that f(x) is convex (i.e., that A is positive semidefinite).
In this section it is sufficient to fix x = [0, 1]n.
Proposition 1. The problem of computing f remains NP-hard even when the number of off-diagonal non-zeros in A is bounded by O(n1/k).
Proof: Let f(x) = bT x + xT Ax be a quadratic function on Rn. Consider the quadratic form g(x, y) := f(x) + m ∑ i=1 (2yi −1)2.
Then the maximum of g(x, y) on [0, 1]n+m is the same as the maximum of f(x), shifted by the amount of m. That is, g = f + m.
(2) Putting m := n2k we get that the quadratic form g(x, y) of dimension d = n + m has O(d1/k), non-zero off-diagonal en-tries in the corresponding matrix. Since f(x) was an arbitrary quadratic form, computing the range of g(x, y) is NP-hard, too.
Corollary (to the proof). Under the assumption of Proposi-tion 1, it is NP-hard to approximate f with a given (arbitrarily large) absolute error. This follows from the fact that the maximum of a quadratic form is known to be NP-hard to approximate with an absolute error , and (2) does not change the absolute error.
On the other hand, approximating f with a relative error can be done efficiently via semidefinite ralaxation even for a nonconvex f(x), see .
Proposition 2. The problem becomes polynomial if the num-ber of off-diagonal non-zeros in A is bounded by O(log n).
Proof: Denote I := {i = 1, . . . , n | ∃j ̸= i : aij ̸= 0}.
Now, f(x) can be expressed as f(x) = ∑ i̸∈I (bixi + aiix2 i ) + ∑ i,j∈I aijxixj, and its maximum as f = max x∈x ∑ i̸∈I (bixi + aiix2 i ) +max x∈x ∑ i,j∈I aijxixj .
(3) The first term in (3) is computed easily as max x∈x ∑ i̸∈I (bixi + aiix2 i ) = ∑ i̸∈I max xi∈xi(bixi + aiix2 i ), and maximizing a univariate quadratic function is a trivial task.
The second term in (3) requires maximizing a quadratic function on an interval domain in dimension O(log n). Hence, by brute force, we find the maximum in exponential time w.r.t. O(log n), which is polynomial w.r.t. n.
IV. POLYNOMIAL CASES BASED ON SUNFLOWER GRAPHS Without loss of generality assume that A is upper triangular and that x = [0, 1]n. Consider the graph G = (V, E), where V = {x1, . . . , xn} and {xi, xj} is an edge of G if and only if aij ̸= 0. So we are in fact maximizing a quadratic form f(x) on the graph G (see Chapter 10 of ).
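The proof of Proposition 2 can be turned into a small solver. The sketch below is illustrative code, not from the paper; it assumes A is symmetric positive semidefinite, so f is convex and the maximum over the box is attained at a vertex. It maximizes the separable coordinates independently and brute-forces the 2^|I| vertex assignments of the coupled coordinates, folding the linear terms b_i, i ∈ I, into the coupled part:

```python
from itertools import product

def max_sparse_quadratic(b, A, lo=0.0, hi=1.0):
    """Maximize b^T x + x^T A x over the box [lo, hi]^n for PSD A,
    brute-forcing only the coupled index set I."""
    n = len(b)
    I = [i for i in range(n) if any(j != i and A[i][j] != 0 for j in range(n))]
    # separable coordinates: each term b_i x_i + a_ii x_i^2 is convex in x_i,
    # hence maximized at an endpoint of the interval
    sep = sum(max(b[i] * t + A[i][i] * t * t for t in (lo, hi))
              for i in range(n) if i not in I)
    best = float("-inf")
    for vals in product((lo, hi), repeat=len(I)):   # the 2^|I| vertices
        x = dict(zip(I, vals))
        v = sum(b[i] * x[i] + A[i][i] * x[i] ** 2 for i in I)
        v += sum(A[i][j] * x[i] * x[j] for i in I for j in I if i != j)
        best = max(best, v)
    return sep + best

# check against full vertex enumeration on a small PSD instance
b = [-1, 0, 2]
A = [[1, 1, 0], [1, 2, 0], [0, 0, 1]]
full = max(sum(b[i] * x[i] for i in range(3)) +
           sum(A[i][j] * x[i] * x[j] for i in range(3) for j in range(3))
           for x in product((0, 1), repeat=3))
assert max_sparse_quadratic(b, A) == full == 7
```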
Let D ⊆V be a vertex cut such that the graph G′ = (V \ D, E′) after removing the cut D consists of connected components of vertex size O(log n). Suppose further that the size of the cut is |D| = O(log n). (A graph with such cut is sometimes called sunflower graph, see Figure 1.) Then the cut is associated with |D| variables. Hence we can process all 0/1-assignments of these variables. There are at most 2|D| such assignments. For every such assignments, we resolve the problem by brute force in each of the components. Therefore, the overall time complexity is 2|D|(T1 + T2 + · · · + Tk) ≤2O(log n)(2O(log n) + 2O(log n) + · · · + 2O(log n)) ≤poly(n), where Ti is time complexity of maximization over ith com-ponent, k ≤n is the number of components, and poly(n) is a polynomial in n.
x1 x3 x8 Component 1 Component 2 Component 3 Component 4 · · · · · · cut Fig. 1.
A sunflower graph with a cut of size O(log n) and components of size O(log n).
A problem. How to find a suitable cut? This is an open challenging question. Notice that minimum cut splitting graph G into two components can be found efficiently by means of linear programming. Nevertheless, incorporating restrictions on size of the components seems a hard problem.
Special graphs The above reasoning can be extended even to the compo-nents larger than O(log n), but having a special structure. So, we will now discuss a few of special graphs possessing a suitable structure. For the sake of simplicity of exposition, we will illustrate it on the graph G = (V, E).
Few negative coefficients: Provided that all coefficients are nonnegative, that is, bi ≥ 0 and aij ≥ 0 for all i, j ∈{1, . . . , n}, then the optimal solution is simply x = (1, . . . , 1)T . If it is not the case, we can still effectively compute an optimal solution as long as the number of negative coefficients bi and aij is small. Define a cut D to contain all variables incident with negative coefficients: D := {xi; bi < 0 or aij < 0 for some j}.
If $|D| = O(\log n)$, then we are done: applying the cut we obtain a subproblem with nonnegative coefficients, and the 0/1-variables in $D$ can be tested by brute force in time $2^{|D|} = \mathrm{poly}(n)$.
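The construction of $D$ can be written out in a few lines. This is our illustrative reading, not the paper's code: so that the residual subproblem has only nonnegative coefficients, we place *both* endpoints of every negative edge into the cut, which is a slightly conservative variant of the definition above.

```python
# Sketch (our reading): collect all variables incident with a negative
# coefficient, adding both endpoints of every negative edge a_ij < 0.
def negative_cut(b, A):
    n = len(b)
    D = set()
    for i in range(n):
        if b[i] < 0:
            D.add(i)
        for j in range(n):
            if A[i][j] < 0:
                D.add(i)   # both endpoints, so the remaining
                D.add(j)   # subproblem has nonnegative coefficients
    return D
```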
Other special graphs. Assume now, for simplicity, that in the remainder of this section the domain of variables is $\mathbf{x} = [-1, 1]^n$, and that $b_i = 0$ for every $i$.
Trees: If $G$ is a tree, then maximizing the quadratic function on $G$ is easy: take an arbitrary vertex $x_i \in G$ as a root, and distinguish the two assignments $x_i = \pm 1$. For each assignment, the values of the remaining variables associated with $G$ are determined. Sorting the vertices according to some tree search algorithm, we put $x_j := \mathrm{sgn}(a_{ij} x_i)$ when $x_i$ precedes $x_j$.
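The tree procedure above can be sketched as follows (our illustration, not from the paper; the edge-weight encoding and BFS traversal are our choices, and nonzero edge weights are assumed):

```python
# Sketch: root the tree, try x_root = +1 and -1, and propagate
# x_j := sgn(a_ij * x_i) outward along the tree edges (BFS order).
from collections import deque

def maximize_on_tree(n, weights, root=0):
    # weights: dict {(i, j): a_ij} over the tree edges, with a_ij != 0
    adj = {i: [] for i in range(n)}
    for (i, j), a in weights.items():
        adj[i].append((j, a))
        adj[j].append((i, a))
    best = None
    for root_val in (1, -1):
        x = [0] * n
        x[root] = root_val
        queue = deque([root])
        while queue:
            i = queue.popleft()
            for j, a in adj[i]:
                if x[j] == 0:                       # not assigned yet
                    x[j] = 1 if a * x[i] > 0 else -1
                    queue.append(j)
        value = sum(a * x[i] * x[j] for (i, j), a in weights.items())
        if best is None or value > best[0]:
            best = (value, x)
    return best
```

On the path $x_0 - x_1 - x_2$ with $a_{01} = 2$ and $a_{12} = -3$, the sketch makes every edge term positive and returns the value $5$.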
Planar graphs: The above class extends to planar graphs with $O(\log n)$ faces, because by removing $O(\log n)$ vertices we obtain a tree.
Bipartite graphs: Complete bipartite graphs $K_{m,n}$ and their subgraphs can also be processed efficiently provided $a_{ij} \le 0$ for $i \neq j$. The variables associated with the first set of vertices are set to $x_i := 1$, and the others to $x_i := -1$.
If the assumption $a_{ij} \le 0$ for $i \neq j$ is not satisfied, the bipartite graph can still be processed efficiently as long as $m = O(\log n)$, in which case the vertex cut $D$ is the smaller of the two vertex sets.
Remark. For related results see .
V. T-NORM FORMS

Recall that a t-norm is a function $T : [0,1]^2 \to [0,1]$ satisfying:
- commutativity: $T(a, b) = T(b, a)$,
- monotonicity: $a \le c,\ b \le d \Rightarrow T(a, b) \le T(c, d)$,
- associativity: $T(a, T(b, c)) = T(T(a, b), c)$,
- 1 is the identity element: $T(a, 1) = a$.
From the definition, we immediately have
$$T(0,0) = T(0,1) = T(1,0) = 0, \qquad T(1,1) = 1. \tag{4}$$
Given t-norms $T_{ij}$, the question is how easy the evaluation of the t-norm form
$$f_T(\mathbf{x}) = \sum_{i=1}^{n} (b_i x_i + a_{ii} x_i^2) + \sum_{i \neq j} a_{ij} T_{ij}(x_i, x_j)$$
is on a given interval domain $\mathbf{x}$.
Proposition 3. Maximizing a t-norm form on $\mathbf{x} = [0,1]^n$ is NP-hard even if we choose and fix for every $T_{ij}$ a Lipschitz continuous t-norm, that is, $|T_{ij}(x) - T_{ij}(x')| \le \alpha \cdot \|x - x'\|$, where $\alpha$ is a Lipschitz constant and $\|\cdot\|$ is any vector norm.
Proof: Let $f(x) = b^T x + x^T A x$ be a convex quadratic function on $\mathbb{R}^n$. Consider the t-norm form
$$f_T(x) := b^T x + \sum_{i=1}^{n} a_{ii} x_i^2 + \beta \sum_{i=1}^{n} (2x_i - 1)^2 + \sum_{i \neq j} a_{ij} T_{ij}(x_i, x_j).$$
By the Lipschitz continuity assumption, for sufficiently large $\beta$ the function $f_T(x)$ is convex. Thus the maximum of $f_T(x)$ is attained at a vertex of $\mathbf{x}$. However, on the set of vertices $x \in \{0,1\}^n$ we have $f_T(x) = f(x) + \beta n$, since $T_{ij}(x_i, x_j) = x_i x_j$ and $(2x_i - 1)^2 = 1$ there. This means that the maximum of $f_T(x)$ equals the maximum of $f(x)$ shifted by the amount $\beta n$. Since maximizing $f(x)$ on $\mathbf{x}$ is NP-hard, maximizing t-norm forms on $\mathbf{x}$ is NP-hard, too.
Remark 1. It is interesting that the proof does not require all the axioms of a t-norm; essentially, we used only (4). Thus the statement holds true for any Lipschitz continuous functions $T_{ij}$ satisfying (4).
Notice that the commonly used t-norms satisfy the assumption of the proposition:
- product t-norm $T(x, y) = xy$ (in this case, the t-norm form is a quadratic form),
- minimum t-norm $T(x, y) = \min\{x, y\}$,
- Łukasiewicz t-norm $T(x, y) = \max\{0,\ x + y - 1\}$,
- nilpotent minimum t-norm
$$T(x, y) = \begin{cases} \min\{x, y\} & \text{if } x + y > 1, \\ 0 & \text{otherwise}, \end{cases}$$
- Hamacher product t-norm
$$T(x, y) = \begin{cases} 0 & \text{if } x = y = 0, \\ \dfrac{xy}{x + y - xy} & \text{otherwise}. \end{cases}$$
On the other hand, the drastic t-norm, defined as
$$T(x, y) = \begin{cases} \min\{x, y\} & \text{if } \max\{x, y\} = 1, \\ 0 & \text{otherwise}, \end{cases}$$
does not satisfy the assumption.
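For concreteness, the t-norms listed above (including the drastic one) can be written out directly. This is a sketch following the standard textbook definitions; the function names are ours. All of them satisfy the boundary conditions (4).

```python
# The standard t-norms discussed above, written out directly (a sketch).
def t_product(x, y):
    return x * y

def t_min(x, y):
    return min(x, y)

def t_lukasiewicz(x, y):
    return max(0.0, x + y - 1.0)

def t_nilpotent_min(x, y):
    return min(x, y) if x + y > 1 else 0.0

def t_hamacher(x, y):
    if x == y == 0:
        return 0.0
    return (x * y) / (x + y - x * y)

def t_drastic(x, y):
    # Not Lipschitz continuous: jumps as max{x, y} crosses 1.
    return min(x, y) if max(x, y) == 1 else 0.0
```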
ACKNOWLEDGMENTS

M. Hladík was supported by the Czech Science Foundation Grant P403-18-04735S. M. Černý was supported by the Czech Science Foundation Grant P402/12/G097. V. Kreinovich was supported in part by the National Science Foundation Grant HRD-1242122 (Cyber-ShARE Center of Excellence).
REFERENCES

M. Černý and M. Hladík. The complexity of computation and approximation of the t-ratio over one-dimensional interval data. Computational Statistics & Data Analysis 80, 2014, 26–43.

J.-A. Ferrez, K. Fukuda, and T. Liebling. Solving the fixed rank convex quadratic maximization in binary variables by a parallel zonotope construction algorithm. European Journal of Operational Research 166, 2005, 35–50.

B. Gärtner and J. Matoušek. Approximation Algorithms and Semidefinite Programming. Springer, 2012.

M. Hladík, M. Černý, and V. Kreinovich. "When Is Data Processing under Interval and Fuzzy Uncertainty Feasible: What If Few Inputs Interact? Does Feasibility Depend on How We Describe Interaction?" These Proceedings.

Yu. Nesterov. Semidefinite relaxation and nonconvex quadratic optimization. Optimization Methods & Software 9 (1–3), 1998, 141–160.

V. Kreinovich, A. Lakayev, J. Rohn, and P. Kahl. Computational Complexity and Feasibility of Data Processing and Interval Computations. Kluwer, Dordrecht, 1998.

G.-D. Yu. Quadratic forms on graphs with applications to minimizing the least eigenvalue of signless Laplacian over bicyclic graphs. Electronic Journal of Linear Algebra 27, 2014, article 13.
187818 | https://math.stackexchange.com/questions/4623654/define-if-two-planes-are-parallel
define if two planes are parallel
Modified 2 years, 8 months ago · Viewed 152 times
I am trying to develop a function to determine if 2 planes are parallel or not.
I have their equations :
P1: $a_1 x + b_1 y + c_1 z + d_1 = 0$
P2: $a_2 x + b_2 y + c_2 z + d_2 = 0$
I check :
a1=0 & a2=0 => parallel
b1=0 & b2=0 => parallel
c1=0 & c2=0 => parallel
else, if a1/a2 = b1/b2 = c1/c2 => parallel
I have some big doubts about it, because I read in courses that none of the parameters may be equal to 0. But if I have a plane parallel to the $(x,y)$-plane, at $z=10$, the equation may be:

$z-10 = 0$, so $a = b = 0$, right?

How could I check parallelism, taking into account each particular case?
After thinking, I guess it may be something like that :
(a1=0 & a2=0) => no need to check equality
(b1=0 & b2=0) => no need to check equality
(c1=0 & c2=0) => no need to check equality
if (a1=0 & a2!=0) or (a1!=0 & a2=0) => not parallel
if (b1=0 & b2!=0) or (b1!=0 & b2=0) => not parallel
if (c1=0 & c2!=0) or (c1!=0 & c2=0) => not parallel
Where != means different
Then check if a1/a2 = b1/b2 = c1/c2 (if I need to check)
Aside : How can I insert here some mathematical signs, I cannot find?
geometry
3d
plane-geometry
asked Jan 22, 2023 at 15:22 by Siegfried.V; edited Jan 22, 2023 at 16:32 by whoisit
To avoid worrying about zero, you could check whether $(a_1 b_2 = a_2 b_1)$ and $(a_1 c_2 = a_2 c_1)$ and $(b_1 c_2 = b_2 c_1)$. – Empy2, Jan 22, 2023 at 15:27

In fact, I just found a better course that gives the definition. I can have $a_1=0$, or $b_1=0$, or $c_1=0$, but never all three equal to 0, right? – Siegfried.V, Jan 22, 2023 at 15:32

Yes, check for that too. – Empy2, Jan 22, 2023 at 16:25

You should check MathJax, where we write math within $...$, for typing out math signs. Math Meta StackExchange has a MathJax tutorial; search for it. – whoisit, Jan 22, 2023 at 16:27

@Empy2 thanks, that helped. If you write it as an answer I could accept it, or I may accept jp boucheron's answer, as it is well explained too; I don't know which would be the correct way. – Siegfried.V, Jan 22, 2023 at 16:29
1 Answer
If a plane $P$ has $ax+by+cz+d=0$ as cartesian equation with $(a,b,c)\neq(0,0,0)$, then the vector $(a,b,c)^T$ is orthogonal to $P$.

Now two planes $P_1$ and $P_2$ are parallel iff $(a_1,b_1,c_1)^T$ and $(a_2,b_2,c_2)^T$ are proportional, i.e., linearly dependent, which is in turn equivalent to
$$\begin{vmatrix} a_1 & a_2 \\ b_1 & b_2 \end{vmatrix} = \begin{vmatrix} a_1 & a_2 \\ c_1 & c_2 \end{vmatrix} = \begin{vmatrix} b_1 & b_2 \\ c_1 & c_2 \end{vmatrix} = 0.$$
That is: $a_1 b_2 - a_2 b_1 = a_1 c_2 - a_2 c_1 = b_1 c_2 - b_2 c_1 = 0$.
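For the asker's original goal of writing a function, this criterion translates directly into code. The sketch below is ours, not from the thread; the function name is made up, and for floating-point coefficients we compare against a small tolerance rather than testing exact equality.

```python
# Two planes a_k x + b_k y + c_k z + d_k = 0 are parallel iff their normal
# vectors (a, b, c) are proportional, i.e. all three 2x2 determinants vanish.
def planes_parallel(p1, p2, eps=1e-12):
    a1, b1, c1, _ = p1           # d plays no role in parallelism
    a2, b2, c2, _ = p2
    return (abs(a1 * b2 - a2 * b1) < eps and
            abs(a1 * c2 - a2 * c1) < eps and
            abs(b1 * c2 - b2 * c1) < eps)
```

This handles the zero-coefficient cases automatically: the planes $z = 10$ and $2z = -5$, for instance, are reported as parallel without any special-casing.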
answered Jan 22, 2023 at 16:15 by jp boucheron; edited Jan 22, 2023 at 16:27 by Apass.Jack
You wrote $(a,b,c)\neq(0,0,0)$. Could you explain why? As I understood it, $a, b, c$ are not necessarily 0? – Siegfried.V, Jan 23, 2023 at 17:05

This is because the cartesian equation $0x + 0y + 0z + d = 0$ doesn't define a plane. According to the value of $d$, this equation defines $\emptyset$ (for $d \neq 0$) or $\Bbb{R}^3$ (for $d = 0$). To define a plane, at least one of the coefficients must be $\neq 0$. Put otherwise: the affine plane $P$ has for direction a vector space $\vec P$, which is defined as the kernel of a non-zero linear form $\ell$, and $P$ has $\ell(x,y,z)+d=0$ for equation (for some $d\in\Bbb{R}$). Such a linear form is of the type $\ell(u,v,w)=au+bv+cw$ for some $(a,b,c)\neq(0,0,0)$. – jp boucheron, Jan 23, 2023 at 17:23

Sorry, I misunderstood the comment. I believed none of them could be equal to 0. Thanks for your answer, it helped a lot :). – Siegfried.V, Jan 23, 2023 at 18:16
187819 | https://www.sea-astronomia.es/glosario/medio-interplanetario | medio interplanetario | Sociedad española de astronomía
medio interplanetario (interplanetary medium)

The material and magnetic fields that fill the Solar System in the space between the celestial bodies (planets, asteroids, and comets). This medium consists of interplanetary dust, cosmic rays, plasma from the solar wind, and the combination of the magnetic fields of the Sun and the planets. The temperature of the interplanetary medium is approximately 100,000 K, and its density is very low, on the order of five particles per cubic centimeter in the vicinity of the Earth. This density decreases with increasing distance from the Sun (it is inversely proportional to the square of the distance). The reflection of sunlight off the dust particles of the interplanetary medium produces a diffuse glow known as the zodiacal light and the antisolar glow (Gegenschein).
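The inverse-square falloff stated above can be illustrated with a two-line calculation. The function name and the reference value of five particles per cm³ at 1 au are taken from the entry; everything else is our illustrative sketch.

```python
# Particle density of the interplanetary medium: ~5 per cm^3 at Earth's
# distance (1 au), falling off as 1/r^2 with distance r from the Sun.
def density(r_au, n_earth=5.0):
    return n_earth / r_au ** 2

print(density(1.0))   # 5.0 particles/cm^3 at Earth
print(density(5.2))   # much lower at Jupiter's distance
```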
187820 | https://deutsch.heute-lernen.de/grammatik/der-die-das/verstaerker | Is it der, die, or das Verstärker?
Is it der, die, or das Verstärker?
The grammatical gender (genus) of Verstärker is masculine. The article in the nominative is therefore der. Germans thus say: der Verstärker.
Der, die, or das? What rules are there?

You now know that Verstärker is masculine. But what about all the other German words? Do you have to learn the article for every single word?

Unfortunately, there really are not many universal rules to help with this problem, because grammatical gender in German is often very illogical. The natural gender rarely helps. Girls, for example, are by definition female, but the gender of the word Mädchen is nevertheless neuter: it is das Mädchen.

So you can only be completely sure about the gender of a word if you have memorized it.
As mentioned, grammatical gender in German is not logical. But fortunately there are a few rules after all:
Masculine words

Words used to talk about time and date are very often (but unfortunately not always) masculine: for example the times of day, the days of the week, the months, and also the seasons. The four cardinal directions are masculine. Vocabulary describing the weather also often takes the article der: der Wind, der Schnee, der Regen. And although the most famous German drink, das Bier, is neuter, most other alcoholic drinks are masculine. In addition, certain word endings are found almost exclusively in masculine words, for example: -ig, -ich, -ling, or -en.
Feminine words

Der Apfel is the most important exception, but almost all other kinds of fruit (die Kiwi, die Orange, die Traube) are feminine. Names of ships and motorcycles are also always used with die in German. There are also some suffixes indicating that a word is very likely feminine: -in, -keit, -heit, -ung, -schaft, or -ei.
Neuter words

The suffixes -ment, -tum, and -chen are typical of words that take the article das. Germans also use das when talking about colors (das Rot) or types of beer.
What is the correct indefinite article for Verstärker?

If you say der Verstärker, it means your conversation partner should know exactly which one you are talking about. If you want to speak about something less concretely, you use the indefinite articles ein and eine instead. There are only these two forms: eine for feminine nouns and ein for masculine and neuter words. Verstärker is masculine, so the correct form is: ein Verstärker.
What is the plural of Verstärker?

der Verstärker => die Verstärker
ein Verstärker => viele Verstärker

In the plural, the German articles are much less problematic. The definite article in the plural is always die, regardless of whether the word takes der, die, or das in the singular. There are no indefinite articles in the German plural, so you simply use the plural form without an article.
The plural forms of the noun itself are a bit more complicated: when forming the plural, there are also some exceptions you have to learn.
© ZEIT SPRACHEN GmbH
187821 | https://mathoverflow.net/questions/109582/seeking-a-geometric-proof-of-a-generalized-alternating-series-convergence | mathematics education - Seeking a Geometric Proof of a Generalized Alternating Series' Convergence - MathOverflow
Seeking a Geometric Proof of a Generalized Alternating Series' Convergence
Asked 12 years, 11 months ago · Modified 12 years, 9 months ago · Viewed 2k times · Score: 12
Let $z \in \mathbb{C} \setminus \{1\}$ with $|z| = 1$. We consider the following infinite series, which necessarily converges:

$$S(z) := \sum_{n=1}^{\infty} \frac{z^n}{n}$$

Note that $S(-1)$ is the alternating harmonic series.

A straightforward application of the Dirichlet Convergence Test proves any such series converges, but I feel this is a bit like killing a fly with a sledgehammer. (I realize some of you might not think this test is a sledgehammer; I wonder also whether this series is a fly.) In any event, I'm wondering whether there is a way to prove convergence using only a simple geometric argument (with some basic analysis).

For example, we can think of $S(i)$ as taking steps in the plane of length $1/n$, but turning ninety degrees after each one. Then the partial sums correspond to a nested sequence of squares, where the area of the squares is clearly converging to $0$. Thus, an argument using the Nested Interval Property (or really its corresponding 2D version) indicates that the series converges.

More generally, I'd think that because we are taking steps of size decreasing to $0$ and rotating by the same amount after each step, there should be a general geometric argument for why $S(z)$ will converge. Ideally, I'd like to have a proof that could be made accessible to early Calculus students, even if not every step is presented in fully rigorous form.

For clarity's sake, I will directly state my question: How does one prove $S(z)$ converges using a simple geometric argument that relies at most on basic analysis (e.g., makes no appeal to stronger theorems from Complex Analysis)?
mathematics-education
mg.metric-geometry
cv.complex-variables
real-analysis
asked Oct 14, 2012 at 5:46 by Benjamin Dickman; edited Dec 29, 2012 at 5:48
Your question is interesting, but since Abel summation is the discrete-time analogue of integration by parts, in a calculus course where both series and integrals are discussed I would rather use it to have one more link between the two theories. – Benoît Kloeckner, Oct 14, 2012 at 7:48

I'm not suggesting a geometric proof for this particular problem be presented in lieu of introducing other topics; instead, I agree with your (apparent) fondness for linking different theories, so I enjoy when proofs can be carried out in multiple ways. That said, for better or worse, often a student's first Calculus course that covers Leibniz's Alternating Series Test will not get into Abel summation, Dirichlet series, etc. – Benjamin Dickman, Oct 14, 2012 at 8:02
4 Answers

Answer (score 17):
Here is what I think you are looking for:
First, note that if you take steps of fixed length $\ell$, and keep rotating by an angle of $\theta \neq 0$, then you will stay inside a circle of radius $r = \frac{\ell}{2}\cos(\theta/2)^{-1}$. In fact, the steps will all land on the circle, and you can calculate its center as the point at distance $r$ from one of the steps, at an angle which bisects the angle $\theta$. Now if at a certain step you decide to change the length of your steps to $\ell' < \ell$, the new circle will have radius $r' = \frac{\ell'}{2}\cos(\theta/2)^{-1}$, with center the point at distance $r'$ away from the current step in the same direction as the old center. From this description it's clear that the new circle is contained in the old circle. Now you can apply the nested interval property.
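As a quick numerical sanity check (ours, not part of the answer), the partial sums of $S(z)$ do settle down for $z$ on the unit circle with $z \neq 1$; for $z = i$ they approach $-\ln(1-i) = -\tfrac{1}{2}\ln 2 + i\pi/4$:

```python
# Numerical sanity check: partial sums of S(z) = sum z^n / n cluster
# for |z| = 1, z != 1 (here z = i, the example from the question).
import cmath

def partial_sum(z, N):
    return sum(z ** n / n for n in range(1, N + 1))

z = 1j
tail = abs(partial_sum(z, 4000) - partial_sum(z, 2000))
print(tail)                  # small: successive partial sums cluster
print(partial_sum(z, 4000))  # close to -ln(2)/2 + i*pi/4
```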
answered Oct 14, 2012 at 8:17 by Kevin Ventullo
Answer (score 5):
This proof is not really much more than a rewriting of Dirichlet's test, but here goes:
$$(z-1)\sum_{n=1}^{N} \frac{z^n}{n} = \sum_{n=2}^{N+1} \frac{z^n}{n-1} - \sum_{n=1}^{N} \frac{z^n}{n} = \frac{z^{N+1}}{N} - z + \sum_{n=2}^{N} \frac{z^n}{n(n-1)},$$

which converges as $N \to \infty$.
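The identity is easy to check numerically (our check, not part of the answer; the choice $z = e^{2i}$ is an arbitrary point on the unit circle):

```python
# Numerical check of the summation-by-parts identity for |z| = 1, z != 1.
import cmath

def lhs(z, N):
    return (z - 1) * sum(z ** n / n for n in range(1, N + 1))

def rhs(z, N):
    return (z ** (N + 1) / N - z +
            sum(z ** n / (n * (n - 1)) for n in range(2, N + 1)))

z = cmath.exp(2j)
print(max(abs(lhs(z, N) - rhs(z, N)) for N in (1, 2, 5, 50)))  # ~0
```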
answered Oct 14, 2012 at 7:15 by Robert Israel
Isn't it more like convergence acceleration? – ACL, Mar 21, 2018 at 8:40
Answer (score 3):
Here's an idea. Group the series into blocks
$$\sum_{n=dk}^{d(k+1)-1} \frac{z^n}{n}$$

where $d$ is fixed and large enough that the complex numbers $1, z, z^2, \dots, z^{d-1}$ are approximately uniformly distributed across the unit circle. The terms in each block should then be approximately uniformly distributed in phase across the unit circle, and in particular each term should be pairable with a term approximately the negative of it, the two of them canceling to order $O\!\left(\frac{1}{n^2}\right)$.

But making this precise seems to me to require more effort than proving that Dirichlet's convergence test works.
answered Oct 14, 2012 at 6:55 by Qiaochu Yuan
Answer (score 1):
Think of an alternating series
$$\sum_{n \ge 0} (-1)^n a_n, \qquad a_n > a_{n+1} > 0,$$
as describing the motion of a hesitating person, who starts at $a_0$, and goes alternately one step backward, one step forward. If we denote by $S_n$ his location after $n$ steps,
$$S_n = \sum_{k=0}^{n} (-1)^k a_k,$$
then we observe that
$$S_0 > S_1 > 0,$$
and we deduce inductively
$$S_{2n} > S_{2n+2} > S_{2n+1} > 0.$$
Thus during the travel, the person is always on the right-hand side of the origin, and every two steps he gets closer to the origin. Mathematically, this means that the subsequence $S_{2n}$ is decreasing and positive, and thus it has a limit. The odd subsequence $S_{2n+1}$ converges to the same limit since
$$S_{2n} - S_{2n+1} = a_{2n+1} \to 0.$$
We can visualize this process by considering the continuous function $S : [0, \infty) \to \mathbb{R}$ which is linear on each of the intervals $[n, n+1]$, $n$ a nonnegative integer, and such that $S(n) = S_n$. It can be visualized as a zig-zag, down-up-down-up, that stays above the $x$-axis, while the peaks are getting shorter and shorter.
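The monotone even/odd behavior of the partial sums is easy to see numerically. The snippet below is our illustration with the hypothetical choice $a_n = 1/(n+1)$ (the alternating harmonic series), which satisfies $a_n > a_{n+1} > 0$:

```python
# Even partial sums decrease, odd ones increase, and the gap a_{2n+1}
# between consecutive partial sums tends to 0 (here a_n = 1/(n+1)).
def S(n, a):
    return sum((-1) ** k * a(k) for k in range(n + 1))

a = lambda k: 1.0 / (k + 1)
evens = [S(2 * n, a) for n in range(50)]
odds = [S(2 * n + 1, a) for n in range(50)]
```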
answered Oct 14, 2012 at 12:54 by Liviu Nicolaescu; edited Oct 14, 2012 at 13:08
User contributions licensed under CC BY-SA.
187822 | https://brainly.com/question/24222111 | [FREE] b. Compare the similar triangle proof from question 3 with the inscribed square proof. How are they - brainly.com
Mathematics
b. Compare the similar triangle proof from question 3 with the inscribed square proof. How are they different? Which method was easier for you to understand? (1 point)
Asked by Bayabbay5180 • 07/12/2021
Community Answer (helped 23,216 people, rated 4.5):
i might be wrong but this is what i put
Explanation
In question 3 we were comparing three triangles, whereas now the triangles are used to find the area of a square instead of proving that they are the same.
Answered by 1036059 (51 answers, 23.2K people helped)
Expert-Verified Answer (rated 4.4):
The proof of similar triangles focuses on the equality of angles and proportionality of sides, while the inscribed square proof emphasizes area relations and geometric fitting. Understanding may vary, with some students finding triangle properties more straightforward. Both proofs highlight distinct geometric principles at play.
Explanation
To compare the proof of similar triangles with the proof of an inscribed square, we can follow a structured approach:
Understanding the Concepts:
Similar Triangles: These triangles have the same shape, which means that their corresponding angles are equal and the ratios of their corresponding sides are proportional. The proof often involves showing that these conditions hold true using various methods such as side ratios or angle comparisons.
Inscribed Square: This involves positioning a square within a larger geometric shape (like a circle or triangle) and proving its dimensions in relation to that shape. For example, finding the area of the square relative to the circle's radius.
Differences in Proofs:
The proof for similar triangles centers around angle and side length comparisons, while the inscribed square proof is more about area relations and how the square fits within another shape.
In the case of similar triangles, one might use the properties of triangles and angles, while for an inscribed square, calculations might involve using formulas related to area and geometry (like the area of a square being A = side²).
Personal Understanding:
Some may find the similar triangle proof easier to understand because it mainly deals with fundamental properties of triangles that are more straightforward. In contrast, the inscribed square proof might involve more steps or require visualizing the geometric arrangement more clearly, which could complicate understanding.
To summarize, comparing these two proofs highlights the differences in geometry's approach to proving relationships between shapes. The triangle proof focuses on angle and side ratios while the square proof involves area and fitting shapes, which may determine ease of understanding based on a student's proficiency with spatial reasoning.
Examples & Evidence
For instance, similar triangles can be proven by showing that two triangles with one angle equal will lead to the remaining angles being equal and the sides being in proportion. An inscribed square could be demonstrated by showing how the square's area relates to the enclosing circle's radius.
This comparison rests on established geometric principles: similar triangles rely on the angle and side relationships of Euclidean geometry, while the inscribed square's area relations follow from the basic area formulas for squares and the shapes that enclose them, such as circles.
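As a quick numeric illustration of the two ideas above (a sketch with made-up values, not part of the original answer), both the proportionality of similar triangles and the area of a square inscribed in a circle can be checked directly:

```python
import math

# Similar triangles: scaling a triangle by k preserves the side ratios
# (and hence the angles), which is the heart of the similarity proof.
a = (3.0, 4.0, 5.0)                 # sides of a right triangle
k = 2.5                             # arbitrary scale factor
b = tuple(k * s for s in a)         # a similar triangle
ratios = [bi / ai for bi, ai in zip(b, a)]
assert all(math.isclose(x, k) for x in ratios)

# Inscribed square: in a circle of radius r the square's diagonal equals
# the diameter 2r, so side = 2r / sqrt(2) and area = 2 * r**2.
r = 5.0
side = 2 * r / math.sqrt(2)
area = side ** 2
assert math.isclose(area, 2 * r ** 2)
print(round(area, 6))
```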
|
187823 | https://www.youtube.com/watch?v=p0ABG7y9jXw | The Orthocenter of a Triangle Using Vectors
Mike, the Mathematician
Description
Posted: 31 Mar 2023
We use vectors to find the representation of the orthocenter of a triangle.
#mikethemathematician, #mikedabkowski, #profdabkowski, #calc3
Transcript:
Hello, students. In this video we'll see how to find the orthocenter of a triangle using vectors. If we're given a triangle ABC, we choose the circumcenter to be the origin for our vector configuration; that is, the center of the circumcircle is the origin. With this choice of origin, the length of each position vector equals the circumradius R: |A| = |B| = |C| = R.
Now we claim that the orthocenter, which is the intersection of the altitudes, is given by H = A + B + C. To show this is true, it suffices to show that the vector from A to H is perpendicular to BC, with the analogous relationships for the other vertices. So let's look at (H − A) · (C − B). If H = A + B + C, then H − A is just B + C, and we dot that with C − B: (B + C) · (C − B) = B · C − |B|² + |C|² − C · B. Now B · C and C · B are the same, so those cancel, and |B| and |C| are both equal to R, so this is R² − R², which is zero. That says the vector from A to H is perpendicular to CB; in other words, the point H is on the altitude that goes from A to the side BC. Similarly, (H − B) · (C − A) = 0 and (H − C) · (B − A) = 0. So H lies on every altitude; all the altitudes are concurrent at this point H. Therefore the orthocenter of a triangle, in vector form, is A + B + C if we choose the circumcenter to be the origin. Thank you very much. |
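The claim in the video can be checked numerically (a quick sketch with arbitrary points on a circumcircle, not part of the original video):

```python
import math

# Place three arbitrary points on a circumcircle of radius R centered at
# the origin, form H = A + B + C, and verify H lies on all three altitudes.
R = 2.0
A, B, C = [(R * math.cos(t), R * math.sin(t)) for t in (0.3, 1.9, 4.1)]

def sub(p, q): return (p[0] - q[0], p[1] - q[1])
def dot(p, q): return p[0] * q[0] + p[1] * q[1]

H = (A[0] + B[0] + C[0], A[1] + B[1] + C[1])

# (H - A) . (C - B) = (B + C) . (C - B) = |C|^2 - |B|^2 = 0, and likewise
# for the altitudes from B and C.
for P, Q, S in [(A, B, C), (B, A, C), (C, A, B)]:
    assert abs(dot(sub(H, P), sub(S, Q))) < 1e-9
print("H = A + B + C lies on all three altitudes")
```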
187824 | https://en.wikipedia.org/wiki/Template:Logical_connectives | Template:Logical connectives - Wikipedia
From Wikipedia, the free encyclopedia
| Common logical connectives |
| Tautology / True ⊤ |
| Alternative denial (NAND gate) ⊼ | Converse implication ⇐ | Implication (IMPLY gate) ⇒ | Disjunction (OR gate) ∨ |
| Negation (NOT gate) ¬ | Exclusive or (XOR gate) ⊕ | Biconditional (XNOR gate) ⊙ | Statement (Digital buffer) |
| Joint denial (NOR gate) ⊽ | Nonimplication (NIMPLY gate) ⇏ | Converse nonimplication ⇍ | Conjunction (AND gate) ∧ |
| Contradiction / False ⊥ |
| Philosophy portal |
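For readers who want to experiment with these connectives, the binary gates named in the table can be written as plain truth functions (an illustrative sketch, not part of the template; the unary NOT gate and the constants ⊤/⊥ are omitted):

```python
# Truth-functional definitions for the binary connectives in the table
# above, keyed by their gate names. Each maps a pair of booleans to a boolean.
connectives = {
    "AND":    lambda p, q: p and q,
    "OR":     lambda p, q: p or q,
    "NAND":   lambda p, q: not (p and q),
    "NOR":    lambda p, q: not (p or q),
    "XOR":    lambda p, q: p != q,
    "XNOR":   lambda p, q: p == q,
    "IMPLY":  lambda p, q: (not p) or q,
    "NIMPLY": lambda p, q: p and not q,
}

# Print each connective's truth table over the four input pairs.
for name, f in sorted(connectives.items()):
    row = [f(p, q) for p in (True, False) for q in (True, False)]
    print(f"{name:6s}", row)
```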
Template documentation
This template's initial visibility currently defaults to autocollapse, meaning that if there is another collapsible item on the page (a navbox, sidebar, or table with the collapsible attribute), it is hidden apart from its title bar; if not, it is fully visible.
To change this template's initial visibility, the |state= parameter may be used:
{{Logical connectives|state=collapsed}} will show the template collapsed, i.e. hidden apart from its title bar.
{{Logical connectives|state=expanded}} will show the template expanded, i.e. fully visible.
See also
| show v t e Logic templates |
| Types of logic | Classical logic Mathematical logic Metalogic Non-classical logic Philosophical logic |
| Other templates | Common logical symbols Logical connectives Logical paradoxes Logical truth Set theory Transformation rules Normal forms Diagrams |
| Logic navbar |
|
187825 | https://www.lcgdbzz.org/custom/news/id/7440 | How to choose an appropriate statistical analysis method based on variable type?
ISSN 1001-5256 (Print)
ISSN 2097-3497 (Online)
CN 22-1108/R
How to choose an appropriate statistical analysis method based on variable type?
Published: 2015-12-23
Source: MedSci
Grasp two key points

1. Stay firmly focused on the business question. What problem do you want to solve? This is the core and the direction.
2. Understand the data thoroughly. Three essentials must be grasped: the variables, the data analysis methods, and the correspondence between variables and methods.

Understanding the variables

Understanding the data analysis methods

Part one

Part two
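The idea of matching variable types to analysis methods can be sketched as a small lookup table (an illustrative toy, not from the article; `suggest_method` is a hypothetical helper, and real method choice also depends on study design, sample size, and distributional assumptions):

```python
# Map (outcome type, predictor type) to a commonly used analysis.
def suggest_method(outcome, predictor):
    table = {
        ("continuous", "binary"):      "two-sample t test (or Mann-Whitney U)",
        ("continuous", "categorical"): "one-way ANOVA (or Kruskal-Wallis)",
        ("continuous", "continuous"):  "correlation / linear regression",
        ("binary", "categorical"):     "chi-square test (or Fisher's exact test)",
        ("binary", "continuous"):      "logistic regression",
    }
    return table.get((outcome, predictor), "consult a statistician")

print(suggest_method("continuous", "binary"))   # two-sample t test (or Mann-Whitney U)
```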
|
187826 | https://en.wikipedia.org/wiki/Cycloalkane | Cycloalkane
Saturated alicyclic hydrocarbon
In organic chemistry, the cycloalkanes (also called naphthenes, but distinct from naphthalene) are the monocyclic saturated hydrocarbons. In other words, a cycloalkane consists only of hydrogen and carbon atoms arranged in a structure containing a single ring (possibly with side chains), and all of the carbon-carbon bonds are single. The larger cycloalkanes, with more than 20 carbon atoms, are typically called cycloparaffins. All cycloalkanes are isomers of alkenes.
The cycloalkanes without side chains (also known as monocycloalkanes) are classified as small (cyclopropane and cyclobutane), common (cyclopentane, cyclohexane, and cycloheptane), medium (cyclooctane through cyclotridecane), and large (all the rest).
Besides this standard definition by the International Union of Pure and Applied Chemistry (IUPAC), in some authors' usage the term cycloalkane includes also those saturated hydrocarbons that are polycyclic.
In any case, the general form of the chemical formula for cycloalkanes is CnH2(n+1−r), where n is the number of carbon atoms and r is the number of rings. The simpler form for cycloalkanes with only one ring is CnH2n.
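The general formula can be checked with a few lines of code (a minimal sketch; `saturated_formula` is an illustrative name, not a standard function):

```python
# A saturated hydrocarbon with n carbon atoms and r rings has the
# formula C_n H_{2(n+1-r)}; r = 1 recovers the cycloalkane formula CnH2n.
def saturated_formula(n, r):
    return f"C{n}H{2 * (n + 1 - r)}"

print(saturated_formula(6, 1))   # C6H12  (cyclohexane, CnH2n with n = 6)
print(saturated_formula(3, 0))   # C3H8   (propane, no rings)
print(saturated_formula(7, 2))   # C7H12  (bicyclic, e.g. norbornane)
```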
Examples
Cyclopropane
Cyclobutane
Cyclopentane
Cyclohexane
Cycloheptane
Cyclooctane
Nomenclature
See also: IUPAC nomenclature
See also: Alkanes
Unsubstituted cycloalkanes that contain a single ring in their molecular structure are typically named by adding the prefix "cyclo" to the name of the corresponding linear alkane with the same number of carbon atoms in its chain as the cycloalkane has in its ring. For example, the name of cyclopropane (C3H6) containing a three-membered ring is derived from propane (C3H8) - an alkane having three carbon atoms in the main chain.
The naming of polycyclic alkanes, such as bicyclic alkanes and spiro alkanes, is more complex, with the base name indicating the number of carbons in the ring system, a prefix indicating the number of rings ("bicyclo-" or "spiro-"), and a numeric prefix before that indicating the number of carbons in each part of each ring, exclusive of junctions. For instance, a bicyclooctane that consists of a six-membered ring and a four-membered ring, which share two adjacent carbon atoms that form a shared edge, is [4.2.0]-bicyclooctane. The part of the six-membered ring exclusive of the shared edge has 4 carbons; the part of the four-membered ring exclusive of the shared edge has 2 carbons; and the edge itself, exclusive of the two vertices that define it, has 0 carbons.
There is more than one convention (method or nomenclature) for the naming of compounds, which can be confusing for those who are just learning, and inconvenient for those who are well-rehearsed in the older ways. For beginners, it is best to learn IUPAC nomenclature from a source that is up to date, because this system is constantly being revised. In the above example [4.2.0]-bicyclooctane would be written bicyclo[4.2.0]octane to fit the conventions for IUPAC naming. It then has room for an additional numerical prefix if there is the need to include details of other attachments to the molecule such as chlorine or a methyl group. Another convention for the naming of compounds is the common name, which is a shorter name and it gives less information about the compound. An example of a common name is terpineol, the name of which can tell us only that it is an alcohol (because the suffix "-ol" is in the name) and it should then have a hydroxyl group (–OH) attached to it.
The IUPAC naming system for organic compounds can be demonstrated using the example provided in the adjacent image. The base name of the compound, indicating the total number of carbons in both rings (including the shared edge), is listed first. For instance, "heptane" denotes "hepta-", which refers to the seven carbons, and "-ane", indicating single bonding between carbons. Next, the numerical prefix is added in front of the base name, representing the number of carbons in each ring (excluding the shared carbons) and the number of carbons present in the bridge between the rings. In this example, there are two rings with two carbons each and a single bridge with one carbon, excluding the carbons shared by both the rings. The prefix consists of three numbers that are arranged in descending order, separated by dots: [2.2.1]. Before the numerical prefix is another prefix indicating the number of rings (e.g., "bicyclo-"). Thus, the name is bicyclo[2.2.1]heptane.
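The von Baeyer bookkeeping described above can be sketched programmatically (a hypothetical helper covering only simple unsubstituted bicycloalkanes; real IUPAC naming involves many more rules):

```python
# The three bridge sizes are listed in descending order; the total
# carbon count adds the two shared bridgehead atoms.
BASE_NAME = {7: "heptane", 8: "octane", 9: "nonane", 10: "decane"}

def bicyclo_name(bridges):
    a, b, c = sorted(bridges, reverse=True)
    total = a + b + c + 2               # + 2 bridgehead carbons
    return f"bicyclo[{a}.{b}.{c}]{BASE_NAME[total]}"

print(bicyclo_name((4, 2, 0)))   # bicyclo[4.2.0]octane
print(bicyclo_name((2, 2, 1)))   # bicyclo[2.2.1]heptane (norbornane)
```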
Cycloalkanes as a group are also known as naphthenes, a term mainly used in the petroleum industry.
Properties
Containing only C–C and C–H bonds, cycloalkanes are similar to alkanes in their general properties. Cycloalkanes with high angle strain, such as cyclopropane, have weaker C–C bonds, promoting ring-opening reactions.
Cycloalkanes have higher boiling points, melting points, and densities than alkanes. This is due to stronger London forces because the ring shape allows for a larger area of contact.
Even-numbered cycloalkanes tend to have higher melting points than odd-numbered cycloalkanes. While variations in enthalpy and orientational entropy of the solid-phase crystal structure largely explain the odd-even alternation found in alkane melting points, conformational entropy of the solid and liquid phases has a large impact on cycloalkane melting points.: 98 For example, cycloundecane has a large number of accessible conformers near room temperature, giving it a low melting point,: 22 whereas cyclododecane adopts a single lowest-energy conformation: 25 (up to chirality) in both the liquid phase and solid phase (above 199 K),: 32–34 and has a high melting point. These trends are broken from cyclopentadecane onwards, due to increasing variation in solid-phase conformational mobility, though higher cycloalkanes continue to display large odd-even fluctuations in their plastic crystal transition temperatures.: 99–100 Sharp plastic crystal phase transitions disappear from C48H96 onwards, and sufficiently high molecular weight cycloalkanes, such as C288H576, have similar crystal lattices and melting points to high-density polyethylene.: 27, 37
Table of cycloalkanes
| Alkane | Formula | Melting point [°C] | Boiling point [°C] | Liquid density [g·cm−3] (at 20 °C) |
| --- | --- | --- | --- | --- |
| Cyclopropane | C3H6 | −127.6: 27 | −33 | |
| Cyclobutane | C4H8 | −90.7: 27 | 12.5 | 0.720 |
| Cyclopentane | C5H10 | −93.4: 27 | 49.2 | 0.751 |
| Cyclohexane | C6H12 | 6.7: 27 | 80.7 | 0.778 |
| Cycloheptane | C7H14 | −8.0: 27 | 118.4 | 0.811 |
| Cyclooctane | C8H16 | 14.5 | 151.2 | 0.840 |
| Cyclononane | C9H18 | 10–11: 262 | 178: 265 | 0.8534 |
| Cyclodecane | C10H20 | 9.9: 27 | 201 | 0.871 |
| Cycloundecane | C11H22 | −7.2: 1613 | 179–181: 142 | 0.81: 142 |
| Cyclododecane | C12H24 | 60.4 | 244.0 | 0.855 (extrapolated) |
| Cyclotridecane | C13H26 | 24.5: 27 | | 0.861: 143[a] |
| Cyclotetradecane | C14H28 | 56.2: 27 | | |
| Cyclopentadecane | C15H30 | 63.5: 27 | | |
| Cyclohexadecane | C16H32 | 60.6: 27 | 319 | |
| Cycloheptadecane | C17H34 | 64–67 | | |
| Cyclooctadecane | C18H36 | 74–75 | | |
| Cyclononadecane | C19H38 | 79–82 | | |
| Cycloeicosane | C20H40 | 49.9: 27[b] | | |
Conformations and ring strain
Main article: Strain (chemistry)
In cycloalkanes, the carbon atoms are sp3 hybridized, which would imply an ideal tetrahedral bond angle of 109° 28′ whenever possible. Owing to evident geometrical reasons, rings with 3, 4, and (to a small extent) also 5 atoms can only afford narrower angles; the consequent deviation from the ideal tetrahedral bond angles causes an increase in potential energy and an overall destabilizing effect. Eclipsing of hydrogen atoms is an important destabilizing effect, as well. The strain energy of a cycloalkane is the increase in energy caused by the compound's geometry, and is calculated by comparing the experimental standard enthalpy change of combustion of the cycloalkane with the value calculated using average bond energies. Molecular mechanics calculations are well suited to identify the many conformations occurring particularly in medium rings.: 16–23
Ring strain is highest for cyclopropane, in which the carbon atoms form a triangle and therefore have 60° C–C–C bond angles. There are also three pairs of eclipsed hydrogens. The ring strain is calculated to be around 120 kJ mol−1.
Cyclobutane has the carbon atoms in a puckered square with approximately 90° bond angles; "puckering" reduces the eclipsing interactions between hydrogen atoms. Its ring strain is therefore slightly less, at around 110 kJ mol−1.
For a theoretical planar cyclopentane the C–C–C bond angles would be 108°, very close to the measure of the tetrahedral angle. Actual cyclopentane molecules are puckered, but this changes only the bond angles slightly so that angle strain is relatively small. The eclipsing interactions are also reduced, leaving a ring strain of about 25 kJ mol−1.
In cyclohexane the ring strain and eclipsing interactions are negligible because the puckering of the ring allows ideal tetrahedral bond angles to be achieved. In the most stable chair form of cyclohexane, axial hydrogens on adjacent carbon atoms are pointed in opposite directions, virtually eliminating eclipsing strain.
In medium-sized rings (7 to 13 carbon atoms) conformations in which the angle strain is minimised create transannular strain or Pitzer strain. At these ring sizes, one or more of these sources of strain must be present, resulting in an increase in strain energy, which peaks at 9 carbons (around 50 kJ mol−1). After that, strain energy slowly decreases until 12 carbon atoms, where it drops significantly; at 14, another significant drop occurs and the strain is on a level comparable with 10 kJ mol−1. At larger ring sizes there is little or no strain since there are many accessible conformations corresponding to a diamond lattice.
Ring strain can be considerably higher in bicyclic systems. For example, bicyclobutane, C4H6, is noted for being one of the most strained compounds that is isolatable on a large scale; its strain energy is estimated at 267 kJ mol−1.
Reactions
Cycloalkanes, referred to as naphthenes, are a major substrate for the catalytic reforming process. In the presence of a catalyst and at temperatures of about 495 to 525 °C, naphthenes undergo dehydrogenation to give aromatic derivatives:
The process provides a way to produce high octane gasoline.
In another major industrial process, cyclohexanol is produced by the oxidation of cyclohexane in air, typically using cobalt catalysts:
2 C6H12 + O2 → 2 C6H11OH
This process coforms cyclohexanone, and this mixture ("KA oil" for ketone-alcohol oil) is the main feedstock for the production of adipic acid, used to make nylon.
The small cycloalkanes – in particular, cyclopropane – have a lower stability due to Baeyer strain and ring strain. They react similarly to alkenes, though they do not react in electrophilic addition, but in nucleophilic aliphatic substitution. These reactions are ring-opening reactions or ring-cleavage reactions of alkyl cycloalkanes.
Preparation
Many simple cycloalkanes are obtained from petroleum. They can be produced by hydrogenation of unsaturated, even aromatic precursors.
Numerous methods exist for preparing cycloalkanes by ring-closing reactions of difunctional precursors. For example, diesters are cyclized in the Dieckmann condensation:
The acyloin condensation can be deployed similarly.
For larger rings (macrocyclizations) more elaborate methods are required since intramolecular ring closure competes with intermolecular reactions.
The Diels-Alder reaction, a [4+2] cycloaddition, provides a route to cyclohexenes:
The corresponding [2+2] cycloaddition reactions, which usually require photochemical activation, result in cyclobutanes.
See also
Prelog strain
Conformational isomerism
Alkane
Cycloalkene
Cycloalkyne
Notes
^ Liquid-phase density measured for a sample melting at 18 °C. The same sample was measured to have a solid density of 0.864 g·cm−3 at 16 °C.
^ Dale et al. (1963) instead gives a melting point of 61–62 °C.
References
^ IUPAC, Compendium of Chemical Terminology, 5th ed. (the "Gold Book") (2025). Online version: (2014) "Cycloalkane". doi:10.1351/goldbook.C01497
^ Jump up to: a b "Alkanes & Cycloalkanes". www2.chemistry.msu.edu. Retrieved 2022-02-20.
^ "Blue Book". iupac.qmul.ac.uk. Retrieved 2023-04-01.
^ Fahim, MA, et al. (2010). Fundamentals of Petroleum Refining. p. 14. doi:10.1016/C2009-0-16348-1. ISBN 978-0-444-52785-1.
^ Boese, Roland; Weiss, Hans-Christoph; Bläser, Dieter (1999-04-01). "The Melting Point Alternation in the Short-Chain n-Alkanes: Single-Crystal X-Ray Analyses of Propane at 30 K and of n-Butane to n-Nonane at 90 K". Angewandte Chemie International Edition. 38 (7): 988–992. doi:10.1002/(SICI)1521-3773(19990401)38:7<988::AID-ANIE988>3.0.CO;2-0. ISSN 1433-7851.
^ Brown, RJC; Brown, RFC (June 2000). "Melting Point and Molecular Symmetry". Journal of Chemical Education. 77 (6): 724. doi:10.1021/ed077p724.
^ Jump up to: a b Dale, Johannes (1963). "15. Macrocyclic compounds. Part III. Conformations of cycloalkanes and other flexible macrocycles". Journal of the Chemical Society (Resumed): 93–111. doi:10.1039/JR9630000093.
^ Jump up to: a b c d e f g h i j k l m n Wunderlich, Bernhard; Möller, Martin; Grebowicz, Janusz; Baur, Herbert (1988). "Condis crystals of cyclic alkanes, silanes and related compounds". Conformational Motion and Disorder in Low and High Molecular Mass Crystals. Berlin, Heidelberg: Springer-Verlag Springer e-books. pp. 26–44. doi:10.1007/BFb0008610. ISBN 978-3-540-38867-8.
^ Jump up to: a b c d Dragojlovic, Veljko (2015). "Conformational analysis of cycloalkanes" (PDF). Chemtexts. 1 (3): 14. Bibcode:2015ChTxt...1...14D. doi:10.1007/s40828-015-0014-0. S2CID 94348487.
^ "ECHA CHEM". chem.echa.europa.eu.
^ "ECHA CHEM". chem.echa.europa.eu.
^ "ECHA CHEM". chem.echa.europa.eu.
^ Jump up to: a b Kaarsemaker, Sj.; Coops, J. (January 1952). "Thermal quantities of some cycloparaffins. Part III. results of measurements". Recueil des Travaux Chimiques des Pays-Bas. 71 (3): 261–276. doi:10.1002/recl.19520710307.
^ Ruzicka, L; Plattner, PA; Wild, H (January 1946). "209. Zur Kenntnis des Kohlenstoffringes. (40. Mitteilung). Über die Schmelzpunkte in der Reihe der Polymethylen-Kohlenwasserstoffe von Cyclo-propan bis Cyclo-octadecan" [209. On carbon rings. (Part 40). On the melting points in the series of polymethylene hydrocarbons from cyclopropane to cyclooctadecane]. Helvetica Chimica Acta (in German). 29 (6): 1611–1615. doi:10.1002/hlca.19460290631.
^ Jump up to: a b c Egloff, Gustav (1940). Physical constants of hydrocarbons. Vol. 2 : Cyclanes, cyclenes, cyclynes, and other alicyclic hydrocarbons. Reinhold Publishing Corporation.
^ "ECHA CHEM". chem.echa.europa.eu.
^ "ECHA CHEM". chem.echa.europa.eu.
^ "ECHA CHEM". chem.echa.europa.eu. The density of cyclododecane was measured at 8 temperatures between 66 and 134 °C with a dilatometer. Extrapolation to 20 °C leads to 0.855 g·cm−3.
^ Jump up to: a b c d Dale, Johannes; Hubert, A. J.; King, G. S. D. (1963). "13. Macrocyclic compounds. Part I. Synthesis of macrocyclic polyynes: conformational effects in ring formation and in physical properties". Journal of the Chemical Society (Resumed): 77. doi:10.1039/JR9630000073.
^ McMurry, John (2000). Organic chemistry (5th ed.). Pacific Grove, CA: Brooks/Cole. p. 126. ISBN 0534373674.
^ Wiberg, K. B. (1968). "Small Ring Bicyclo[n.m.0]alkanes". In Hart, H.; Karabatsos, G. J. (eds.). Advances in Alicyclic Chemistry. Vol. 2. Academic Press. pp. 185–254. ISBN 9781483224213.
^ Wiberg, K. B.; Lampman, G. M.; Ciula, R. P.; Connor, D. S.; Schertler, P.; Lavanish, J. (1965). "Bicyclo[1.1.0]butane". Tetrahedron. 21 (10): 2749–2769. doi:10.1016/S0040-4020(01)98361-9.
^ Irion, Walther W.; Neuwirth, Otto S. (2000). "Oil Refining". Ullmann's Encyclopedia of Industrial Chemistry. doi:10.1002/14356007.a18_051. ISBN 3-527-30673-0.
^ Michael Tuttle Musser "Cyclohexanol and Cyclohexanone" in Ullmann's Encyclopedia of Industrial Chemistry, Wiley-VCH, Weinheim, 2005.
IUPAC, Compendium of Chemical Terminology, 5th ed. (the "Gold Book") (2025). Online version: (1995) "Cycloalkanes". doi:10.1351/goldbook.C01497
Organic Chemistry IUPAC Nomenclature. Rule A-23. Hydrogenated Compounds from Fused Polycyclic Hydrocarbons
Organic Chemistry IUPAC Nomenclature.Rule A-31. Bridged Hydrocarbons: Bicyclic Systems.
Organic Chemistry IUPAC Nomenclature.Rules A-41, A-42: Spiro Hydrocarbons
Organic Chemistry IUPAC Nomenclature.Rules A-51, A-52, A-53, A-54:Hydrocarbon Ring Assemblies
External links
"Cycloalkanes" at the online Encyclopædia Britannica
Category: Cycloalkanes |
187827 | https://www.youtube.com/watch?v=kECiI8D6j7k | 3D to 2D Projection Formula: PROOF
Math Easy Solutions
56800 subscribers
36 likes
Description
3392 views
Posted: 1 Nov 2023
In this video I derive the formula for projecting 3D coordinates onto a 2D screen using similar triangles. First I plot the coordinates in 3D and then draw a straight line from the point of projection, through the point being projected, onto the x = 0 plane. The point of projection can be thought of as the point at which our camera or eyes are located; thus the resulting 2D projection maintains 3D perspective from that specific point. Drawing a second line through the z-coordinate of the 3D line, we can see two similar triangles. Either of the similar triangles can be used to obtain identical formulas for the y and z projection coordinates. Note that in this derivation I project the x-coordinates to x = 0 and the camera or eye is located at x = 1000. Epic stuff!
The timestamps of key parts of the video are listed below:
Projecting 3D coordinates to 2D coordinates: 0:00
Two Similar Triangles: 2:45
Determining the formula for the y and z projection: 4:41
3D to 2D projection formula: 8:53
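The formula derived in the video can be sketched in a few lines of code. This is a minimal illustration of the idea, not code from the video; the function name `project_point` and the parameters `camera_x` and `screen_x` are my own, assuming the camera sits on the x-axis at x = 1000 and the screen is the plane x = 0, as in the video:

```python
def project_point(x, y, z, camera_x=1000.0, screen_x=0.0):
    """Project (x, y, z) onto the plane x = screen_x, viewed from (camera_x, 0, 0)."""
    # Similar triangles: the scale factor grows as the point approaches the camera.
    t = (camera_x - screen_x) / (camera_x - x)
    return (y * t, z * t)

# With the video's setup this reduces to y' = y / (1 - x/1000), z' = z / (1 - x/1000).
print(project_point(500.0, 2.0, 3.0))  # (4.0, 6.0): halfway to the camera doubles the size
```

A point at x = 0 is already on the screen and projects to itself, which is a quick sanity check on the formula.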
This video was taken from my earlier video listed below:
Laboratory Project: Putting 3D in Perspective:
4 comments
Transcript:
Projecting 3D coordinates to 2D coordinates. To see how a point is projected, set up the camera on the x-axis at a distance of 1000 from the screen, with the screen in the plane x = 0 and the origin O on the screen. Take a point with coordinates (x, y, z). To project it, draw a perfectly straight line from the camera through the point until it meets the screen; the intersection is the projected point, with coordinates written x′, y′, z′ (every projected point has x′ = 0).
Two Similar Triangles. Drawing a horizontal line through the point splits the picture into two pairs of similar triangles, one for how z projects to z′ and one for how y projects to y′. For the z-pair: from the camera to the point, the height z rises over the horizontal run 1000 − x; from the point to the screen, the remaining height z′ − z rises over the run x. So tan θ = z / (1000 − x) = (z′ − z) / x. The y-pair gives exactly the same relation with y in place of z. The x-coordinate itself simply goes to 0.
Determining the formula for the y and z projection. Cross-multiplying gives z·x = (z′ − z)(1000 − x), so z′ = z·x / (1000 − x) + z. Putting everything over the common denominator 1000 − x gives z′ = (z·x + 1000·z − z·x) / (1000 − x); the z·x terms cancel, leaving z′ = 1000·z / (1000 − x).
3D to 2D projection formula. Dividing the numerator and denominator by 1000 gives the final equation z′ = z / (1 − x/1000), and similarly for the y: y′ = y / (1 − x/1000). Epic stuff! |
187828 | https://beyondtheworksheet.com/effective-strategies-for-teaching-decimal-operations/ | Effective Strategies for Teaching Decimal Operations in Middle School Math
Hey teachers! I’m excited to share some effective and engaging strategies for teaching decimal operations to your middle school math students. Decimal concepts can be tricky, but with the right approach, you can make them accessible and fun for your students.
The Decimal Point Anchor Method
One common challenge students face is aligning decimals correctly during operations like subtraction. They might align the numbers by their final digits instead of their decimal points, which leads to errors. To combat this, I recommend the “Decimal Point Anchor” method: encourage students to view the decimal point as an anchor that holds every digit in its correct place-value column.
Before students begin subtracting, have them draw vertical lines to connect the decimal points across the numbers. This visual aid ensures that every digit is correctly placed in its corresponding place value column. Practice this technique in class by solving examples together and correcting intentionally misaligned problems. This method not only helps students understand decimal alignment but also builds their confidence in handling decimal operations.
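If you like to generate practice sheets programmatically, the alignment idea translates directly into code. This is a hypothetical teacher-side helper of my own (not part of the method itself) that lines up a column of numbers on their decimal points:

```python
def align_on_decimal(numbers):
    """Pad numeric strings so the decimal points line up in one column."""
    parts = [n.split(".") for n in numbers]
    int_w = max(len(i) for i, _ in parts)    # widest whole-number part
    frac_w = max(len(f) for _, f in parts)   # widest fractional part
    return [f"{i.rjust(int_w)}.{f.ljust(frac_w)}" for i, f in parts]

for row in align_on_decimal(["12.5", "3.75", "100.2"]):
    print(row)
```

Printed one per line, the decimal points stack vertically, exactly the picture the anchor method asks students to draw.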
Culinary Math Chef Challenge
To make multiplying decimals more engaging, introduce the “Culinary Math Chef” challenge. Turn your classroom into a cooking show where recipes typically serve four people. Challenge your students to adjust the recipes to serve different numbers, like 10 or 5, using decimal multiplication. This real-world application of decimals makes the lesson memorable and shows the practical use of math in everyday life.
After completing the challenge, have students explain their process as if they were on a cooking show. This reinforces their learning and helps them articulate their understanding of decimals.
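For teachers who also code, the scaling step behind the challenge can be checked with Python's `decimal` module. This is a sketch with invented recipe numbers, just to show the arithmetic students are doing:

```python
from decimal import Decimal

def scale_recipe(ingredients, base_servings, target_servings):
    """Multiply every ingredient amount by target/base using exact decimal arithmetic."""
    factor = Decimal(target_servings) / Decimal(base_servings)
    return {name: amount * factor for name, amount in ingredients.items()}

recipe = {"flour_cups": Decimal("1.5"), "sugar_cups": Decimal("0.75")}
scaled = scale_recipe(recipe, base_servings=4, target_servings=10)
# factor is 2.5, so flour becomes 3.75 cups and sugar 1.875 cups
```

Using `Decimal` instead of floats keeps the answers exact, matching the pencil-and-paper results students should get.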
Decimal Detectives Activity
Another hands-on activity is the “Decimal Detectives” activity. Place various items around the classroom, each tagged with a price that includes decimals. Assign each student a “case file” with a specific amount of money to spend. Their task is to choose items that collectively stay within their budget without going over. This activity teaches students to add and subtract decimals accurately and encourages them to apply these skills strategically.
Following the activity, hold a “detective debrief” where students can explain their choices and discuss how they used decimal operations to stay within their budget. This debrief is a great opportunity for peer learning and helps reinforce mathematical concepts in a fun, interactive way.
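The bookkeeping behind a case file is just exact decimal addition and subtraction. A quick sketch, with prices invented for illustration:

```python
from decimal import Decimal

budget = Decimal("10.00")
cart = [Decimal("3.49"), Decimal("2.75"), Decimal("1.99")]

spent = sum(cart, Decimal("0"))   # 8.23, with no floating-point drift
remaining = budget - spent        # 1.77
within_budget = spent <= budget   # True: this detective stayed under budget
```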
Avoiding Decimal Drudgery
It’s important to remember that engagement is key in learning. Relying solely on worksheets full of decimal problems can quickly disengage students. Instead, incorporate decimals into activities and engaging, authentic scenarios that show their application in real life. By doing so, we not only help students understand the mechanics of decimals but also appreciate their relevance and importance.
I’m all about making math both understandable and enjoyable. By integrating these strategies into your lessons, you can help your students master decimal operations and see the real-world applications of their math skills.
Follow along on Instagram and/or Facebook for daily math content!
Hi, I'm Lindsay!
I create ready to go resources for middle school math teachers, so they can get back what matters most – their time!
COPYRIGHT © 2025 · Beyond the Worksheet Inc. |
187829 | https://www.amazon.com/Books-Vasile-Cirtoaje/s?rh=n%3A283155%2Cp_27%3AVasile%2BCirtoaje |
11 results
Results
Check each product page for other buying options.
## Mathematical Inequalities Volume 4: Extensions and Refinements of Jensen's Inequality
by Vasile Cirtoaje | Sep 26, 2018
Paperback
$74.00
FREE delivery Fri, Oct 3
Or fastest delivery Tue, Sep 30
## Mathematical Inequalities Volume 3: Cyclic and Noncyclic Inequalities
by Vasile Cirtoaje | Sep 3, 2018
Paperback
$73.00
FREE delivery Fri, Oct 3
Or fastest delivery Tue, Sep 30
## Mathematical Inequalities Volume 2: Symmetric Rational and Nonrational Inequalities
by Vasile Cirtoaje | Aug 10, 2018
Paperback
$69.00
FREE delivery Fri, Oct 3
Or fastest delivery Tue, Sep 30
## Mathematical Inequalities Volume 1: Symmetric Polynomial Inequalities
by Vasile Cirtoaje | Jul 27, 2018
Paperback
$61.00
FREE delivery Fri, Oct 3
Or fastest delivery Tue, Sep 30
## Mathematical Inequalities Volume 3
by Vasile Cirtoaje | Jul 21, 2025
Paperback
$135.00
FREE delivery Oct 6 - 8
Or fastest delivery Oct 3 - 4
## Mathematical Inequalities Volume 1
by Vasile Cirtoaje | Jul 21, 2025
Paperback
$135.00
FREE delivery Oct 6 - 8
Or fastest delivery Oct 3 - 4
## Mathematical Inequalities Volume 5
by Vasile Cirtoaje | Jul 21, 2025
Paperback
$135.00
FREE delivery Mon, Oct 6
Or fastest delivery Oct 3 - 4
## Mathematical Inequalities Volume 2
by Vasile Cirtoaje | Jul 21, 2025
Paperback
$135.00
FREE delivery Mon, Oct 6
Or fastest delivery Oct 3 - 4
## Cyclic Inequalities
by Vasile Cirtoaje | Jan 1, 2016
Textbook Binding
No featured offers available; $49.95 (1 new offer)
Ages: 12 years and up
## Algebraic Inequalities - Old and New Methods
by Vasile Cirtoaje | Jan 1, 2006
Paperback
Currently unavailable.
See options
## Inequalities with Beautiful Solutions
by Vasile Cirtoaje, Vo Quoc Ba Can, et al. | Jan 1, 2009
Paperback
Currently unavailable.
See options
© 1996-2025, Amazon.com, Inc. or its affiliates |
187830 | https://math.ipm.ac.ir/combin/IPMCCC2019/slides/Karasev.pdf | Fair partition of a convex planar pie
Roman Karasev (Moscow Institute of Physics and Technology), joint work with Arseniy Akopyan and Sergey Avvakumov (IST Austria). Tehran, April 2019.
The problem statement
Question (Nandakumar and Ramana Rao, 2008): Given a positive integer m and a convex body K in the plane, can we cut K into m convex pieces of equal areas and perimeters?
Previously known results
For m = 2 it can be done by a simple continuity argument: split K into two parts of equal area and rotate the cut; at some point the perimeters must be equal.
A generalization of the continuity argument through an appropriate Borsuk–Ulam-type theorem yields a proof for m = p^k with p prime. The topological tool was used previously by Viktor Vassiliev for a different problem (1989). The fair partition result for m = 2^k was proved explicitly by Mikhail Gromov (2003).
For m which is not a prime power, this direct technique fails.
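The ingredients of the m = 2 argument are easy to play with numerically. The sketch below is my own illustration, not code from the talk (the helper names `clip_halfplane`, `area`, and `perimeter` are invented): it clips a convex polygon by a halfplane and measures the area and perimeter of the resulting piece, the two quantities the rotating-line argument balances.

```python
import math

def clip_halfplane(poly, n, c):
    """Keep the part of convex polygon poly (list of (x, y)) with n·p <= c."""
    out = []
    for i in range(len(poly)):
        p, q = poly[i], poly[(i + 1) % len(poly)]
        dp = n[0] * p[0] + n[1] * p[1] - c
        dq = n[0] * q[0] + n[1] * q[1] - c
        if dp <= 0:
            out.append(p)                      # p is on the kept side
        if dp * dq < 0:                        # the edge crosses the cut line
            t = dp / (dp - dq)
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out

def area(poly):
    """Shoelace formula for a simple polygon."""
    return 0.5 * abs(sum(p[0] * q[1] - q[0] * p[1]
                         for p, q in zip(poly, poly[1:] + poly[:1])))

def perimeter(poly):
    return sum(math.dist(p, q) for p, q in zip(poly, poly[1:] + poly[:1]))

# The vertical line x = 0.5 halves the unit square; both halves also have
# perimeter 3.0, so for this K the continuity argument terminates immediately.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
left = clip_halfplane(square, (1, 0), 0.5)     # x <= 0.5
right = clip_halfplane(square, (-1, 0), -0.5)  # x >= 0.5
```

For a body with less symmetry one would rotate the cut direction, solve for the equal-area offset at each angle, and look for a sign change in the perimeter difference.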
A classical example: the ham sandwich theorem
Theorem: Any 3 sufficiently nice probability measures in R^3 can be simultaneously equipartitioned by a plane.
Scheme of proof
The configuration space is the sphere S^3, which naturally parametrizes (oriented) planes in R^3.
The test map f : S^3 → R^3 sends an oriented plane u ∈ S^3 to the point f(u) ∈ R^3 whose i-th coordinate is the difference of the values of the i-th measure on the two corresponding halfspaces.
Solutions are in f^{-1}(0).
This map is Z_2-equivariant, i.e., f(−u) = −f(u), and the classical Borsuk–Ulam theorem guarantees that any such map must have a zero, which yields the desired equipartition.
Convex fair partitions for prime powers
Theorem (Karasev, Hubard, Aronov, Blagojević, Ziegler, 2014): If m is a power of a prime, then any convex body K in the plane can be partitioned into m parts of equal area and perimeter.
The case m = 3 was done first by Bárány, Blagojević, and Szűcs. In dimension n ≥ 3 a similar result, with equal volumes and equal values of n − 1 other continuous functions of the m convex parts, was also established for m = p^k.
Configuration space
F(m) is the space of m-tuples of pairwise distinct points in R^2.
Given x̄ ∈ F(m), we can use the Kantorovich theorem on optimal transportation to equipartition K into m parts of equal area. The partition is a weighted Voronoi diagram with centers at x̄.
(Figure: the weighted Voronoi partition for some x̄ ∈ F(3).)
No need to consider partitions not in F(m): the space F(m) is smaller than the space E(m) of all equal-area convex partitions. However, there is an S_m-equivariant map F(m) → E(m), given by the Kantorovich theorem, and an S_m-equivariant map E(m) → F(m), sending a partition to its centers of mass. The maps do not commute, but they show that the two spaces are equivalent from the point of view of plugging them into Borsuk–Ulam-type theorems.
Further simplification of F(m)
The dimension of F(m) is 2m; we can simplify it further.
Lemma (Blagojević and Ziegler, 2014): The space F(m) retracts S_m-equivariantly to a subpolyhedron P(m) ⊂ F(m) with dim(P(m)) = m − 1.
This lemma makes it possible to picture how the solution changes when we consider a family of problems depending on a parameter.
Cellular decomposition of F(3) (figures: a 6-cell, a 5-cell, and a 4-cell).
Equivariant map
Let the map f : P(m) → R^m send a generalized Voronoi equal-area partition to the perimeters of its m parts. The test map is S_m-equivariant if S_m acts on R^m by permutations of the coordinates.
A partition corresponding to u ∈ P(m) solves the problem if f(u) ∈ Δ := {(x, x, ..., x) ∈ R^m}.
Homology of the solution set
Theorem (Blagojević and Ziegler): If m = p^k is a prime power and f : P(m) → R^m is an S_m-equivariant map in general position, then f^{-1}(Δ) is a non-trivial 0-dimensional cycle modulo p in homology with certain twisted coefficients.
If m is not a prime power, then there exists an S_m-equivariant map f : P(m) → R^m with f^{-1}(Δ) = ∅.
Our main result
Theorem (Akopyan, Avvakumov, K.): For any m ≥ 2, any convex body K in the plane can be partitioned into m parts of equal area and perimeter.
When m is not a prime power, the theorem was unknown even for the simplest K, e.g., for generic triangles.
“Naive” continuity argument
The “naive” argument for m = 6 (the smallest non-prime-power): pick a direction and a halving line in that direction; fair-partition each half into 3 pieces; then rotate the direction.
There are difficulties arguing this way, because the partitions into three parts may not depend continuously on the parameters of the half subproblem.
Proof sketch for m = 6
As we rotate the direction, plot the perimeters of all the solutions; the language of multivalued functions is useful here.
(Figure: solid and dashed curves are the perimeters on the left and on the right, respectively. Solid/dashed intersections are fair partitions.)
Number of solutions
In this particular example the number of solutions, counted with signs, is 0!
Proof sketch for m = 6. The solid graph separates the bottom from the top; this follows from the homology-modulo-3 description of the solution set by Blagojević and Ziegler.
Proof sketch for m = 6 After choosing an appropriate subgraph of the multivalued function, bold solid and bold dashed curves intersect at 1 point, modulo 2.
Plan of the proof for arbitrary m. Decompose into primes m = p1 · · · pk, then consider iterated partitions: first cut into p1 parts, then cut every part into p2 parts, and so on.
Parameterize the partitions on each stage by P(pi) and assume the areas equalized by the weighted Voronoi argument.
Argue from bottom to top: Assume that the perimeters are equalized in all parts of every i-th stage region and thus form a multivalued function of the corresponding region.
Establish the top-from-bottom separation property for this multivalued function, using the modulo pi homology argument.
Choose a “nice” multivalued subfunction, for which a Borsuk–Ulam-type theorem holds.
Step i → i − 1: equalize the perimeter values in the parts of every (i − 1)-th stage region, keeping the separation property for the new multivalued function of the common perimeter value.
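The first step of the plan, decomposing m into primes m = p1 · · · pk, can be sketched in code. This is a generic factorization helper written for illustration only; it is not part of the paper's argument:

```python
def prime_factors(m):
    """Return the primes p1, ..., pk (with multiplicity) such that m = p1 * ... * pk."""
    factors = []
    d = 2
    while d * d <= m:
        while m % d == 0:
            factors.append(d)
            m //= d
        d += 1
    if m > 1:  # whatever remains is itself prime
        factors.append(m)
    return factors

# m = 6 is the smallest non-prime-power: first cut into 2 parts, then each into 3.
assert prime_factors(6) == [2, 3]
assert prime_factors(12) == [2, 2, 3]
```

For a prime power m = p^k the list is just [p, ..., p], which is the case already covered by the Blagojević–Ziegler theorem.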
Summary. Generalizations: “Area” can be any finite Borel measure that vanishes on hyperplanes.
“Perimeter” can be any Hausdorff-continuous function on convex bodies (e.g., diameter).
It is unknown whether the result holds if we replace “area” with an arbitrary (i.e., non-additive) rigid-motion-invariant continuous function of convex bodies.
If we want to equalize the volumes and two perimeter-like functions in R^3, then it is possible for m = p^k (K., Aronov, Hubard, Blagojević, Ziegler), but our current method does not work already for m = 2p^k.
Full version is arXiv:1804.03057.
Summary Thank you for your attention!
Function Translations: How to recognize and analyze them
For the approach I now prefer to this topic, using transformation equations, please follow this link: Function Transformations: Translation
A function has been “translated” when it has been moved in a way that does not change its shape or rotate it in any way. A function can be translated either vertically, horizontally, or both. Other possible “transformations” of a function include dilation, reflection, and rotation.
Imagine a graph drawn on tracing paper or a transparency, then placed over a separate set of axes. If you move the graph left or right in the direction of the horizontal axis, without rotating it, you are “translating” the graph horizontally. Move the graph straight up or down in the direction of the vertical axis, and you are translating the graph vertically.
In the text that follows, we will explore how we know that the graph of a more complex function,
which is the blue curve on the graph above, can be described as a translation of the graph of a simpler parent function, the green curve above.
Describing a function as a translation of a simpler-looking (and more familiar) parent function makes it easier to understand and predict its behavior, and can make it easier to describe the behavior of complex-looking functions. Before you dive into the explanations below, you may wish to play around a bit with the green sliders for “h” and “k” in this Geogebra Applet to get a feel for what horizontal and vertical translations look like as they take place (the “a” slider dilates the function, as discussed in my Function Dilations post).
Vertical Translation
Consider the equation that describes the line that passes through the origin and has a slope of two:
f(x) = 2x
What happens to the graph of this line if every value of f(x) has three added to it? The function g(x) is defined as the result of f(x) with three added to each result. If we then substitute the definition of f(x) from above for f(x), we get:
g(x) = f(x) + 3 = 2x + 3
Since f(x) produces the y-coordinate corresponding to x for every point on the original graph, adding 3 to each f(x) value moves every point on the graph up by 3.
Adding “+3” to the definition of f(x) causes the entire function to be “translated vertically” by a positive three.
This process works for any function, and is usually thought through in the reverse order: when looking at a more complex function, do you see a constant added or subtracted? If so, you can think of it as a vertical translation of the rest of the function. For example, g(x) = x^2 + 3 is a vertical translation of f(x) = x^2 by +3.
Another example: g(x) = |x| - 2 is a vertical translation of f(x) = |x| by -2.
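A quick numeric check of the vertical translation idea, using the same line f(x) = 2x from this section (the sample points are an arbitrary choice):

```python
# Vertical translation: g(x) = f(x) + 3 moves every point of f's graph up by 3.
def f(x):
    return 2 * x  # the line through the origin with slope two

def g(x):
    return f(x) + 3  # the same line, translated vertically by +3

for x in [-2, 0, 1, 5]:
    # At every x, g's output is exactly 3 more than f's output.
    assert g(x) == f(x) + 3
```

Because the x-values are untouched and only the outputs change, the shape of the graph is preserved; every point simply slides up by the same amount.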
Horizontal Translation
Consider the same function described at the beginning of the Vertical Translation section, which describes a line that passes through the origin with a slope of two:
f(x) = 2x
What happens to the graph of this equation if every “x” in the equation is replaced by a value that is 4 less? We can describe this algebraically by evaluating f(x - 4) instead of f(x), and let’s call this new function g(x):
g(x) = f(x - 4) = 2(x - 4)
Now let’s compare the behaviors of f(x) and g(x):
g(x) produces the same results as f(x), but only when its input values are four greater than the inputs to f(x). Comparing the graphs of the two functions, the graph of g(x) will have the same shape as f(x), but that shape has been shifted four units to the right along the x-axis.
A helpful way to think about the above (thanks to Michael Paul Goldenberg’s 2016 comment below) is to think of the independent variable “x” as measuring time in seconds. Therefore, “x - 4” is 4 seconds earlier than “x”, and evaluating f(x - 4) produces a result from 4 seconds earlier than time “x”. When we graph g(x), all of the results will appear to be 4 seconds later (to the right) than those on the graph of f(x).
The fact that substituting “x - 4” for “x” produces a horizontal translation of +4 (not -4) is a source of errors when people get horizontal and vertical translation behaviors confused. One way to address this is to use a procedural approach whenever you see a variable with a constant added or subtracted (often together in a set of parentheses). To find the direction of the translation, set the transformation expression equal to zero and solve:
x - 4 = 0, so x = +4
The result will always give you the magnitude and direction of the translation (see Keep Your Eye On The Variable). This process works for any function. For
g(x) = f(x + 3)
set x + 3 = 0 and solve for x: x = -3. The graph of g(x) is the same as that of f(x) translated horizontally by -3. Or for
g(x) = f(x + 5)
the graph of g(x) is the same as that of f(x) translated horizontally by -5. Note that f(x + 5) requires every instance of “x” in f(x) to have (x + 5) substituted for it. So a function will only be a horizontal translation of f(x) if every instance of “x” has the same constant added or subtracted. The notation f(x + 5) expresses this idea compactly and elegantly.
One last example:
g(x) = f(x - 1/2), where x - 1/2 = 0 gives x = +1/2
so the graph of g(x) is the same as that of f(x) translated horizontally by +1/2.
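The “set the transformation expression equal to zero” procedure can be sanity-checked numerically. A minimal sketch, using this section’s parent function f(x) = 2x (the sample points are arbitrary):

```python
def f(x):
    return 2 * x  # the line through the origin with slope two

def g(x):
    return f(x - 4)  # every "x" in f replaced by "x - 4"

# Setting the transformation expression to zero: x - 4 = 0 gives x = +4,
# so g's graph is f's graph translated horizontally by +4 (to the right).
shift = 4
for x in [-3, 0, 2, 7]:
    # The point (x, f(x)) on f's graph appears at (x + shift, same y) on g's graph.
    assert g(x + shift) == f(x)
```

The assertion g(x + 4) == f(x) is exactly the “4 seconds later” picture: g reproduces f’s outputs, delayed by four units along the horizontal axis.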
Reconciling Horizontal And Vertical Translations
Let’s re-examine why adding 7 to a function translates it in a positive vertical direction, yet substituting (x + 5) for “x” translates the function in a negative horizontal direction.
This apparent difference in the way we analyze horizontal and vertical translations can be reconciled by treating both independent and dependent variables in the same manner. If
y = f(x + 5) + 7
and we subtract 7 from both sides, it becomes:
y - 7 = f(x + 5)
Since every instance of “y” occurs as a (y - 7), and every instance of “x” occurs as (x + 5), you may treat both “y” and “x” as having been translated relative to a parent function, and you may analyze them both in exactly the same manner:
(y - 7): what value of “y” makes y - 7 = 0? Positive 7. So the translation in the “y” direction, along the vertical axis, is positive 7.
(x + 5): what value of “x” makes x + 5 = 0? Negative 5. So the translation in the “x” direction, along the horizontal axis, is negative 5.
Therefore, if we define g(x) = f(x + 5) + 7, a graph can be created which is translated horizontally by -5 and vertically by +7 when compared to f(x).
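The -5 horizontal, +7 vertical translation described above can be verified numerically. In this sketch the parent function f(x) = x^2 is a hypothetical choice made just for the check; any parent function would behave the same way:

```python
def f(x):
    return x ** 2  # hypothetical parent function, chosen only for this check

def g(x):
    return f(x + 5) + 7  # horizontal translation by -5, vertical translation by +7

for x in [-4, 0, 3]:
    # Each point (x, y) on f's graph lands at (x - 5, y + 7) on g's graph.
    assert g(x - 5) == f(x) + 7
```

Both shifts follow the same rule: set the transformation expression (x + 5, or y - 7) equal to zero and solve for the variable.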
Equivalent Translations
In mathematics, it is often (but not always) possible to produce the same end result in different ways. When working with linear equations and using the approach described in the last section above, you may have wondered how to handle a situation such as:
g(x) = f(x - 4), where f(x) = x
The above describes a horizontal translation by +4, but since f(x) = x, we can pull the “- 4” outside the function and the equation becomes:
g(x) = f(x) - 4
which describes a vertical translation by -4. Are they both valid interpretations?
Since both of the above are valid algebraic manipulations of the same equation, they must both have the same graph. Imagine the graph of f(x) = x, which will be a line with a slope of one that passes through the origin. Now translate the graph vertically by +4. This translation will also cause the x-intercept to move… four units to its left.
Equivalent translations do not always translate by the same distance. If the slope of the line is not 1, we need to translate by different amounts. Starting from f(x) = 2x, consider:
g(x) = f(x - 2)
g(x) = f(x) - 4
g(x) = 2x - 4
g(x) = f(x - 1) - 2
The first representation of g(x) above is a horizontal translation of f(x) by +2, while the second one is a vertical translation by -4. Yet, they both describe the same graph: each simplifies to 2x - 4. We could be even trickier if we wished to, as the fourth line shows.
So, we can choose to describe g(x) as either:
– f(x) translated horizontally by +2 (1st line)
– f(x) translated vertically by -4 (2nd line)
– f(x) translated vertically by -2 and horizontally by +1 (4th line)
Just as there are often multiple ways of describing something using English, a particular situation can often be described in more than one way mathematically, too.
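The three equivalent descriptions discussed in this section, horizontal by +2, vertical by -4, and the mixed -2 vertical with +1 horizontal, can be checked numerically for f(x) = 2x:

```python
def f(x):
    return 2 * x  # the parent line with slope two

# Three equivalent descriptions of the same translated line g(x) = 2x - 4:
g_horizontal = lambda x: f(x - 2)      # f translated horizontally by +2
g_vertical   = lambda x: f(x) - 4      # f translated vertically by -4
g_mixed      = lambda x: f(x - 1) - 2  # f translated horizontally by +1 and vertically by -2

for x in [-1, 0, 2, 10]:
    # All three produce identical outputs at every x: they describe one graph.
    assert g_horizontal(x) == g_vertical(x) == g_mixed(x) == 2 * x - 4
```

Since all three lambdas simplify algebraically to 2x - 4, the equality holds for every input, not just the sampled ones.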
By Whit Ford
Math tutor since 1992. Former math teacher, product manager, software developer, research analyst, etc.
6 comments
Interesting timing. I was reading about transformation of axes and matrix multiplication in Lillian R. Lieber’s book on Lattice Theory last night.
cool
2. I am learning about this right now in Geometry Class. 🙂
3. Since reading this post and commenting on it, I’ve had an epiphany of sorts regarding horizontal transformations and why they seem to behave counterintuitively. Take as an example function f(x) = |x|. Compare the position of the vertex of the parent graph with that of g(x) = |x – 4|. As noted with other functions, the second graph is translated four units to the right. One way to think about this is to consider the independent variable, x, to be time. In that case, subtracting 4 from x DELAYS the outcome by 4 units of time. That means that things happen LATER than they would in the case of f(x), a shift of 4 units to the right.
On the other hand, h(x) = |x + 4| would entail adding 4 units to x, ACCELERATING the outcome by 4 units of time (4 units EARLIER), and hence shifting the graph 4 units to the left.
In the last couple of years, the vast majority of my students have found that a helpful metaphor that makes sense of the horizontal shifts of the graphs of various functions.
I have taken the occasion to rewrite (and hopefully further clarify) much of this post, and have included your approach with attribution as well. Thank you!
Glad you liked that idea.
187832 | https://www.quora.com/In-frogs-fertilisation-is-external-then-what-is-the-significance-of-copulation | In frogs, fertilisation is external then, what is the significance of copulation? - Quora
Something went wrong. Wait a moment and try again.
Try again
Skip to content
Skip to search
Sign In
In frogs, fertilisation is external then, what is the significance of copulation?
Suresh Katare
BSC agri from Vasantrao Naik Marthwada Krishi Vidyapeeth Parbhani
·7y
This process increases the metabolic activity and the rate of protein synthesis of the egg.
Fertilization provides a new genetic constitution to the zygote.
David Kirshner
Zoologist, wildlife interpreter, wildlife illustrator and photographer · Author has 903 answers and 3.3M answer views
·8y
Related
How do land frogs reproduce?
One of the interesting things about frogs is that they have the most variation in reproductive modes among all of the back-boned, land-dwelling animals (i.e. tetrapod vertebrates).
Most terrestrial (land-dwelling) frogs return to water to mate and lay eggs.
However, many have evolved all sorts of weird and wonderful ways to get around this.
Some frogs lay eggs on land which skip the tadpole stage and develop straight into tiny frogs.
A number of different frogs have developed skin folds which act much like a marsupial’s pouch and the eggs (or tadpoles, depending on which type of frog) develop inside those.
Others carry the developing tadpoles in their vocal pouch (the skin on the throat which fills with air when they call).
Two species of (now extinct) frogs in Australia, called gastric brooding frogs, used to swallow their eggs and have the young develop inside the stomach (which temporarily stopped producing acid).
There are also many terrestrial (land) frogs whose eggs and/or tadpoles still require water, but the frogs still have an unusual way of getting them there. Many poison dart frogs, for example, carry their young on their back up into the canopy of a rainforest tree and deposit them in a tiny pools of water found in the centre of plants such as bromeliads. The female may even lay extra eggs as food for the developing tadpoles. On the other hand, many tree frogs lay their eggs on the underside of leaves overhanging a small pond so the hatching tadpoles drop into the water.
These are just a small sampling of the huge variety of ways that land frogs deal with reproduction.
PVishal Naidu
Studied at Birla Institute of Technology and Science, Pilani · Author has 904 answers and 2.2M answer views
·7y
Frogs don’t copulate.
The male and female get into a mating posture called amplexus, in which the male climbs onto the female's back and clasps his forelegs around her middle. Then the female releases her eggs and the male releases his sperm at the same time.
Abhisek Mallick
Studied at National Institute of Technology, Raipur · Author has 242 answers and 552.1K answer views
·3y
Related
How do frogs reproduce?
The vocal sacs are present only in males, and during the breeding season a nuptial pad develops at the base of the first finger of the male frog. The vocal sacs increase the pitch of the sound, while the nuptial pads help in grasping the female during amplexus. The germinal epithelium of the seminiferous tubules produces sperms, which are transferred to Bidder's canal via the vasa efferentia; from there the sperms are carried to the transverse collecting tubules, the longitudinal tubules, and then to the urinogenital duct. The latter carries the sperms to the seminal vesicle, where they are stored temporarily. Then they are transferred to the cloaca and shed over the female's ova during amplexus.
Bir Bahadur
Professor Retd, working as Visiting Professor, at Kakatiya University (1970–present) · Author has 2K answers and 998.3K answer views
·4y
Related
Why Do fish, frogs show external fertilisation?
According to Giese and Kanatani 1987 external fertilization is a common and widespread reproductive strategy in the aquatic environmental ecosystem, and is generally thought to be ancestral to internal modes of reproduction (Jägersten 1972; Parker 1984; Wray 1995; but see Rouse and Fitzhugh 1994). Therefore, estimates of male and female fertilization success in external fertilizers may provide not only information on sperm competition for the majority of animal phyla but also an insight into the evolution of sexual dimorphism and internal fertilization.
Despite the need to understand the patterns and consequences of variation in both male and female fertilization success, little is known about the fate of gametes released in aquatic environments. Historically, discussions about reproductive success in external fertilizers were based on speculation or laboratory studies (reviewed by Levitan 1995a). It has only been in the last decade that some of the practical obstacles associated with ‘chasing’ gametes in an aquatic medium have been overcome. Estimates of gametes concentration and fertilization have been made, but there is still no direct information on sperm competition and multiple paternity.. To be brief, External fertilization is when the sperm and eggs are released outside of the body and meet outside of the body. External fertilization is limited essentially to animals living in aquatic environments
In contrast to most other organisms, the available evidence on external fertilizers suggests that sperm is limiting. Evidence from field experiments and natural observations of spawning demonstrate that the proportion of a female's eggs that are fertilized is often much less than 100%, and a majority of the variation in female fertilization success can be explained by male abundance, proximity, or synchrony. This somewhat different view of sexual selection has implications for the generality of Bateman's principle (Bateman 1948) and the evolution of sexual dimorphism in this presumptive ancestral reproductive strategy.
Disadvantages of external fertilization: a large quantity of gametes is wasted and left unfertilized. Chances of fertilization are diminished by environmental hazards and predators. Eggs and sperms may simply never come in contact. Among vertebrates, external fertilization is most common in amphibians and fish. Invertebrates utilizing external fertilization are mostly benthic, sessile, or both, including animals such as coral, sea anemones, and tube-dwelling polychaetes. Benthic marine plants also use external fertilization to reproduce.
External fertilization in an aquatic environment protects the desiccation of eggs. Broadcast spawning leads to higher genetic diversity due to a larger mixing of genes within a group. The chances of survival of the species also increase.
For sessile aquatic organisms such as sponges, broadcast spawning is the only ... Internal fertilization has the advantage of protecting the fertilized egg from ... gonads to produce sperm and egg was a major step in the evolutionary process
Kartik Upadhyay
Infrastructure Engineer at Sopra Steria Group (2016–present)
·11y
Related
How do frogs reproduce?
In the male frog, the two testes are attached to the kidneys and semen passes into the kidneys through fine tubes called efferent ducts. It then travels on through the ureters, which are consequently known as urinogenital ducts. There is no penis, and sperm is ejected from the cloaca directly onto the eggs as the female lays them. The ovaries of the female frog are beside the kidneys and the eggs pass down a pair of oviducts and through the cloaca to the exterior.
When frogs mate, the male climbs on the back of the female and wraps his fore limbs round her body, either behind the front legs or just in front of the hind legs. This position is called amplexus and may be held for several days. The male frog has certain hormone-dependent secondary sexual characteristics. These include the development of special pads on his thumbs in the breeding season, to give him a firm hold.The grip of the male frog during amplexus stimulates the female to release eggs, usually wrapped in jelly, as spawn. In many species the male is smaller and slimmer than the female. Males have vocal cords and make a range of croaks, particularly in the breeding season, and in some species they also have vocal sacs to amplify the sound.
Hope my answer satisfied the need.
Lee Duer
USA, Earth (1953–present) · Author has 13.8K answers and 5.1M answer views
·1y
Related
What is the reproduction of frogs?
Q: What is the reproduction of frogs?
Amphibians' life cycle is tied to water. Their eggs must be wet, and their larval forms (like tadpoles) live that part of the life cycle in the water. After they reach their adult stage they can stay in or near water (like many frogs and salamanders do) or they can live on land and return to water only to lay their eggs (like most toads do).
Frog Life Cycle
Toad Life Cycle
Salamander Life Cycle
Arthur Lawrence Bennett
Retired teacher of English and European languages. · Author has 740 answers and 722.9K answer views
·6y
Related
Can two different species of frog procreate successfully together?
The answer is probably not but possibly yes.
There is a tendency for us not to notice the differences between strangers (the all Chinese look the same to me syndrome). Humans will tend to see all frogs including those of different species as more or less the same, just your basic Kermit jobbie. Amphibians are however an extremely ancient class of vertebrates and have had countless millions of years to develop differences in their species underneath their basic froggy exterior which would be barriers to interspecific hybridization.
The Australian golden bell frog, the New Zealand Hamilton's frog, the Canadian wood frog, the European fire-bellied toad, the Brazilian arrow frog, the European marsh frog: there is no way that any of the above could procreate together. You will probably discover that we have assigned each of the above to a different genus, which is always a good guide to breeding incompatibility.
If on the other hand you take frogs of the same genus but different species such as the various European frogs of the genus Rana, temporaria, arvalis, dalmatina, lataste, graeca, iberica, ridibunda, lessonae, esculenta, you might expect hybridization to be possible, yet this doesn't seem to be the case. This is especially remarkable because fertilization takes place outside the body. The male simply releases his sperm into the water where the female has just released her eggs. If several species were using the same pond you could well expect that any eggs could accidentally encounter sperm from another species. What this means is that however similar various species of brown or green European frogs might seem to us, the differences are already so anciently embedded that they are incompatible at a cellular or chemical level. This has to be so to maintain species integrity when fertilization takes place in an open pond.
It would be interesting to investigate the different species of south American arrow frogs. The different species have violently different colour patterns. Although all the patterns advertise their toxicity, they are also very clear species markers. I do not know where fertilization takes place with arrow frogs. The female lays a single egg in a bromeliad leaf base. I do not know whether the male is at hand to deposit sperm there or whether the eggs have already been fertilized either on the female's body or even internally. I suspect that in the case of arrow frogs it is not a chemical barrier to hybridization but a cultural one based on the colour patterns to identify mates. The South American poison arrow frogs could be analogous to New Guinea birds of paradise.
Marguerite Church
Former Instructor in Biology Appalachian State University (1984–2008) · Author has 6.8K answers and 5M answer views
·6y
Related
Why is the fertilization in frog called external fertilization?
With frog fertilization, the eggs are laid in a select spot in water. The male frog may or may not be close by, or may even clasp the female as she lays the eggs. He releases sperm over the eggs and the adults leave. It is literally fertilization outside the frog's body, and the development of the eggs into tadpoles occurs outside as well. My first experience of science was in the second grade, when our teacher brought fertilized frog eggs into the class for us to watch them develop. She had us draw pictures of the developing tadpole and the resulting froglets. I just realized how formative for me that experience was. Thank you Ms. Baxter!!!
Calvin Peck
Evolutionary Biologist, Masters in Human Cognition and Learning · Author has 171 answers and 1.2M answer views
·9y
Related
Why is external fertilization used by aquatic animals?
A better question would be: why is internal fertilization used by land animals?
As living beings, we all started out in the sea. All reproduction must be conducted in an aquatic environment that can support the medium in which we live (again, water). Imagine trying to breathe oxygen through water… can't do it, right? It's called drowning, because our bodies are only built to absorb oxygen in a specific medium… the air. The same thing is true in reverse for basically all of our other life processes; they require water.
Digestion, circulation, excretion, hormonal response… all conducted in an aquatic environment.
All plants, bacteria, fungi, and animals live in the medium of water. The difference between you and a sea sponge is that you developed ways of simply carrying all that water around with you. That's where most of your weight comes from: roughly 60% of an adult human body is water.
So the developmental characteristics that allowed creatures to use water for external fertilization precede those that allowed us to live on land.
The bottom line: all reproduction requires a watery environment. Simply laying eggs and then having them fertilized later is a pretty basic way of getting it done, compared to what humans have to endure.
Mary Higgins
Nutritionist and Yoga Teacher
·1y
What is the reproductive process of frogs? Where are their eggs laid and where are the tadpole larvae found?
The majority of frogs come about this way: males sit around the pond calling out for females. If a female frog is interested, she calls out in a much softer voice that she is interested. Usually there is a mad dash of males toward the female, and the competition amongst the males can get quite rough; some male frogs may even be drowned by others as they rush to give her a hug around the waist. The male sits on her back in a position known as amplexus. At this point the female, who is full of eggs, gets squeezed by the male; the eggs exit from her body and the male frog squirts sperm over them.
The eggs are surrounded by a jelly-like substance and may stick to rocks, to plants, or to the surface of the pond. Other species of frogs, such as tree frogs, lay their eggs on land or on the branch of a tree located above water. The eggs can look like long strands or clumps, depending on the species of frog. The eggs hatch and the tadpoles enter the water.
Alex Hirsekorn
Former Aquarium Volunteer in Port Angeles, WA
·5y
Unfortunately, amphibians have external fertilization. Why?
Simply put, amphibians have evolved to use external fertilization because that method works, and there has yet to be any evolutionary pressure to use other methods. In other words: "If it works, don't fix it."
Why do you use the word "unfortunately"? Amphibians have been around, externally fertilizing their brains out, for 370 million years! Please name an animal group that has exceeded that level of success using internal fertilization.
Hint: There are a few such animals but I’m not going to tell you which ones.
Richard Pierce
Photographer, Aquarist, Marine Biologist, Philosopher
·7y
In which species does external fertilisation take place?
External fertilization is common in most fish and many aquatic invertebrates. Internal fertilization is the exception to the rule in fishes, although many different taxa have some examples of it. The most common example of internal fertilization in fish is the family Poeciliidae: the guppies, mollies, platies and swordtails. Internal fertilization also occurs in some sharks and rays, some catfish, other families within the order Cyprinodontiformes, some tetras and a few lesser-known families.
Most fish, bivalves, corals, starfish, and other non-mobile invertebrates also practice external fertilization. To maximize the chance of fertilization, the release of gametes is often synchronized, either by a chemical or an environmental signal. Fish will often come into close proximity to maximize the chance of successful fertilization. The gametes have a limited lifespan outside the body, and both will die if fertilization does not occur quickly.
External fertilization is much less common in terrestrial animals. No mammals, birds or reptiles practice external fertilization. Some invertebrates, both terrestrial and aquatic, practice what might be considered a hybrid type of fertilization. Males may produce sperm packets called spermatophores. These packets are delivered or attached to the female. The spermatophores may be used for internal or external fertilization depending on the species, often long after the male has departed. Some species can store spermatophores for up to several years and can fertilize many batches of eggs.
Plants are the only terrestrial organisms that practice external fertilization, by broadcasting their gametes into the air or by using intermediaries. Pollen grains, spread by the wind or by pollinators, are the equivalent of sperm in animals. The pollen grains land on the receptive organs of the plant, but the actual fertilization may be internal.
187833 | https://www.andrew.cmu.edu/user/ramesh/teaching/course/48-175/lectures/2.BasicsOfDescriptiveGeometry.pdf | 2 Basic Concepts of Descriptive Geometry From this moment onwards we look at a particular branch of geometry—descriptive geometry—developed by Gaspard Monge in the late eighteenth century, who, incidentally, played an important role in Napoleonic war efforts, and which, now plays a major part of current architectural drawing practice. Gaspard Monge Gaspard Monge (1746-1818) discovered (or invented) the principles of descriptive geometry at the tender age of 18, working as a military engineer on the design of fortifications, which were made of stones accurately cut to fit one onto another so that a wall or turret so constructed was self-supporting and strong enough to withstand bombardment. Monge’s descriptive geometry system was declared classified and a military secret and it was not until many years later around 1790s (when Monge was a Professor at the Beaux Arts) that it became a part of French engineering and archi-tectural education and then adopted virtually universally. Descriptive geometry deals with physical space, the kind that you have been used since birth. Things you see around you and even things that you cannot see have geometry. All these things concern geometric objects almost always in relationship to one another that sometimes requires us to make sense of it all—in other words, when we try to solve geometric problems albeit in architecture, engineering, science. Descriptive geometry deals with manually solving problems in three-dimensional geometry by generating two-dimensional views. So …what is a view? 52 2.1 VIEWS A view is a two dimensional picture of geometric objects. Not any old picture, but, more precisely, a ‘projection’ of geometrical objects onto a planar surface. This notion is more familiar than some of us of may think. 
For example, whenever we see a movie on the silver screen, we are really seeing a ‘projection’ of a sequence of ‘moving pictures’ captured on transparent film through a cone of light rays emanating from a lamp so that each picture appears enlarged on a flat screen placed at a distance from the image. Each such picture is a view. 2-1 Movie projection Another example is the shadow cast by an object, say, a tree, on another object, say, a wall. In this example, the shadow cast by the tree can be viewed as being ‘projected’ on the wall by the rays of light emanating from the sun. 2-2 Projecting shadows Here, the rays are almost parallel, in contrast to the rays emanating from a single point source as in the movie example. Another difference is that a tree is a truly 3-dimensional object, while the picture on a piece of film is essentially flat. In either case the types of projection is a close physical model of the mathematical notion of a projection. So … what is a projection? 2.2 PROJECTIONS In geometry, projections are mappings of 2- or 3-dimensional figures onto planes or 3-dimensional surfaces. For our purpose, we consider a projection to be an association between points on an object and points on a plane, known as the picture plane. This association— between a geometric figure and its image—is established by lines from points on the figure to corresponding points on the image in the picture plane. These lines are referred to as projection lines. 53 The branch of geometry that investigates projections, including a study of the properties that are preserved under them, goes under the name of projective geometry. Descriptive geometry is really a subfield of projective geometry. Problems solved using descriptive geometry can be intricate. For example, the task may be to depict accurately in a drawing the shadow cast by a tree on a roof that may not be flat. 
Since this shadow is in itself the result of a projection, this tasks calls for depicting the projection of a projection. An understanding of projections is therefore essential not only for the generation of images, but also for an understanding of what goes on in the scenes depicted by these images. The present chapter introduces the principles of parallel projections to build a foundation for the specific techniques of descriptive geometry dealing with ‘orthographic views’, which are commonly represented in architecture by floor plans, sections and elevation drawings. 2.3 PARALLEL PROJECTION BETWEEN LINES Let us start simple … with lines. Definition 2-1: Family The set of all lines parallel to any given line is a family of parallel lines. When no misunderstandings are possible, a family of parallel lines is simply referred to as a line family. Being parallel is an equivalence relation for lines in the sense that if a line is parallel to another, which is also parallel to a third, then the first and third lines are also parallel. The relation ‘line family’ partitions the all lines into classes so that each line belongs to exactly one class, containing all the lines parallel to it. 2-3 A line family As parallel lines do not intersect: Property 2-2: Uniqueness For a line family and a given point, there is exactly one line in the family that passes through that point. Consider two coplanar lines, l and m, and a coplanar line family as shown in Figure 2-4. A parallel projection of l on m maps every point P of l to that point P' of m, where m meets the projection line that passes through P. P’ is called the image of P. 54 2-4 Projection between lines This projection establishes a one-to-one correspondence between the points on l and the points on m. We call this simply a projection between lines l and m. 
From elementary geometry, whenever parallel lines are intersected by a traversal (a line not parallel to the line), opposite interior angles formed at the intersection points are identical in measure (congruent). 2-5 Opposite angles along a transversal are identical Consider now a parallel projection of a line l on a line m and two distinct points, A and B, on l and their images on m, A’ and B’. There are two cases to consider: • l and m are parallel, in which case polygon ABB'A' is a parallelogram and consequently, AA' = BB'; that is, the projection preserves distances. • l and m intersect at a point, say P. In this case, P is fixed and triangle !PAA' is similar to !PBB'. Consequently, !"!
!" = !"!
!" = !"!
!" = k That is, the projection multiplies distances by a constant factor k. 55 2-6 Parallel projections multiply distances by a constant factor 2.3.1 Between-ness and parallel projections A point B lies between two points, A and C, whenever AC = AB+BC. Since a parallel projection multiplies distance by a constant factor, k, (which may be identically equal to 1) it follows that image A'C'= kAC = kAB + kBC = A'B' + B'C', it must also preserve between-ness of points. That is: Property 2-3: Distance Preserving and Between-ness A parallel projection between two lines multiplies distances by a constant; if the lines are parallel, the constant is one. Moreover, a parallel projection preserves between-ness Applying properties of parallel projections to Figure 2-7, it is easy to see that the sum of the projections of segments of a polyline onto a line equals the projection of the segment between the first and last end-points of the polyline. Between-ness preserving entities An entity, which preserves between-ness, also preserves distances. Rays, lines, segments, conic sections in general geometric figures are preserved by between-ness preserving entities. m l C A A' B B' C' m l C P A A' B B' C' 56 2-7 Sum of the projections of segments of a polyline onto a line equals the projection of the segment between the first and last end-points of the polyline Construction 2-1 Dividing a given segment of arbitrary length in a given ratio We are going to revisit the constructible constructions albeit via a variation. Suppose 𝐴𝐵 is a given segment and suppose we are given a ratio, say m:n, where m and n are integers. From one of the end points, say A, draw a ray, r. Mark m units on a convenient unit of measure from A on the ray. Let the mark be M. From repeat this step with a measure of n units. Call this mark N. Draw a line –M– parallel to –NB– and let it meet 𝐴𝐵 at M', which divides the segment into the required ratio. 
2-8 Dividing a segment into two segments in a given ratio This construction clearly employs a parallel projection from r to 𝐴𝐵. Note that construction also works if 𝐴𝐵 is to be extended in the ratio m:n; in this case, we draw a line –N– parallel to –MB– and let it meet –AB– at N’. Then, AN' extends AB in the required ratio. line projections polyline 57 EXAMPLE: Dividing and extending segments The above construction also works for any number of divisions or extensions by integer ratios and for any combination of divisions and extensions. For example, if AB is to be divided into three segments with ratios 4:2:3 and extended by two segments in the ratio 2:4, we mark off points on r after 4, 6 (= 4+2), 9 (= 4+2+3), 11(= 9+2) and 15 (= 9+2+4) units and draw the line l through the point at mark 9; the desired points on AB or its extension are the projections of the marks on r by lines parallel to l. This is illustrated in Figure 2-9. 2-9 Dividing and extending a segment into an arbitrary number of given ratios The reader may notice the relationship to constructions [1.16], which is interpreted in terms of similarity between lengths, whereas this construction is interpreted in terms of parallel projections. 2.1.1 Parallel projections between planes We can now extend the notion of a parallel projection to planes in space. Definition 2-4: Parallel Projection between Planes Let p and p' be two distinct planes and ƒ a line family not parallel to either plane. A parallel projection of p onto p’ maps every point P of p onto point P' of p' where p' meets the line in ƒ that passes through P. Because there is exactly one line in f that passes through P, this type of mapping establishes a 1-1 correspondence between points of p and p’, and we often, simply, call this a projection between the points of the planes. Consider a line l on p. The lines in ƒ that pass through l form a plane distinct from p’ that intersects p’ at a line l’ which is the image of l under the projection. 
It thus maps lines on lines. Furthermore, l’ is the image of l under a coplanar parallel projection. Because a parallel projection between two lines multiplies distances by a constant factor 58 k, which equals 1 when p and p’ are parallel (see Figure 2-10i), in which case it is constant for the entire projection. Otherwise, k may vary for different lines (see Figure 2-10ii). The projection thus preserves between-ness and the properties that depend on it, but does not always multiply distances by a constant factor. i ii 2-10 Illustrating a parallel projection between planes We summarize these observations in the following property: Property 2-5: Between-ness Preserving A parallel projection between two planes is a 1-1 mapping between points on planes, which preserves between-ness between points and parallelism, concurrence and ratio of division between lines. Distances are preserved only if the planes are parallel. 2.1.2 Not all points of an object have to be projected Since a parallel projection preserves between-ness, it maps not only lines on lines, but also segments on segments, rays on rays etc.; that is, it maps linear figures on linear figures of the same type. Notice that not all points of an object have to be projected. Because of Property 2-5 on preserving between-ness, we can project just distinguished points such as the endpoints of lines. The projection of the line is the line joining the projected endpoints. In a similar fashion the projection of a surface is constructed from the projection of its ‘boundary’ lines, each, in turn, constructed from the projection of their endpoints. f p' p l' l f a p' p l' l 59 If we apply this to a piece of architecture, we see that a projection of a planar facade on a plane parallel to it shows all features of the facade in true size (see Figure 2-11). This fact explains why elevations are so important in building design; plans and sections are important for similar reasons. 
But note also in the figure that the roof, which is not projected on a plane parallel to it, appears in the projection not in true size. The next section will get back to the role of parallel projections in architecture in greater detail. 2-11 Orthographic projection of a facade The method works even for curved objects. In fact, we can show (although we do not do so here) the following property, which will be important for subsequent chapters: Property 2-6: Type Preserving A parallel projection between two planes maps parabolas onto parabolas, hyperbolas onto hyperbolas, circles or ellipses onto circles or ellipses, and, more generally, curves of degree n onto curves of degree n. The ‘boundary’ of the surface corresponds to points on the object at which the projection lines are tangential to the surface of the object. 2.1.3 Parallel projections of general figures The notion of a parallel projection that underlies Definition 2-4 can be extended to projections of general 3-dimensional objects by means of a line family on a plane, even a non-planar surface in space. For example, to model geometrically the shadow cast by a cylindrical tower on a domed roof under parallel light rays like the ones generated by the sun, one would use a parallel projection of a cylinder on part of a sphere under a line family whose lines are parallel to the rays of the sun. More straightforward is the projection of a spatial figure on a plane. The image created on that plane can be viewed as a 2-dimensional representation of the figure; descriptive 60 geometry deals with precisely the generation of such images. Since objects of interest in architecture and related fields are often composed of linear shapes, such images can often be pieced together by a series of parallel projections between planes as defined— Figure 2-11 illustrates this as well. However, that scenes thus depicted may involve images of more complicated projections such as the shadows described above. 
The handling of ‘shades and shadows’ and similar projections constitutes indeed a special subfield of descriptive geometry, which we will consider later in the course. We conclude this section with a special case of particular significance for parallel projections used in descriptive geometry. Consider a plane p' and a line family ƒ not parallel to p'. Any two distinct lines in ƒ define a plane, p, which must intersect p’ at a line, l. Consider the image of p projected by the lines in ƒ on p', that is, the set of points where a line in ƒ through a point in p meets p'. Every point on p is projected by a line in p, which intersects p' at a, and every point P' on l is the image of infinitely many points on p (namely the points on the line in ƒ that passes through P'). l is thus the image of p under the projection (see Figure 2-12). Observe that l is also the image of p' on p under a parallel projection by a line family parallel to p'. 2-12 The image of a plane onto another is the line where the two planes meet The following property states this result for further reference. p p' l 61 Property 2-7: Image of plane onto another If ƒ is a line family, p a plane parallel to ƒ and p' a plane not parallel to ƒ, the image of p projected on p' by ƒ is the line l where p and p' meet. Conversely, the image of p’ projected on p by a line family is the line l where p and p' meet. 2.4 ORTHOGRAPHIC VIEWS When you draw a floor plan, section or elevation of a building, you are (consciously or unconsciously) using a special form of parallel projection, which is introduced in the following definition: Definition 2-8: Orthographic Projection and Views Let ƒ be a family of lines and p a plane not parallel to ƒ. The parallel projection of a figure on p by ƒ is an orthographic projection if the lines in ƒ are normal (perpendicular) to p. If g is an orthographic projection on a plane p, and h is a distance-preserving mapping of p, the image of hg is called an orthographic view of the figure. 
In principle, there are two consecutive mappings involved in the generation of an orthographic view. Figure 2-13 illustrates this for a single point, X. The plane p on which X is mapped is called the picture plane. A particular line, l, in ƒ maps X onto a point Xp of p; l is called the projection line through X. The mapping h maps this image on a point X' in the Euclidean plane—which, for us, will be a sheet of paper. 2-13 Specifying an orthographic view of a point on a sheet a paper It is important to realize that the mapping from a spatial to a 2-dimensional plane can be done in one of two ways, where each way implies a specific point of view or viewer projection line l h picture plane p sheet of paper Xp X X' 62 position on one or the other side of the spatial plane (see Figure 2-14). The view from one side is the mirror image of the other view. 2-14 Two ways of mapping onto a picture plane We normally assume that the picture plane is between the viewer and the object being viewed. 2-15 Normally the picture (projection) plane is between viewer and object We call the view generated by a horizontal picture plane that is placed above an object and viewed from above a top view of the object. Similarly we call a view generated by a horizontal picture plane place below an object and viewed from below a bottom view of the object. A view generated similarly by a vertical picture plane placed in front, of an object and viewed from the front is called respectively, a front view of the object. Likewise we have back or side views of an object. As locations, and therefore measurements, are involved in descriptive geometry we devise a method by means of which accurate perpendicular measurements are represented on a sheet of paper. 
We can apply an orthographic view (Definition 2-8) by imagining the horizontal plane to be hinged to the frontal plane and likewise the side or profile plane to be hinged to the frontal plane so that all three planes are represented on the same sheet of paper. The viewer’s lines of sight remains perpendicular to the respective picture planes. Even after the planes have been swung into place as illustrated in Figure 2-17, the observer considers the horizontal picture plane to be perpendicular to the frontal picture plane. viewpoint 1 viewpoint 2 PROJECTION PLANE Projector - a line from a point in space perpendicular to a plane surface called a projection plane Observer's line of sight is perpendicular to the projection plane Point in space 63 2-16 Three orthogonal views: top, front and side (profile) 2-17 Unfolding the picture planes onto a sheet of paper Observer's line of sight to see frontal projection of points Projection of the point on frontal plane (similar to a wall in a room) Projection of the point on horizontal plane (similar to a ceiling in a room) Projectors Observer's line of sight to see horizontal projection of points FRONTAL PLANE HORIZONTAL PLANE Point in space Observer's line of sight Observer's line of sight Projection of the point on frontal plane Projection of the point on horizontal plane Projection of the point on left profile plane Projectors Observer's line of sight LEFT PROFILE PLANE FRONTAL PLANE HORIZONTAL PLANE Point in space Horizontal plane Frontal projection of point Profile projection of point Horizontal projection of point Line of sight after the projection planes are in the plane of the drawing surface Projection planes before being swung into the plane of the drawing surface Profile plane Frontal plane Plane of the drawing surface Point The six orthographic views Top and Bottom views Front and Back views Left and Right Side view 64 Thus, when the viewer looks at the horizontal picture plane, he/she sees the frontal picture plane 
as an edge. Likewise when the viewer sees the frontal picture plane, he/she sees the horizontal and profile planes as edges. 2-18 Visualizing a picture plane in the other picture planes The viewer’s line of sight would appear as a point in each projection plane. This can be visualized by standing right behind the lines of sight in each view of a point in space. Thus, when the viewer sees the frontal plane, he/she sees the point at distance below the horizontal plane. When viewing the horizontal plane, the viewer sees the point at a distance behind the frontal plane. When he/she views the side (profile) plane, he/she stills sees the point behind the front plane. See Figure 2-19. 2-19 Individual views Horizontal and frontal planes seen as edges Horizontal and profile planes seen as edges Frontal and profile planes seen as edges Edge of the profile plane Edge of the profile plane Edge of the frontal plane Line of sight is perpendicular to the profile projection plane Line of sight is perpendicular to the frontal projection plane Edge of the frontal plane Edge of the horizontal plane Edge of the horizontal plane Line of sight is perpendicular to the horizontal projection plane PROFILE PLANE FRONTAL PLANE HORIZONTAL PLANE HORIZONTAL PLANE PROFILE PLANE FRONTAL PLANE Distance behind frontal projection plane Distance below horizontal projection plane Distance behind profile projection plane 65 2.1.4 Notational convention 2-20 Only the reference (hinge or folding) lines are important 2.5 ADJACENT VIEWS Figure 2-19 illustrates an important concept in descriptive geometry. 
When dealing with orthographic views, descriptive geometry always assumes that the figure under consideration is given in at least two adjacent views, which is captured by the following definition: 3 2 2 1 Distance behind frontal projection plane Distance below horizontal projection plane Distance behind profile projection plane P1 P2 P3 3 2 2 1 Folding, reference or hinge line 1|2 Line of sight for hotizontal plane Line of sight for left profile plane Line of sight for frontal plane Folding, reference or hinge line 2|3 LEFT PROFILE PLANE FRONTAL PLANE HORIZONTAL PLANE P3 P P1 P2 66 Definition 2-9: Adjacent Views Two intersecting planes are perpendicular if the line of intersection is the image of one plane in the other plane under an orthographic projection. Two views obtained from two perpendicular picture planes are called adjacent. Figure 2-21 illustrates perpendicular planes. In architecture you will find countless examples of adjacent views for instance, plan and elevation, an elevation and a second elevation in a picture plane perpendicular to the one used in the first elevation, an elevation and section with picture planes that have the same relationship with each other etc. 2-21 Perpendicular planes The line where the picture planes of two adjacent views, t and f, meet is called the folding line between t and f and denoted by t | f. The folding line is the image of each view in the adjacent view. We denote the view of a point or object, X, in a view, t, by Xt. Sometimes views are numbered instead of a letter. The same naming conventions apply. 
2-22 A point in two adjacent views X Xtop Xfront folding line projection line top front Xtop Xfront 67 The folding line is also known as the reference or hinge line 2.1.5 Literal versus normal renderings of orthographic views A literal rendering of the orthographic view of an object would treat each point of the object equally; for a cylinder, this could result in a drawing like the one shown at the top of Figure 2-23, which depicts the cylinder as an unstructured mass without distinguishing between parts that are more important than others. This is hardly ever done. The normal way of rendering the orthographic view of an object is demonstrated at the bottom of Figure 2-23. The visible outline of the figure is always drawn in its entirety because it separates the figure from the background or from other figures. In addition, such a rendering tends to emphasize important other features; in the example shown, the upper rim of the cylinder is shown in its entirety, including the part that does not lie on the outline; this implies that the upper surface is visible from the particular direction from which the object is viewed, while the bottom is not visible and therefore not shown in its entirety. 2-23 Two renderings of an orthographic view of a cylinder Orthographic views, in architecture or other fields, are generated for a purpose, and the selection of the features to be shown may vary with that purpose. In general, one shows, aside from the outline, at least the boundary of each surface that can be seen from the direction of view; hidden boundaries are normally deleted if showing them would confuse the image; otherwise, they are shown with dashed lines (see Figure 2-24) or by similar means that distinguish them from the visible lines. folding line f t Xt Xf 68 2-24 A tetrahedron with a hidden edge Note also that different parts of an object may lie on the outline in different views. 
For example, the boundary of the cylinder is delineated in Figure 2-23 by two segments, which correspond to two specific segments on the cylinder's boundary; two different segments would fall on the outline if we were to shift the direction of view (by choosing a different picture plane).

2.6 ORTHOGRAPHIC VIEWS IN ARCHITECTURE: FLOOR PLANS

Floor plans, sections and elevations are for the most part orthographic views of a building or of portions thereof. When we produce such a drawing, we are often not even aware of the underlying principles because the problems involved can be solved intuitively. But in cases when it is not immediately clear how to draw a certain part of a design, we have to go back to the underlying principles in order to resolve the issue at hand.

Take, for example, a floor plan. It shows in the majority of cases the walls and partitions on a floor that separate spaces from each other and from the outside, and indicates the position and width of doors, windows and other openings, along with other important objects. When the walls and openings are vertical, this can be done without complications because the sides of these objects lie on planes perpendicular to the picture plane and project onto line segments. But how should we draw the floor plan of an attic under a pitched roof, where the main parts are not vertical? This problem can be resolved when we consider what a floor plan really is.

Definition 2-10: Floor Plan
A floor plan of a building is the top view of a portion of a building below a picture plane cutting horizontally through the building. It shows the parts of the building underneath this plane as seen when we view the picture plane from above.

The solid parts that intersect the picture plane (also called the cutting plane) are the most important features depicted and are shown, normally, in outline; openings appear as gaps in this outline.
In order to make such a plan as informative as possible, the height of the picture plane must be chosen carefully; in particular, it should cut through as many openings as possible (in practice, some 'cheating' may be tolerated here if it increases the clarity of the plan; but in case of doubt, it is advisable to stick to a literal interpretation of this process).

Objects such as steps, furniture, floor patterns etc. that are below the cutting plane can be projected into that plane and are then shown as in any other orthographic view. Important features above the cutting plane that are visible from it in reality can also be projected into the cutting plane, possibly with dashed lines to distinguish them from other objects. If the sole purpose of a plan is to show the ceiling pattern in this way, the plan is usually called a reflected ceiling plan. Many of these devices are illustrated in the first floor plan shown in Figure 2-25.

2-25 Standard drawings of a house in the Tudor style: (Top) Third floor plan; (Bottom) First floor plan

It is important to understand that all of the projections that come into play in such a drawing use the same family of projection lines normal to the cutting plane (and this family is unique). It is up to the designer to decide which of the many features projected by these lines are actually drawn.

The normal case, illustrated by the first floor plan in Figure 2-25, depicts vertical walls, that is, walls perpendicular to the picture plane, whose sides consequently project into lines. This is the single most important feature that makes floor plans so easy to construct. But consider the third floor plan shown in Figure 2-25. The rooms on this floor are enclosed by vertical walls only up to a certain height, above which the underside of the roof becomes the boundary of the room towards the outside (see the section in the same figure).
If one understands that a floor plan is an orthographic projection into a horizontal cutting plane, viewed from above, it becomes apparent how this situation can be drawn. The plan shown in Figure 2-25 uses a cutting plane slightly below the point where the roof meets the vertical portions of the walls. The plan depicts the mass of the roof as it is cut by the picture plane; the parts of the roof that extend from this plane outwards and down to the eaves appear in top view, while the parts that rise above the plane are indicated by dashed lines outlining the different sides of the roof that are visible from below. Details such as the elaborate articulation of the gables are drawn following the same principles.

A section is developed in the same way using a vertical cutting plane that cuts through the building. Elevations are developed with vertical picture planes that do not intersect the building. Features of the building beyond the picture plane are again shown as orthographic views to the degree of detail desired. See Figures 2-26 and 2-27.

2-26 Standard drawings of a house in the Tudor style: Longitudinal section

2-27 Standard drawings of a house in the Tudor style: Front elevation

In these constructions, the given adjacent views are always assumed to be placed relative to each other as they would appear when one of the picture planes is 'folded' at the folding line to become co-planar with the other picture plane, as demonstrated in Figures 2-21 and 2-22 (the folding or hinge line receives its name from this convention). In such an arrangement, any point in one view appears in the other view on a line perpendicular to the folding line; this line is, in fact, the image of a projection line projecting the point into the picture plane of the adjacent view. The projection line itself appears as a point in the other view, so every point on it maps to that single point.
Figure 2-28 shows two adjacent views of a building, a floor plan and a side elevation, generated from two perpendicular picture planes and arranged as two adjacent views in the way described. It is important to realize that in order to resolve all details in any one of these standard views, some of the more advanced techniques may have to be used. The core of descriptive geometry consists of a collection of constructions that can be used to depict spatial objects in orthographic views and to determine geometric properties of such objects from these views; in many cases, additional views must be generated in order to solve a certain problem. But these additional views can in most cases be obtained by purely 2-dimensional constructions if at least two adjacent views are given. This assumption underlies most constructions.

2-28 Two adjacent views of a building

2.7 VISIBILITY OF LINES

We have already stated that a projected object is hardly ever shown in its entirety, that is, by showing every point that belongs to the object, because this would lead to a completely unstructured, incomprehensible image. Most prominent among the features shown are the edges that form the boundary of the object (or of its parts) in the view under consideration. The edges appear as line segments or as curves in the view and are often simply called lines. If all of these lines are displayed, the view is called a wire-frame image. For all but the simplest objects, however, this type of view is still too confusing to be informative. Wire-frame views are therefore usually edited in order to make them more comprehensible. Most commonly, they show which lines are visible or hidden in a particular view. Hidden lines are either dashed (this is usual in many engineering fields) or omitted altogether (this is usual in architectural design).
Construction 2-2 Visibility/Intersection Test

Given two lines in two adjacent views, neither line perpendicular to the folding line, that meet at a point, X, in at least one view, t, determine which line is in front of the other (relative to t) at the intersection point.

1. Draw the projection line a through Xt into view f. There are two possibilities.
1.1 If the lines also meet at a point on a in view f, the lines truly intersect.
1.2 Otherwise, the lines do not intersect. Determine the spatial relation between the lines at Xt from the relative positions of their intersections with a in f: the line that intersects a at a point closer to the folding line than the other line is closer to the picture plane of f at that point; consequently, it is in front of the other line at Xt in t. For example, in the figure below, line l satisfies this condition.

The same test is applied if the two lines also intersect in view f. See Figure 2-29.

2-29 Intersection test: l is in front in view t and behind in view f

The obstructed line is often shown interrupted at the intersection point (when it is visible everywhere except at that point) or dashed (when an entire portion is obstructed), as the example below illustrates.

2.1.6 Worked example – Visibility

Determine the visibility of a three-sided pyramid in given top and front views, t and f. Consider the top and front views of the pyramid as shown in Figure 2-30.

2-30 Top and front views of a pyramid

The outside edges of the pyramid are visible in both views since the points of intersection coincide in both views. However, edges l and m intersect in both views, but not at the same point.
Applying the visibility test to the intersection point in the top view t, we find that the corresponding point on l is closer to the folding line than the corresponding point on m in the front view f. Similarly, applying the visibility test to the intersection point in the front view f, we find that the corresponding point on m is closer to the folding line than the corresponding point on l in the top view t. Consequently, we show m dashed in view t and l dashed in view f, as shown on the right side in Figure 2-31.

2-31 Visibility test for a pyramid

2.8 PRINCIPAL VIEWS (OR PROJECTIONS)

See Figure 2-32.

2-32 Principal views

It is important to note:
• Elevation views are always perpendicular to the horizontal view. That is, perpendicular distances below or above the horizontal plane are always seen in the elevation views.
• Lines of sight that are perpendicular to an elevation view are always horizontal.

Figure 2-33 shows the principal views swung into the plane of the drawing paper.

2-33 Principal views unfolded onto paper

2.9 AUXILIARY VIEWS

Definition 2-11: Auxiliary View
If p and q are two adjacent views, a view using a picture plane perpendicular to the picture plane used in p or q is an auxiliary view. A primary auxiliary view is a view using a picture plane perpendicular to one of the coordinate planes and inclined to the other two coordinate planes. A secondary auxiliary view is an auxiliary view perpendicular to a primary auxiliary view.
For each of the three picture planes parallel to the coordinate planes, there are three types of primary auxiliary views, depending on the coordinate plane to which their picture planes are perpendicular; possible picture planes for these views are shown in Figure 2-34.

2-34 Picture planes for principal auxiliary views for different coordinate planes

2.1.7 Auxiliary elevation views

Consider the left-most figure in Figure 2-34. It shows a plane perpendicular to the horizontal plane. As shown in Figure 2-35, we can consider a number of auxiliary views based on different lines of sight, each parallel to the horizontal plane (and hence horizontal) and each perpendicular to its picture plane.

2-35 Auxiliary views. Planes 2, 3, 4, 5 and 6 are all elevations, as each is perpendicular to the horizontal projection plane; the observer's line of sight remains horizontal when viewing elevations.

2-35 (continued) Unfolded auxiliary views

When solving descriptive geometry problems we eliminate the outlines of the projection planes and keep just the folding, hinge or reference lines, as shown in Figure 2-36. Note that point X is located at a distance H below the horizontal picture plane, as indicated by the projector from X1 to X2 in the top and front views. For the other auxiliary views 3, 4, 5 and 6, draw projectors from X1 perpendicular to the folding lines 1|k, k > 2, and transfer the distance H to obtain the points Xk.

2-36 Auxiliary elevation views

2.1.8 Auxiliary inclined views

The same idea applies when the line of sight is neither horizontal nor vertical. See Figure 2-37, where the viewer's line of sight is inclined at some angle and the picture plane perpendicular to the line of sight is inclined with respect to the top and front views.
2-37 Auxiliary inclined view

Figure 2-38 shows the top, front, and auxiliary projection planes. Note that as the top picture plane is perpendicular to the front plane, the viewer will see point X at a distance F behind the front plane. Likewise, as the auxiliary picture plane is perpendicular to the front plane, the point will be seen at the same distance F in the auxiliary view.

2-38 Auxiliary inclined view

2.10 TRANSFER DISTANCE

An auxiliary view of an object can be constructed from two adjacent views by use of the construction below. This construction reiterates the notion of 'transfer distance' introduced above: transferring a distance from an adjacent view to an auxiliary view. It is important that you thoroughly understand the construction, especially the role of the transfer distance. No other construction is more fundamental in descriptive geometry.

Construction 2-3 Transfer Distance

Given a point, X, in two adjacent views, t and f, construct an auxiliary view of X using a picture plane perpendicular to the picture plane of t.

There are three steps.
1. Call the auxiliary view a, and select a folding line, t | a, in t (any convenient line other than t | f will do).
2. Draw the projection line, lX, through Xt perpendicular to t | a.
3. Let dX be the distance of Xf from the folding line t | f. Xa (that is, the view of X in a) is the point on lX that has distance dX from the folding line t | a.

The distance dX is called a transfer distance. The construction is illustrated in Figure 2-39.
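Construction 2-3 lends itself to a short numerical sketch. The following Python function is a sketch under stated assumptions, not part of the text: the drawing plane carries ordinary (x, y) coordinates, the folding line t|a is specified by a point on it and a unit direction vector, Xt does not lie on t|a, and the auxiliary view is laid out on the far side of t|a from view t. The name auxiliary_view is hypothetical.

```python
import math

def auxiliary_view(Xt, Xf_dist, ta_point, ta_dir):
    """Construction 2-3 (transfer distance), sketched for a single point.

    Xt       -- (x, y) coordinates of the point in view t
    Xf_dist  -- distance of Xf from the folding line t|f (the transfer distance)
    ta_point -- any point on the chosen folding line t|a (drawn in view t)
    ta_dir   -- unit direction vector of t|a
    Returns the coordinates of Xa in the drawing plane.
    """
    # Foot of the perpendicular from Xt onto the folding line t|a
    # (this is the projection line lX meeting t|a).
    px, py = Xt[0] - ta_point[0], Xt[1] - ta_point[1]
    t = px * ta_dir[0] + py * ta_dir[1]
    foot = (ta_point[0] + t * ta_dir[0], ta_point[1] + t * ta_dir[1])
    # Unit normal of t|a pointing from the foot back toward Xt.
    nx, ny = Xt[0] - foot[0], Xt[1] - foot[1]
    norm = math.hypot(nx, ny)
    nx, ny = nx / norm, ny / norm
    # Step off the transfer distance on the far side of t|a.
    return (foot[0] - Xf_dist * nx, foot[1] - Xf_dist * ny)
```

For example, with Xt = (2, 3), the folding line t|a taken as the vertical line x = 5, and transfer distance 4, the construction places Xa at (9, 3), on the opposite side of t|a from Xt.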
2-39 Constructing an auxiliary view from two adjacent views: (Left) Constructing an auxiliary view of the point; (Right) Visualization of the construction

2.1.9 More on transfer distances

Figure 2-40 illustrates five consecutively adjacent projection planes in which two are always perpendicular to the third. Note that point P is located at a distance H below the horizontal view, F behind the front view and E behind the auxiliary elevation.

2-40 An example with five projection planes and its unfolding

2-41 Unfolding 2-40 and the method of transfer distances

2.1.10 Examples of auxiliary views from given top and front elevation views

A typical problem situation is that the top and front views are given along with an auxiliary axis specified as shown on the right.

When constructing an auxiliary view, especially if it is your first time at it or the drawing is truly dense, it is often wise to number the points. See Figure 2-42 (top left), where points 3 and 7, and 4 and 6, overlap in the top view, and points 1 and 3 overlap in the front view. We can now proceed as per Construction 2-3. See Figure 2-42 (right). Hidden lines are shown dashed as determined by Construction 2-2 for visibility of lines.

2-42 Constructing an auxiliary view: (Top Left) The problem with points numbered; (Right) Auxiliary view for the specified folding line with hidden lines shown dashed; (Bottom Left) Rendered view

Figures 2-43 and 2-44 give two further examples of given top and front views and the locations of the folding lines for the auxiliary view(s); the points are already numbered. These are left as exercises.

2-43 Example top, front and constructed auxiliary views

2-44 Another example of top, front and constructed auxiliary views. Lines not visible in any particular view are normally shown dashed (or dotted) in that view.
187834 | https://www.quora.com/What-conditions-determine-that-two-lines-in-3-D-space-are-parallel
What conditions determine that two lines in 3-D space are parallel?
In 3-D space, two lines are considered parallel if they meet the following conditions:

1. Direction Vectors: The direction vectors of the two lines must be scalar multiples of each other. If line L1 has a direction vector d1 and line L2 has a direction vector d2, then L1 and L2 are parallel if there exists a scalar k such that d1 = k d2 (or, equivalently, d2 = k d1).

2. Non-Intersecting: If the lines do not intersect, they are either parallel or skew. Skew lines are not parallel and do not lie in the same plane.

3. Same Plane Condition: For two lines to be parallel, they must lie in the same plane. If the lines are coplanar and their direction vectors are parallel, they are parallel lines.

In summary, for two lines in 3-D space to be parallel:
- Their direction vectors must be scalar multiples of each other.
- They can either be distinct parallel lines or coincident lines (the same line).
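The scalar-multiple condition on direction vectors is equivalent to their cross product being the zero vector, which avoids the division-by-zero cases of component-wise ratios. A minimal Python sketch (the helper name is hypothetical):

```python
def is_parallel(d1, d2, tol=1e-9):
    """Two 3-D lines are parallel iff their direction vectors are scalar
    multiples of each other, i.e. the cross product d1 x d2 vanishes."""
    cx = d1[1] * d2[2] - d1[2] * d2[1]
    cy = d1[2] * d2[0] - d1[0] * d2[2]
    cz = d1[0] * d2[1] - d1[1] * d2[0]
    return abs(cx) < tol and abs(cy) < tol and abs(cz) < tol
```

For example, (1, 2, 3) and (2, 4, 6) test parallel, while (1, 0, 0) and (0, 1, 0) do not.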
Michael Paglia
Former Journeyman Wireman IBEW
· 1y
Originally Answered: What is the condition for two lines being parallel in 3D?

Not much on 3D. But 2D slopes need to be the same, and their distance needs to be the same. I'd say same slope and same plane. I'm not getting how the intercepts work there, sorry. Hey, how about that, no Alzheimer's yet. Lost both mom and mom-in-law to it, so I know.
Donald Hartig
PhD in Mathematics, University of California, Santa Barbara (Graduated 1970)
· 6y
Originally Answered: How can you check if 2 lines are perpendicular in 3D space?

Dot their direction vectors. The lines are perpendicular if and only if the dot product is zero.
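This dot-product test can be sketched in one short Python function (the helper name is hypothetical; a tolerance is used for floating-point input):

```python
def is_perpendicular(d1, d2, tol=1e-9):
    """Two lines are perpendicular iff the dot product of their
    direction vectors is zero."""
    return abs(sum(a * b for a, b in zip(d1, d2))) < tol
```

For example, (1, 0, 0) and (0, 1, 0) are perpendicular, and so are (1, 2, 3) and (3, 0, -1), since 1*3 + 2*0 + 3*(-1) = 0.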
Jeff Koch
Physics / Math Professor
· 2y

The two lines must be coplanar to be parallel: any set of parallel lines must lie in the same 2-dimensional plane. And of course, they must be non-intersecting. In other words, they must have the same slope in their plane. If two lines are not in the same plane, they are called skew lines.
Alexandru Carausu
Former Assoc. Prof. Dr. (Ret) at Technical University "Gh. Asachi" Iasi (1978–2010)
· Updated 1y
Related: How can you determine if two lines are parallel? What happens when these lines intersect?

If the direction vectors of the two lines are (respectively) v_1 and v_2, then

(L_1) || (L_2) <==> v_1 || v_2. (1)

This means that the two vectors are collinear: the two free vectors can be represented by two line segments that lie on the same line or on parallel lines. If the two vectors have known Cartesian coordinates in the orthonormal system (O; i, j, k), then relation (1) is equivalent to the proportionality of their respective coordinates. In 3-D space, with v_1 (x_1, y_1, z_1) and v_2 (x_2, y_2, z_2),

(1) <==> x_1 / x_2 = y_1 / y_2 = z_1 / z_2. (2)

The common value of the three ratios in (2) can be set to λ, hence v_1 = λ v_2: the two direction vectors are proportional under the multiplication of vectors by scalars.

As regards the second question, this is more than strange! Two parallel lines do not intersect one another at any point: (L_1) ∩ (L_2) = ∅, the empty set.

Note: A few (minor) corrections operated on Thu, January 18, 9:39 AM (RO time)
Michael Paglia
Former Journeyman Wireman IBEW
· 1y
Related: How can you determine if two lines are parallel? What happens when these lines intersect?

I can tell you 3 ways, but I learned you needed to prove anything you used, and it seems some proofs are definitions these days. If 2 lines are intersected by a 3rd and the opposite angles are congruent, they are parallel; I'd have to have the actual theorem number from my school in 1973, but that's one way. Perpendicular lines are another. Any 2 lines that never meet are parallel. Lines have no length or width; line segments do. A point has no dimension.
Logia den Drase
Agent of the Alternate New South Wales Education Review
· 9y
Related: Given the components of two three dimensional vectors, how do you determine if the two are parallel to each other?

For each vector, take the largest component, and divide the other components by the largest component. ("Normalise the largest component to 1.") If these are parallel vectors, the normalised vectors will be identical.
Buddha Buck
Studied at University at Buffalo
· 9y
Related: Given the components of two three dimensional vectors, how do you determine if the two are parallel to each other?

Take their wedge product. If vectors →a and →b are parallel, then →a ∧ →b = 0.

If you just have their components, →a = a_1 →e_1 + a_2 →e_2 + a_3 →e_3 and →b = b_1 →e_1 + b_2 →e_2 + b_3 →e_3, then by remembering that the wedge product is antisymmetric (so →e_i ∧ →e_i = 0 and →e_j ∧ →e_i = −→e_i ∧ →e_j) and distributes over vector addition, you get

→a ∧ →b = (a_1 b_2 − a_2 b_1) →e_1 ∧ →e_2 + (a_1 b_3 − a_3 b_1) →e_1 ∧ →e_3 + (a_2 b_3 − a_3 b_2) →e_2 ∧ →e_3.

In order for that to be zero, all three components of the wedge product (e.g., a_1 b_2 − a_2 b_1) must be zero.
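In 3-D these three wedge-product components coincide (up to ordering and sign convention) with the components of the cross product, so the test is easy to sketch in Python (helper names are hypothetical):

```python
def wedge_components(a, b):
    """The three components of a ^ b in 3-D; all vanish iff a and b are parallel."""
    return (a[0] * b[1] - a[1] * b[0],   # coefficient of e1 ^ e2
            a[0] * b[2] - a[2] * b[0],   # coefficient of e1 ^ e3
            a[1] * b[2] - a[2] * b[1])   # coefficient of e2 ^ e3

def parallel_by_wedge(a, b, tol=1e-9):
    """Parallel test: every wedge-product component is (numerically) zero."""
    return all(abs(c) < tol for c in wedge_components(a, b))
```

For instance, wedge_components((1, 0, 0), (0, 1, 0)) is (1, 0, 0): only the e1 ∧ e2 coefficient survives, so the vectors are not parallel.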
Andrea Baisero
Does not compute.
· 11y
Related: Given the components of two three dimensional vectors, how do you determine if the two are parallel to each other?

Given x and y the above mentioned vectors, check that ⟨x, y⟩² = ⟨x, x⟩⟨y, y⟩.
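This is the equality case of the Cauchy–Schwarz inequality, which holds exactly when the two vectors are linearly dependent. A Python sketch (the helper name is hypothetical; an absolute tolerance is a crude but simple choice for floating-point input):

```python
def parallel_by_cauchy_schwarz(x, y, tol=1e-9):
    """<x, y>^2 == <x, x><y, y> holds exactly when x and y are
    linearly dependent (Cauchy-Schwarz with equality)."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    return abs(dot(x, y) ** 2 - dot(x, x) * dot(y, y)) < tol
```

For example, (1, 2, 3) and (2, 4, 6) give 28² = 784 on the left and 14 · 56 = 784 on the right, so they pass; orthogonal vectors fail.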
Ajay Sreenivas
Former Aerospace Engineer/ Staff Consultant at Ball Aerospace (1980–2010)
· 1y
Related: How can you determine if two lines are parallel? What happens when these lines intersect?

Draw two lines that are perpendicular to one of the lines at two different points, and compare the distances from these points to the points of intersection with the second line. If the lines are parallel, the distances will be equal; otherwise, the lines will intersect at some point.
Calvin L.
18, Mathematics & Statistics major
· 2y
Related: How do you find the condition of parallel lines?

Two lines are parallel if their gradients/slopes are the same. Parallel lines will never touch each other.

e.g. y=7x−3 and y=7x+14 are parallel since their gradient, 7, is the same. You'll realize that there is no value of x and y where these lines touch.

y=13x−6 and y=19 are not parallel since they have different gradients (13 and 0). You'll notice that these lines touch at the single point where 13x−6 = 19, that is, x = 25/13 and y = 19.
Justin Rising
PhD in statistics
· Upvoted by David Joyce, Ph.D. Mathematics, University of Pennsylvania (1979)
· 5y
Related: Are two lines parallel if they are the same line?

Do you agree with the following statements?

1. If a is parallel to b, then b is parallel to a.
2. If a is parallel to b and b is parallel to c, then a is parallel to c.
3. For every line a, there is some line b such that a is parallel to b.

If so, you are forced to conclude that every line is parallel to itself.
Alexis Sullivan
Senior Product Manager
·
11mo
Related
How do you tell if two lines are parallel?
Method 1 of 3:Comparing the Slopes of Each Line
The slope of a line is defined by (Y2 - Y1)/(X2 - X1) where X and Y are the horizontal and vertical coordinates of points on the line. You must define two points on the line to calculate this formula. The point closer to the bottom of the line is (X1, Y1) and the point higher on the line, above the first point, is (X2, Y2).
This formula can be restated as the rise over the run. It is the change in vertical difference over the change in horizontal difference, or the steepness of the line.
If a line points upwards to the right, it will have a positive
Method 1 of 3: Comparing the Slopes of Each Line
The slope of a line is defined by (Y2 - Y1)/(X2 - X1) where X and Y are the horizontal and vertical coordinates of points on the line. You must define two points on the line to calculate this formula. The point closer to the bottom of the line is (X1, Y1) and the point higher on the line, above the first point, is (X2, Y2).
This formula can be restated as the rise over the run. It is the change in vertical difference over the change in horizontal difference, or the steepness of the line.
If a line points upwards to the right, it will have a positive slope.
If the line is downwards to the right, it will have a negative slope.
Identify the X and Y coordinates of two points on each line.
A point on a line is given by the coordinate (X, Y) where X is the location on the horizontal axis and Y is the location on the vertical axis. To calculate the slope, you need to identify two points on each of the lines in question.
Points are easily determined when you have a line drawn on graphing paper.
To define a point, draw a dashed line up from the horizontal axis until it intersects the line. The position that you started the line on the horizontal axis is the X coordinate, while the Y coordinate is where the dashed line intersects the line on the vertical axis.
For example: line l has the points (1, 5) and (-2, -4) while line r has the points (3, 3) and (1, -4).
Plug the points for each line into the slope formula.
To actually calculate the slope, simply plug in the numbers, subtract, and then divide. Take care to plug in the coordinates to the proper X and Y value in the formula.
To calculate the slope of line l: slope = (5 – (-4))/(1 – (-2))
Subtract: slope = 9/3
Divide: slope = 3
The slope of line r is: slope = (3 – (-4))/(3 - 1) = 7/2
Compare the slopes of each line.
Remember, two lines are parallel only if they have identical slopes. Lines may look parallel on paper and may even be very close to parallel, but if their slopes are not exactly the same, they aren’t parallel.
In this example, 3 is not equal to 7/2, therefore, these two lines are not parallel.
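The whole of Method 1 fits in a few lines of code. A minimal sketch using the example's points (following the arithmetic in the worked computation, line l passes through (1, 5) and (-2, -4)):

```python
def slope(p1, p2):
    """Rise over run: (y2 - y1) / (x2 - x1)."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

# Line l through (1, 5) and (-2, -4); line r through (3, 3) and (1, -4).
slope_l = slope((1, 5), (-2, -4))   # 9/3 = 3.0
slope_r = slope((3, 3), (1, -4))    # 7/2 = 3.5
print(slope_l == slope_r)           # False -> the lines are not parallel
```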
Method 2 of 3: Using the Slope-Intercept Formula
1. Define the slope-intercept formula of a line.
The formula of a line in slope-intercept form is y = mx + b, where m is the slope, b is the y-intercept, and x and y are variables that represent coordinates on the line; generally, you will see them remain as x and y in the equation. In this form, you can easily determine the slope of the line as the variable "m".
For example, consider 4y - 12x = 20 and y = 3x - 1. The equation 4y - 12x = 20 needs to be rewritten with algebra, while y = 3x - 1 is already in slope-intercept form and does not need to be rearranged.
Rewrite the formula of the line in slope-intercept form.
Oftentimes, the formula of the line you are given will not be in slope-intercept form. It only takes a little math and rearranging of variables to get it into slope-intercept.
For example: Rewrite line 4y-12x=20 into slope-intercept form.
Add 12x to both sides of the equation: 4y – 12x + 12x = 20 + 12x
Divide each side by 4 to get y on its own: 4y/4 = 12x/4 +20/4
Slope-intercept form: y = 3x + 5.
Compare the slopes of each line.
Remember, when two lines are parallel to each other, they will have the exact same slope. Using the equation y = mx + b where m is the slope of the line, you can identify and compare the slopes of two lines.
In our example, the first line has an equation of y = 3x + 5, therefore its slope is 3. The other line has an equation of y = 3x – 1, which also has a slope of 3. Since the slopes are identical, these two lines are parallel.
Note that if these equations had the same y-intercept, they would be the same line instead of parallel.
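Method 2 can also be sketched programmatically: any line ax + by = c with b ≠ 0 rearranges to y = (−a/b)x + c/b, so the slope and intercept fall out directly. A small illustration using the example's two lines:

```python
def slope_intercept(a, b, c):
    """Rewrite a*x + b*y = c as y = m*x + k (assumes b != 0)."""
    return -a / b, c / b

# 4y - 12x = 20  ->  a = -12, b = 4, c = 20
m1, k1 = slope_intercept(-12, 4, 20)   # (3.0, 5.0): y = 3x + 5
# y = 3x - 1     ->  -3x + y = -1
m2, k2 = slope_intercept(-3, 1, -1)    # (3.0, -1.0): y = 3x - 1
print(m1 == m2)  # True: equal slopes, so the lines are parallel
print(k1 == k2)  # False: different intercepts, so they are distinct lines
```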
Method 3 of 3: Defining a Parallel Line with the Point-Slope Equation
1. Define the point-slope equation.
Point-slope form allows you to write the equation of a line when you know its slope and have an (x, y) coordinate. You would use this formula when you want to define a second parallel line to an already given line with a defined slope. The formula is y – y1= m(x – x1) where m is the slope of the line, x1 is the x coordinate of a point given on the line and y1 is the y coordinate of that point. As in the slope-intercept equation, x and y are variables that represent coordinates on the line; generally, you will see them remain as x and y in the equation.
The following steps will work through this example: Write the equation of a line parallel to the line y = -4x + 3 that goes through point (1, -2).
Determine the slope of the first line.
When writing the equation of a new line, you must first identify the slope of the line you want to draw yours parallel to. Make sure the equation of the original line is in slope-intercept form and then you know the slope (m).
The line we want to draw parallel to is y = -4x + 3. In this equation, -4 represents the variable m and therefore, is the slope of the line.
This equation only works if you have a point that the new line passes through. Make sure you don’t choose a point that is on the original line. If your final equations have the same y-intercept, they are not parallel, but the same line.
In our example, we will use the coordinate (1, -2).
Write the equation of the new line with the point-slope form.
Remember the formula is y – y1= m(x – x1). Plug in the slope and coordinates of your point to write the equation of your new line that is parallel to the first.
Using our example with slope (m) -4 and (x, y) coordinate (1, -2): y – (-2) = -4(x – 1)
Simplify the equation.
After you have plugged in the numbers, the equation can be simplified into the more common slope-intercept form. This equation's line, if graphed on a coordinate plane, would be parallel to the given equation.
For example: y – (-2) = -4(x – 1)
Two negatives make a positive: y + 2 = -4(x -1)
Distribute the -4 to x and -1: y + 2 = -4x + 4.
Subtract 2 from both sides: y + 2 – 2 = -4x + 4 – 2
Simplified equation: y = -4x + 2
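The point-slope construction reduces to one line of algebra: from y − y1 = m(x − x1), the new intercept is y1 − m·x1. A minimal sketch reproducing the worked example:

```python
def parallel_through(m, point):
    """Line with slope m through (x1, y1), in slope-intercept form.
    From y - y1 = m*(x - x1):  y = m*x + (y1 - m*x1)."""
    x1, y1 = point
    return m, y1 - m * x1

m, b = parallel_through(-4, (1, -2))
print(f"y = {m}x + {b}")   # y = -4x + 2, matching the worked example
```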
187835 | https://psu.pb.unizin.org/chem112maluz4/ | Chemistry 112- Chapters 12-17 of OpenStax General Chemistry – Simple Book Publishing
Book Title: Chemistry 112- Chapters 12-17 of OpenStax General Chemistry
by OpenStax
Book Description: The textbook provides an important opportunity for students to learn the core concepts of chemistry and understand how those concepts apply to their lives and the world around them. The book also includes a number of innovative features, including interactive exercises and real-world applications, designed to enhance student learning.
License: Creative Commons Attribution
Contents
Preface
Chapter 12. Kinetics
Introduction
12.1 Chemical Reaction Rates
12.2 Factors Affecting Reaction Rates
12.3 Rate Laws
12.4 Integrated Rate Laws
12.5 Collision Theory
12.6 Reaction Mechanisms
12.7 Catalysis
Chapter 13. Fundamental Equilibrium Concepts
Introduction
13.1 Chemical Equilibria
13.2 Equilibrium Constants
13.3 Shifting Equilibria: Le Châtelier’s Principle
13.4 Equilibrium Calculations
Chapter 14. Acid-Base Equilibria
Introduction
14.1 Brønsted-Lowry Acids and Bases
14.2 pH and pOH
14.3 Relative Strengths of Acids and Bases
14.4 Hydrolysis of Salt Solutions
14.5 Polyprotic Acids
14.6 Buffers
14.7 Acid-Base Titrations
Chapter 15. Equilibria of Other Reaction Classes
Introduction
15.1 Precipitation and Dissolution
15.2 Lewis Acids and Bases
15.3 Multiple Equilibria
Chapter 16. Thermodynamics
Introduction
16.1 Spontaneity
16.2 Entropy
16.3 The Second and Third Laws of Thermodynamics
16.4 Free Energy
Chapter 17. Electrochemistry
Introduction
17.1 Balancing Oxidation-Reduction Reactions
17.2 Galvanic Cells
17.3 Standard Reduction Potentials
17.4 The Nernst Equation
17.5 Batteries and Fuel Cells
17.6 Corrosion
17.7 Electrolysis
Appendix
Appendix A: The Periodic Table
Appendix B: Essential Mathematics
Appendix C: Units and Conversion Factors
Appendix D: Fundamental Physical Constants
Appendix E: Water Properties
Appendix F: Composition of Commercial Acids and Bases
Appendix G: Standard Thermodynamic Properties for Selected Substances
Appendix H: Ionization Constants of Weak Acids
Appendix I: Ionization Constants of Weak Bases
Appendix J: Solubility Products
Appendix K: Formation Constants for Complex Ions
Appendix L: Standard Electrode (Half-Cell) Potentials
Appendix M: Half-Lives for Several Radioactive Isotopes
Book Information
Author
OpenStax
License
Chemistry 112- Chapters 12-17 of OpenStax General Chemistry Copyright © 2016 by Rice University is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
Subject
Chemistry
© Jun 20, 2016 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License 4.0 license.
Under this license, any user of this textbook or the textbook contents herein must provide proper attribution as follows:
The OpenStax College name, OpenStax College logo, OpenStax College book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the creative commons license and may not be reproduced without the prior and express written consent of Rice University. For questions regarding this license, please contact partners@openstaxcollege.org.
If you use this textbook as a bibliographic reference, then you should cite it as follows:OpenStax, Chemistry. OpenStax CNX. Jun 20, 2016
If you redistribute this textbook in a print format, then you must include on every physical page the following attribution:
“Download for free at
If you redistribute part of this textbook, then you must retain in every digital format page view (including but not limited to EPUB, PDF, and HTML) and on every physical printed page the following attribution:
“Download for free at
187836 | https://web.stanford.edu/~lindrew/8.044.pdf | 8.044: Statistical Physics I Lecturer: Professor Nikta Fakhri Notes by: Andrew Lin Spring 2019 My recitations for this class were taught by Professor Wolfgang Ketterle.
1 February 5, 2019

This class’s recitation teachers are Professor Jeremy England and Professor Wolfgang Ketterle, and Nicolas Romeo is the graduate TA. We’re encouraged to talk to the teaching team about their research – Professor Fakhri and Professor England work in biophysics and nonequilibrium systems, and Professor Ketterle works in experimental atomic and molecular physics.
1.1 Course information

We can read the online syllabus for most of this information. Lectures will be in 6-120 from 11 to 12:30, and a 5-minute break will usually be given after about 50 minutes of class. The class’s LMOD website will have lecture notes and problem sets posted – unlike some other classes, all pset solutions should be uploaded to the website, because the TAs can grade our homework online. This way, we never lose a pset and don’t have to go to the drop boxes.
There are two textbooks for this class: Schroeder’s “An Introduction to Thermal Physics” and Jaffe’s “The Physics of Energy.” We’ll have a reading list that explains which sections correspond to each lecture. Exam-wise, there are two midterms on March 12 and April 18, which take place during class and contribute 20 percent each to our grade.
There is also a final that is 30 percent of our grade (during finals week). The remaining 30 percent of our grade comes from 11 or 12 problem sets (lowest grade dropped).
Office hours haven’t been posted yet; they will also be posted on the website once schedules are sorted out.
1.2 Why be excited about 8.044?
One of the driving principles behind this class is the phrase “More is different.” We can check the course website for the reading “More is Different” by P.W. Anderson.
Definition 1 Thermodynamics is a branch of physics that provides phenomenological descriptions for properties of macroscopic systems in thermal equilibrium.
Throughout this class, we’ll define each of the words in the definition above, and more generally, we’re going to learn about the physics of energy and matter as we experience it at normal, everyday time and length scales. The most important feature is that we’re dealing with the physics of many particles at once – in fact, we’re going to be doing a statistical description of about 10²⁴ particles at once. It would be very hard and basically useless to try to use ordinary equations of motion to describe the behavior of each particle.
Fact 2 Because thermodynamics is a study of global properties, like magnetism or hardness, the largeness of our systems will often actually be an advantage in calculations.
The concept of time asymmetry will also come up in this class. In Newton’s laws, Maxwell’s equations, or the Schrodinger equation, there is no real evidence that time needs to travel in a certain direction for the physics to be valid. But the “arrow of time” is dependent on some of the ideas we’ll discuss in this class.
Two more ideas that will repeatedly come up are temperature and entropy. We’ll spend a lot of time precisely understanding those concepts, and we’ll understand that it doesn’t make sense to talk about the temperature of an individual particle – it only does to define temperature with regards to a larger system. Meanwhile, entropy is possibly the most influential concept coming from statistical mechanics: it was originally understood as a thermodynamic property of heat engines, which is where much of this field originated. But now, entropy is science’s fundamental measure of disorder and information, and it can quantify ideas from image compression to the heat death of the Universe.
Here’s a list of some of the questions we’ll be asking in this class:
• What is the difference between a solid, liquid, and gas?
• What makes a material an insulator or a conductor?
• How do we understand other properties of materials, like magnets, superfluids, superconductors, white dwarfs, neutron stars, stretchiness of rubber, and physics of living systems?
None of these are immediately apparent from the laws of Newton, Maxwell, or Schrodinger. Instead, we’re going to need to develop a theoretical framework with two main parts:
• Thermodynamics: this is the machinery that describes macroscopic quantities such as entropy, temperature, magnetization, and their relationship.
• Statistical mechanics: this is the statistical machinery at the microscopic level. What are each of the degrees of freedom doing in our system?
These concepts have been incorporated into different other STEM fields: for example, they come up in Monte-Carlo methods, descriptions of ensembles, understanding phases, nucleation, fluctuations, bioinformatics, and (now the foundation of most of physics) quantum statistical mechanics.
1.3 An example from biology

Many living systems perform processes that are irreversible, and the behavior of these processes can be quantified in terms of how much entropy is produced by them. Statistical physics and information theory help us do this! As a teaser, imagine we have a biological system where movement of particles is influenced by both thermal motion and motor proteins. By watching a video, we can track each individual particle, and looking at the trajectory forward and backward, we can construct a relative entropy

⟨Ṡ⟩/k_B ≡ D[p_forward || p_backward] = Σ p_f ln(p_f / p_b)

which compares the probability distributions of forward and backward motion, and the point is that this relates to the entropy production rate of the system! But it’ll take us a lot of work to get to that kind of result, so we’ll start with some definitions and important concepts.
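The relative entropy here is a Kullback–Leibler divergence between the forward and backward trajectory statistics, which is easy to sketch for a toy discrete distribution (the two-outcome probabilities below are invented for illustration):

```python
import math

def relative_entropy(p_forward, p_backward):
    """D[p_f || p_b] = sum_i p_f(i) * ln(p_f(i) / p_b(i)), in units of k_B."""
    return sum(pf * math.log(pf / pb)
               for pf, pb in zip(p_forward, p_backward) if pf > 0)

# If forward and backward statistics agree, no entropy is produced;
# any time asymmetry gives D > 0.
print(relative_entropy([0.5, 0.5], [0.5, 0.5]))       # 0.0
print(relative_entropy([0.9, 0.1], [0.1, 0.9]) > 0)   # True
```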
To summarize this general overview, there are two complementary paths going on here: Thermodynamics ⇒ global properties ⇒ (temperature, entropy, magnetization, etc.), and Statistical physics ⇒ (microscopic world to macroscopic world).
We’ll also spend time on two “diversions:” quantum mechanics will help us construct the important states that we will end up “counting” in statistical physics, and basic probability theory will give us a statistical description of the properties we’re trying to describe (since entropy itself is an information theory metric)! To fully discuss these topics, we’re going to need some mathematics, particularly multivariable calculus.
1.4 Definitions

We’ll start by talking about the basic concepts of heat, internal energy, thermal energy, and temperature.
Definition 3 (Tentative) Thermal energy is the collective energy contained in the relative motion of a large number of particles that compose a macroscopic system. Heat is the transfer of that thermal energy.
(We’ll try to be careful in distinguishing between energy and the transfer of that energy throughout this class.) Definition 4 Internal energy, often denoted U, is the sum of all contributions to the energy of a system as an isolated whole.
This internal energy U is usually made up of a sum of different contributions:
• Kinetic energy of molecular motion, including translational, vibrational, and rotational motion,
• Potential energy due to interactions between particles in the system, and
• Molecular, atomic, and nuclear binding energies.
Notably, this does not include the energy of an external field or the kinetic and potential energy of the system as a whole, because we care about behavior that is internal to our system of study.
Example 5 Consider a glass of water on a table, and compare it to the same glass at a higher height. This doesn’t change the internal energy, even though the glass has gained some overall gravitational potential energy.
Definition 6 (Tentative) Temperature is what we measure on a thermometer.
As a general rule, if we remove some internal energy from a system, the temperature will decrease. But there are cases where it will plateau as well! For example, if we plot temperature as a function of the internal energy, it is linear for each phase state (solid, liquid, vapor), but plateaus during phase changes, because it takes some energy to transform ice to water to vapor. And now we’re ready to make some slightly more precise definitions: Definition 7 Let U0 be the energy of a system at temperature T = 0. Thermal energy is the part of the internal energy of a system above U = U0.
Notice that with this definition, the binding energy does not contribute to thermal energy (because that’s present even at T = 0, U = U0), but the other sources of internal energy (kinetic energy, potential energy) will still contribute.
Definition 8 Heat is the transfer of thermal energy from a system to another system.
This is not a property of the system: instead, it’s energy in motion! And heat transfer can occur as heat conduction or radiation, a change in temperature, or other things that occur at the microscopic level.
1.5 States and state variables

The next discussion is a little bit more subtle – we want to know what it means for our system to be in a particular state. In classical mechanics, a state is specified by the position and velocity of all objects at time t. So if we’re given the two numbers {x_i(t), ẋ_i(t)} for each i (that is, for every particle in our system), we have specified everything we might want to know. Meanwhile, in quantum mechanics, the state of a system is specified by quantum numbers: for example, |n₁, · · · , n_M⟩ (for some nonnegative integers n_i) is one way we might describe the system. But we have a completely different definition of “state” now that we’re in a macroscopic system: Definition 9 A system which has settled down is in a state of thermodynamic equilibrium or thermodynamic macrostate.
Here are some of the characteristics of a system at thermal equilibrium:
• The temperature of the system is uniform throughout the space.
• More generally, at the macroscopic scale, no perceptible changes are occurring, though there are still changes at the microscopic level (like atomic or molecular movement).
• The system is dynamic (meaning that the system continues to evolve and change at the microscopic level).
It’s important to note that other properties of the system (that are not temperature) can be non-uniform at equilibrium! For example, if we mix water and oil, there will obviously be some differences in different parts of the system no matter how long we wait for the atoms to mix.
Definition 10 State functions and state variables are properties that we can use to characterize an equilibrium state.
4 Some examples are pressure P, temperature T, volume T, the number of particles N, and the internal energy U. Note that some quantities are only defined for systems at thermal equilibrium, such as pressure and temperature, while others are defined for more general systems, such as volume.
Another big part of this class is coming up with equations that relate these state functions: Example 11 The most famous equation of state, the ideal gas law (PV = NkBT), dictates the behavior of an ideal gas.
Definition 12 A macrostate is a particular set of values for the state variables of a system. Meanwhile, a microstate tells us more at the particle level, specifying the state of each individual particle.
For example, if we have a glass of water, we could (in principle) track each particle, writing down a microstate and describing our system at the microscopic level. But there are many different configurations that give a specific pressure and temperature, so a vast number of microstates can be consistent with a given macrostate.
Definition 13 A macrostate’s multiplicity is the number of microstates consistent with that macrostate, and an ensemble is defined as the set of all possible microstates that are consistent with a given macrostate.
This class will develop methods for describing such ensembles corresponding to a specific macrostate, and in particular one important consideration is that each microstate in the ensemble occurs with some probability. So we’ll be developing some probability theory in the next few lectures, and that will help us approach the physics more precisely.
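The multiplicity idea can be made concrete with a toy system of N two-state spins, where the macrostate records only the total number of "up" spins; a short sketch (N = 4 is an illustrative choice, not from the notes):

```python
from itertools import product
from math import comb

# Toy system: N two-state spins. A microstate lists every spin; the
# macrostate only records the total number of "up" spins.
N = 4
microstates = list(product([0, 1], repeat=N))     # 2^N = 16 microstates
multiplicity = {k: sum(1 for s in microstates if sum(s) == k)
                for k in range(N + 1)}
print(multiplicity)                   # {0: 1, 1: 4, 2: 6, 3: 4, 4: 1}
print(multiplicity[2] == comb(4, 2))  # True: the multiplicity is binomial
```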
2 February 6, 2019 (Recitation)

2.1 Introduction

Usually, Professor Ketterle introduces himself in the first class, and this time he notices some people from 8.03. We’ll start today by giving a spiel for 8.044, and this class is particularly exciting to Professor Ketterle because he feels a connection to his research! Professor Fakhri does statistical physics in the classical world (in cells in aqueous solution), while Professor Ketterle examines systems at cold temperatures, which require quantum mechanics to describe. But it doesn’t always matter whether the microscopic picture is quantum or classical! In this class, we’re going to learn a framework for systems that have many degrees of freedom, but where we only know the macrostate (such as a liter of water at a certain pressure and temperature).
Fact 14 Professor Ketterle’s research can be described as taking temperature T →0.
His lab has gotten to temperatures of 450 picoKelvin, which is “pretty cold.” (In fact, according to Wikipedia, it’s the coldest temperature ever achieved.) For comparison, the background temperature in interstellar space is about 2.5 Kelvin, which is more than a billion times warmer than what’s been achieved in labs.
Low-temperature developments have opened up fields in physics today, because when we cool gas down to that regime, quantum mechanics becomes more apparent. Basically, when atoms have higher energy (at higher temperatures), they behave like classical particles that collide. But in reality, atoms should be thought of as de Broglie waves – it’s just that in the classical situations, the de Broglie wavelength λ = h/(mv) does not play any role in the collisions, because λ is shorter than the size of the nucleus. But when we have an atom, which has low m, and cool it down so that v is small, the de Broglie wavelength can increase to the order of a micron or millimeter, which is large at atomic scales. So if we have a bunch of gas particles, and each particle’s wave is localized, new forms of matter can form with completely new properties, and that’s what makes low-temperature physics interesting.
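To see the scales involved, here is a rough numerical sketch of λ = h/(mv), with v taken as the rms thermal speed √(3 k_B T / m); the choice of a sodium atom is illustrative, not from the notes:

```python
import math

h = 6.626e-34         # Planck constant, J*s
kB = 1.381e-23        # Boltzmann constant, J/K
m_na = 23 * 1.66e-27  # mass of a sodium atom, kg (illustrative choice)

def de_broglie(T):
    """de Broglie wavelength lambda = h / (m v), with v = sqrt(3 kB T / m)."""
    v = math.sqrt(3 * kB * T / m_na)
    return h / (m_na * v)

print(f"{de_broglie(300):.2e} m")   # ~3e-11 m at room temperature
print(f"{de_broglie(1e-6):.2e} m")  # ~5e-7 m (order of a micron) at a microkelvin
```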
Remark 15. When Professor Ketterle was an undergraduate student, he found statistical mechanics fascinating. He found it attractive that we can predict so much in physics given so few pieces of information: just using statistics and ideas of “randomness” (so that we have isotropy of velocities), we can make many predictions: find limits on efficiency of engines, derive gas laws, and so on. So we’ll get a taste of that through this class!
Professor Ketterle wants recitations to be interactive, so we should bring questions to the sessions. (He does not want to copy to the blackboard what we can just read from solutions.) Concepts are important: it’s important to address all of the important key points so that we can understand derivations. However, to become a physicist, we do need to do problems as well. So in order to compensate for the focus on problems in lecture, usually we will not have material “added.” So Professor Ketterle will prepare topics that help us get a deeper understanding or general overview of concepts, but we can also bring questions about homework if we have any.
2.2 Review of lecture material

There were many buzzwords covered during the first lecture. We started thinking about what it means to have a state (microstates and macrostates, both classical and quantum), ensemble, energy (thermal energy Q, internal energy U, heat, and work), and temperature (which is particularly interesting in the limit T →0 or T →∞). Basically, we have introduced certain words, and it will take some practice to get fluent in those ideas.
Question 16. How do we explain to a non-physicist why there is an “absolute zero” lowest temperature? (This is a good question to ask a physicist.) In fact, how is temperature actually defined?
We still don’t have a formal definition of temperature, but let’s look at an example of a real-life situation in which temperature is very relevant: an ideal gas.
Fact 17 Here’s an important related fact: starting on May 20, 2019, the kilogram will be defined in terms of the fixed constant ℏ, and along with that, the definition of a Kelvin will also change to depend on the Boltzmann constant kB rather than the triple point of water.
So the Boltzmann constant will soon specify our units – it’s some constant that is approximately 1.38 × 10⁻²³ J/K (and will come up frequently in the rest of this class), and now temperature is related to a measure of energy: k_B T has units of joules.
In an ideal gas (in which we neglect weak interactions between particles), we’ll make the definition E = (1/2) m v̄² ≡ (3/2) k_B T, where v̄² will be defined next lecture. If we take all of this for granted, the lowest possible temperature must occur when there is no kinetic energy: if v̄² →0, T →0. And this also explains why there is no negative temperature in the ideal gas – we can’t have less than zero energy. So absolute zero is essentially created by definition: it’s a situation in which there is no kinetic energy for any of our particles. For an analogous situation, we can measure the pressure in a vacuum chamber, which is proportional to the density of particles. And thus the lowest pressure is zero, since we can’t have negative particles.
On the other hand, what’s the highest temperature we can achieve? In principle, there is no upper limit: we can make kinetic energy per particle arbitrarily large. We can add some entertainment to the situation as well: even though the velocity v is upper bounded by c, we do have a divergent expression for relativistic energy which is not just (1/2)mv²: KE = m₀c² / √(1 − v²/c²).
In other words, we can keep adding energy to a particle, and it will just get heavier (without going over the speed of light). And in the lab, the highest temperatures we’ve ever achieved are around 2 × 10¹² K in particle accelerators. In general, if we take two nuclei and smash them together when moving near the speed of light, temperature changes happen during the actual collision. Then energy is converted into particles, and we have a piece of matter in which we are almost at thermal equilibrium.
But what really happens at 10¹² Kelvin is interesting on its own. As our temperature rises, a solid melts, a liquid evaporates, molecules dissociate, atoms ionize into electrons and ions, and then ions lose more and more electrons until they are bare nuclei. If we go hotter, the nuclei dissociate as well! All of this behavior does actually happen in white dwarfs and neutron stars, but if we go even hotter, the protons and neutrons will dissociate into quarks, and we get a soup of quarks and gluons. Overall, it’s interesting that we’ve actually achieved room temperature times 10¹⁰, and we’ve also achieved room temperature divided by 10¹⁰, and there is physical behavior going on at both temperatures.
And now we can talk about what “negative temperature” actually means – we’ll have a more rigorous discussion when we study spin systems, but it’s good for us to know that there are some magnetic systems that can reach infinite temperatures at finite energies. What’s basically happening is that we’re traversing 1/T from ∞ to 0: when we cross over 0, we can get into negative temperatures, and thus negative temperatures are in some way “even hotter” than infinite temperatures! Expressions of the form e^(−E/(k_B T)) are going to show up frequently in this class, so we will actually get 1/T reasonably often in our calculations. And we’ll understand why this happens as probability theory comes into the picture!
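The behavior of the Boltzmann factor e^(−E/(k_B T)) as 1/T passes through zero can be checked numerically; the energy scale below is an arbitrary illustrative choice:

```python
import math

kB = 1.381e-23  # Boltzmann constant, J/K

def boltzmann_weight(E, T):
    """Relative occupation of a state at energy E: exp(-E / (kB * T))."""
    return math.exp(-E / (kB * T))

E = 1e-21  # J, an arbitrary excited-state energy
# Positive T: higher-energy states are exponentially suppressed.
print(boltzmann_weight(E, 300) < 1)               # True
# T -> infinity: all states become equally likely (weight -> 1).
print(abs(boltzmann_weight(E, 1e12) - 1) < 1e-6)  # True
# Negative T: higher-energy states are *more* occupied -- "hotter than infinite".
print(boltzmann_weight(E, -300) > 1)              # True
```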
3 February 7, 2019

The first lecture was a glossary of the terms we will see in this class; we’ll be slowly building up those concepts over the semester. The first problem set will be posted today, and the deadline will generally be the following Friday at 9pm. Lecture notes have been uploaded, and they will generally be uploaded after each class (as will some recitation materials) under the Materials tab of the LMOD website. For example, starting next Monday in recitation, there will be a review of partial derivatives, and the notes for those are posted already.
Also, we have two graduate TAs this semester! Pearson (from last semester’s 8.03 team) will also be available to help. General office hours will also be posted by tomorrow, but in general, we should use email instead of Piazza.
3.1 Overview We will always begin lecture with a small overview of what will be covered. Statistical physics and thermodynamics are for bringing together the macroscopic and microscopic world, and we’re going to start by defining state functions like pressure and temperature using a tractable, simply-modeled system and working from first principles.
Essentially, we will use a monatomic ideal gas to define temperature and pressure, and then we will derive the ideal gas law. From there, we’ll see how to make empirical corrections to have a more realistic understanding of a system (for example, a van der Waals gas). We’ll also briefly talk about the equipartition theorem, which lets us connect temperature to energy, as well as the first law of thermodynamics, which is basically a restatement of conservation of energy.
3.2 The simplest model Definition 18 A monatomic ideal gas is a system with N molecules, each of which is a single atom with no internal dynamics (such as rotation or vibration). The molecules collide elastically as point particles (and take up no space), and the only energy in the system is kinetic energy.
So putting in our definitions, the kinetic energy, thermal energy, and internal energy are all essentially the same thing, and they are all equal to $$U = \sum_{i=1}^N E_i = \frac{m}{2}\sum_{i=1}^N \left(v_{ix}^2 + v_{iy}^2 + v_{iz}^2\right)$$ if all molecules have the same mass. Now assuming that we have an isotropic system, we can assume the three coordinates have equal averages, and we can define an average (squared) velocity $$v^2 \equiv \langle v_{ix}^2\rangle = \langle v_{iy}^2\rangle = \langle v_{iz}^2\rangle.$$
Plugging this in, the average internal energy is then $$U_{\text{avg}} = N\langle E\rangle = \frac{3}{2}Nmv^2.$$
Definition 19 The temperature is a measure of the thermal energy of the system, given by $$mv^2 \equiv k_BT$$ where $k_B$ is a proportionality constant and $T$ has units of Kelvin (in degrees above absolute zero).
The Boltzmann constant $k_B$ has units of energy per temperature, and it is experimentally about $1.381 \times 10^{-23}$ J/K.
Plugging this in, we find that the internal energy of an ideal gas is $$U = \frac{3}{2}Nk_BT.$$
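As a quick numerical sketch (my own check, not part of the notes), we can evaluate $U = \frac{3}{2}Nk_BT$ for one mole of a monatomic gas at room temperature:

```python
# Internal energy of a monatomic ideal gas, U = (3/2) N k_B T.
# The one-mole-at-300-K numbers below are my own illustrative choice.
k_B = 1.381e-23  # J/K, Boltzmann constant
N_A = 6.022e23   # particles per mole

def internal_energy(N, T):
    """Total kinetic (= internal) energy of N monatomic particles at temperature T."""
    return 1.5 * N * k_B * T

U = internal_energy(N_A, 300.0)  # one mole at 300 K: roughly 3.7 kJ
```

Each particle carries only about $\frac{3}{2}k_BT \approx 6 \times 10^{-21}$ J, but multiplied by Avogadro’s number this becomes a macroscopic amount of energy.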
Fact 20 Usually chemists write this slightly differently: $N$ is defined to be $N_An$, where $N_A$ is Avogadro’s number $\approx 6.022 \times 10^{23}\,\text{mol}^{-1}$, and $n$ is the number of moles of the gas. Then the ideal gas constant is defined to be $R \equiv N_Ak_B \approx 8.314$ J/(mol K), and our equation can be written as $$U = \frac{3}{2}Nk_BT = \frac{3}{2}nRT.$$
The idea is that each of the three dimensions is contributing an equal amount to the energy in the system. We’ve used the fact that particles have energy to define a temperature, and now we’ll similarly use the fact that particles have momentum as well to define pressure. Consider a container shaped like a box with a piston as one of the walls (in the x-direction). We know that by Newton’s law, the force can be described as $F_x = \frac{dp_x}{dt}$, where $p_x = mv_x$ is the momentum in the x-direction for one of the particles. Since the piston will barely move, we can just say that the particle will reverse its x-momentum when it bounces off, but has no change in the other two directions: $$\Delta p_x = 2mv_x, \quad \Delta p_y = \Delta p_z = 0.$$
If we let the cross-sectional area of the piston-wall be $A$ and the length of the box in the x-direction be $\ell$, then the time between two collisions with the piston is $\Delta t = \frac{2\ell}{v_x}$ (since it must hit the opposite wall and bounce back). So now plugging in our values, the average force from this one molecule is $$F_x = \frac{\Delta p_x}{\Delta t} = \frac{mv_x^2}{\ell}.$$
Assuming no internal molecular collisions (since we have an ideal gas), the total force on the piston for this system is then $$F_x = \sum_{i=1}^N F_{x,i} = \frac{Nm}{\ell}v^2 = \frac{N}{\ell}k_BT$$ by our definition of temperature. So now the pressure on the piston, defined to be force per unit area, is $$P = \frac{F_x}{A} = \frac{Nk_BT}{\ell A} = \frac{Nk_BT}{V}.$$ Now we’re making some assumptions: if we say that collisions and interactions with the wall don’t matter, and that the shape of the container does not matter (an argument using small volume elements), we can rearrange this as $$PV = Nk_BT,$$ which is our first equation of state for the class. (As a sidenote, the shape of the container will matter if our system is out of equilibrium, though.) So pressure, volume, and temperature are not independent: knowing two of them defines the third in our ideal gas system, and we’re beginning to find a way to relate our state functions!
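To make the equation of state concrete, here is a small sanity check (my own, with assumed standard conditions) that $PV = Nk_BT$ reproduces roughly atmospheric pressure for a mole of gas in 22.4 L at 273.15 K:

```python
# Ideal gas law: P = N k_B T / V. The specific values (1 mol at STP) are
# assumptions chosen to check the formula against the familiar ~1 atm result.
k_B = 1.381e-23  # J/K

def pressure(N, T, V):
    """Pressure of an ideal gas with N particles at temperature T in volume V."""
    return N * k_B * T / V

P = pressure(6.022e23, 273.15, 22.4e-3)  # close to 101325 Pa (1 atm)
```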
3.3 An empirical correction Let’s start modifying our equation of state now – we’re going to use the chemistry version of the ideal gas law, where $n$ is measured in moles. One key assumption that we have right now is that the particles have no volume: we have to make some corrections if we don’t have point particles any more. So we change our volume: letting $b$ be some measure of how much volume is taken up by the particles, we replace $V \to V - nb$.
Also, some particles may have attractive intermolecular forces, and to introduce this, we claim the pressure will change as $$P = \frac{nRT}{V - nb} - a\left(\frac{n}{V}\right)^2.$$
The constants $a$ and $b$ are empirically measured in a lab, but the point is that these modifications give us the van der Waals equation $$\left(P + a\frac{n^2}{V^2}\right)(V - nb) = nRT.$$
This means the effective volume for a real gas is smaller than for an ideal gas, but the pressure can be larger or smaller than for an ideal gas, because we could have attractive or repulsive molecular interactions.
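A short comparison sketch makes the correction tangible. The CO2 constants below ($a \approx 0.364$ J m³/mol², $b \approx 4.27 \times 10^{-5}$ m³/mol) are rough literature values I’m assuming for illustration, not numbers from the notes:

```python
# Ideal vs. van der Waals pressure, P = nRT/(V - nb) - a (n/V)^2.
R = 8.314  # J/(mol K)

def p_ideal(n, T, V):
    return n * R * T / V

def p_vdw(n, T, V, a, b):
    # b shrinks the available volume; a lowers the pressure via attraction
    return n * R * T / (V - n * b) - a * (n / V) ** 2

# 1 mol in 1 liter at 300 K, with rough CO2 constants (an assumption):
Pi = p_ideal(1.0, 300.0, 1e-3)
Pw = p_vdw(1.0, 300.0, 1e-3, a=0.364, b=4.27e-5)
# At this density the attractive term wins, so Pw < Pi.
```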
3.4 The equipartition theorem If we take another look at the equation for internal energy $$U = \frac{3}{2}Nk_BT = 3N\left(\frac{1}{2}k_BT\right),$$ notice that our system has $3N$ degrees of freedom: one in each of the x, y, and z coordinates for each of the particles.
Proposition 21 (Classical equipartition theorem) At thermal equilibrium at temperature $T$, each quadratic degree of freedom contributes $\frac{1}{2}k_BT$ to the total internal energy $U$ of the system.
This is important for being able to consistently define temperature! Unfortunately, this is only true in classical limits at high temperatures. And we should make sure we’re precise with our language: Definition 22 A degree of freedom is a quadratic term in a single particle’s energy (or Hamiltonian). Examples include:
• translational (in each coordinate) about the center of mass: $\frac{1}{2}m\left(v_x^2 + v_y^2 + v_z^2\right)$,
• rotational (in each axis): $\frac{\ell_x^2}{2I_x}$ and so on, and
• vibrational: $\frac{1}{2}m\dot{x}^2 + \frac{1}{2}kx^2$, imagining a molecule with two atoms in simple harmonic oscillation.
Example 23 Let’s try writing down the different degrees of freedom for molecules of a diatomic gas.
In such a system, we have 3 translational degrees of freedom (looking at the center of mass), 2 rotational degrees of freedom (we don’t have the third because there’s no moment of inertia about the axis connecting the two atoms), and 2 vibrational degrees of freedom (coming from the simple harmonic oscillator of the two atoms stretching). Thus, by the equipartition theorem, we already know that we’ll have $U = 7N\left(\frac{1}{2}k_BT\right)$, and that’s the power of the equipartition theorem: it allows us to have a general method for relating energy and temperature. And as an exercise, we should try to figure out why a simple crystalline solid has internal energy $U = 3Nk_BT$, using the same kind of argument.
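The counting above reduces to a one-line formula; here is a sketch of the theorem with the degree-of-freedom values taken from the text:

```python
# Equipartition: U = f * N * (k_B T / 2) for f quadratic degrees of freedom.
k_B = 1.381e-23  # J/K

def internal_energy(f, N, T):
    return f * N * 0.5 * k_B * T

f_monatomic = 3          # translation only
f_diatomic = 3 + 2 + 2   # translation + rotation + vibration (KE and PE both count)
f_solid = 6              # 3 vibrational modes, each quadratic in both x and p

# The exercise from the text: a simple crystalline solid has U = 3 N k_B T.
assert internal_energy(f_solid, 1, 1.0) == 3 * k_B
```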
3.5 The first law of thermodynamics We now have a relationship between thermal energy and temperature, and our next step is to think about how the energy of a system can change. We’ll start by trying to define a relationship between work and pressure. Let’s go back to the piston wall that we used to derive the ideal gas law: the differential work being done on the piston by a particle is dW = F dx = PA dx = P dV.
We will use the convention in this class where whenever dV < 0, work is being done on the system. So the change in the internal energy of the system is the mechanical work done, and dU = −P dV (with the negative sign because work is being done by, rather than on, the system.) But there are also other ways to transfer energy, particularly through heat (which we denote with the letter Q).
We’ll use the convention that heat flow into the system is positive, so $dU = dQ$. So we can write these together to get a total change in internal energy $$dU = dQ + dW.$$
In other words, the internal energy of the system changes if we add infinitesimal heat to it, or if the system does work externally. We can also add particles to the system to further modify this first law: we’ll see later on that we sometimes also get a contribution of the form $dU = \mu\,dN$, where $\mu$ is the chemical potential; then $$dU = dQ + dW + \mu\,dN.$$
Explicitly, the whole point of this kind of statement is to have an energy conservation law, but implicitly, we also have that U, the internal energy of the system, is also a state function. But we should keep in mind that work and heat are path-dependent quantities, so the expressions dQ and dW are inexact differentials. And in particular, W, Q are not state functions!
Example 24 Consider an evolution of a system as follows. Start in state 1, with N1 particles, a temperature of T1, and a volume of V1. This tells us that we have some internal energy U(N1, T1, V1). We can take this to state 2 by adding some heat, so now we have N2 particles, a temperature of T2, and a volume of V2, giving us a new internal energy. Finally, perform some work on the system to take us back to state 1.
The idea is that $dU$ is an exact differential, so the internal energy doesn’t depend on the path taken. But $dW$ and $dQ$ are both inexact and path-dependent quantities – the example above showed us that $dU = 0$ over the whole loop, but the work and heat done depend on what state 2 is. So from here on out, we will use the notation $\bar{d}W$ and $\bar{d}Q$, and now we write the first law as $$dU = \bar{d}Q + \bar{d}W.$$
So calculus tells us that $dU$ can be obtained from differentiation (of some other function), while $\bar{d}Q$ and $\bar{d}W$ cannot.
In general, state functions can be divided into generalized displacements and generalized forces, which will be denoted $\{x_i\}$ and $\{j_i\}$, respectively. These $x_i$s and $j_i$s are conjugate variables, and the idea is to write our differential work as $$\bar{d}W = \sum_i j_i\,dx_i.$$
Here are some sample state functions that we’ll be working with throughout this class:

Forces ($j_i$) | Displacements ($x_i$)
Pressure $P$ | Volume $V$
Tension $F$ | Length $L$
Surface tension $\sigma$ | Area $A$
Magnetic field $H$ | Magnetization $M$
Chemical potential $\mu$ | Number of particles $N$

The quantities under “displacements” are generally extensive, while the quantities under “forces” are intensive.
We’ll use these words more as the class progresses, but the basic point is that scaling the system up changes our generalized displacements, but not the generalized forces.
4 February 11, 2019 (Recitation) 4.1 Questions and review We’ll begin with a few ideas from the homework. If we want to calculate the work done by moving from one state to another, we can integrate along the path taken and compute $$W = \int_{\text{initial}}^{\text{final}} P\,dV.$$
Many such problems are solved using PV diagrams, which plot each state based on their pressure (on the y-axis) and volume (on the x-axis), and knowing $P$ and $V$ tells us the temperature $T$ (through the equation of state) as well. And if we have an ideal gas, lines $PV = Nk_BT$ on the PV diagram are where the temperature is constant. For example, if we have an isothermal expansion along one of these lines, we find that $$\int P\,dV = \int \frac{Nk_BT}{V}\,dV = Nk_BT(\ln V_f - \ln V_i).$$
There are other kinds of work that can be specified: for instance, isobaric compression is done under constant pressure, so the work is just $P\,\Delta V$. But we’ll come back to all of this later!
Remark 25. State functions are all the parameters that characterize a system. So for an ideal gas, pressure, volume, and temperature are enough – if we are asked to find the final state, we just need to find the values of those parameters.
Fact 26 The word adiabatic (referring to a process in which a system evolves) has two different definitions in statistical physics. In one definition, no heat is exchanged, meaning that $\bar{d}Q = 0$ throughout the process. (This can happen if our system has insulating walls, so no heat is transferred in or out of the system, or if a process proceeds quickly enough so that heat can’t move.) But in the other definition, our process is slow enough so that the system is always in equilibrium. And the main feature of this definition (which isn’t true in the other) is that entropy is conserved.
Adiabaticity in quantum mechanics means that a particle doesn’t deviate from its quantum state, because the process happens very gradually. (If we try to evolve too slowly, though, we will get noise that also interferes with the system.) The bottom line is that adiabaticity forces our system to act “not too slow and not too fast,” so that we get the desired constraints.
4.2 Energy Let’s start from the first law of thermodynamics, stated as $dU = \bar{d}W + \bar{d}Q$.
As physicists, we don’t have access to absolute truth: we do our best to find better and better approximations. So Professor Ketterle doesn’t always like it when we call things “laws:” for example, why are we trying to test Coulomb’s law to the tenth decimal place, and why do we do it for two electrons that are very close? Regardless, the “law” above is a statement about energy.
Question 27. What’s a form of energy that is internal energy but not thermal energy?
Two possible answers are “the binding energy of the nucleus” or “the mass energy,” but these aren’t exactly correct.
Thermal energy is supposed to “come and go” as our system heats up, so let’s think about a system of water molecules at increasing temperature. At first, our molecules gain kinetic energy, and then after we continue to heat, chemical energy will change through dissociation. So in this system, the binding energy of hydrogen is “reaction enthalpy,” which is indeed considered in thermal energy.
And similarly, if we increase temperature so that the kinetic energy is comparable to rest mass energy, we get issues with relativity. If two such particles collide, they can create a particle-antiparticle pair, and in this regime, even the rest mass becomes part of a dynamic process. Therefore, that rest mass can become thermal energy as well. One takeaway we can have is that U = Ethermal + U0 for some constant U0, and we basically always only care about differences in energy anyway. So this distinction between “thermal energy” and “internal energy” isn’t really that important in Professor Ketterle’s eyes.
4.3 Scale of systems Question 28. Can a single particle have a temperature? That is, can we have an ensemble consisting of one particle?
Normally, we are given some constant $P, V, T, U$, and a microstate is specified by a set of positions and momenta $\{x_i, p_i : 1 \le i \le N\}$, where $N$ is a large number of particles. It turns out, though, that even single particles can be thermal ensembles! For example, if we connect a particle in a box to a certain thermal reservoir at temperature $T$, we can find a “Boltzmann probability distribution” $\propto e^{-E/(k_BT)}$ for being at state $E$ (this is a point which we’ll study later). So having an ensemble just means we have many copies of a system that are equally prepared macroscopically, regardless of how many particles this system has, as long as we follow all of the important laws.
And in particular, remember that in an ideal gas, we’ve assumed the particles are not interacting! So it’s perfectly fine to take N →1 for an ideal gas; rephrased, an ideal gas is just N copies of a single-particle system.
Remark 29. Schrödinger once said that the Schrödinger equation only describes ensembles when measurements are applied many times. He made the claim that the equation would not apply to just one particle, but recently, single photons, atoms, and ions have been observed repeatedly, and it was shown that the quantum mechanical ideas apply there too.
So we may think it’s nonsense that statistics can apply to a single particle, but we can often study a complicated system by simplifying it into multiple copies of a simple one.
4.4 Partial derivatives Consider the two equations $$\frac{dz}{dx} = \frac{dz}{dy}\frac{dy}{dx}, \qquad \frac{dz}{dx} = -\frac{dz}{dy}\frac{dy}{dx}.$$
We can ask ourselves “which one is correct?”. One general rule to keep in mind is that in each field of physics, we need to learn some mathematics. In particular, two tools we’ll need to learn for this class are partial derivatives and statistics. In the handout posted online, we’ll see the second statement, but of course we need to ask ourselves why we don’t just cancel the $dy$s like we have done in ordinary calculus.
The key point is that we use the two equations in different situations. The first equation is valid when $z$ is a function of $x$ through an intermediate variable $y$. Rephrased, if we have a function $z(y(x))$, such as $\exp(\sin x)$, then the chain rule indeed tells us that $$\frac{dz}{dx} = \frac{dz}{dy}\frac{dy}{dx}.$$
This is true for a function that depends on a single independent parameter. But on the other hand, what if we have a function where $x$ and $y$ are independent variables: that is, we have $z(x, y)$? (For example, pressure is a function of volume and temperature.) Now $z$ can change by the multivariable chain rule, and we can say that $$dz = \left(\frac{\partial z}{\partial x}\right)_y dx + \left(\frac{\partial z}{\partial y}\right)_x dy.$$
Often we’ll have a situation where we want to keep $z$ constant: for example, we might be keeping the pressure constant as we heat up our system, and we’re thinking about pressure as a function of other state variables. In a situation like that, we have $$0 = \left(\frac{\partial z}{\partial x}\right)_y dx + \left(\frac{\partial z}{\partial y}\right)_x dy,$$ so now $x$ and $y$ must be changed in a certain ratio: $$\frac{dy}{dx} = -\frac{(\partial z/\partial x)_y}{(\partial z/\partial y)_x}.$$ And the left hand side of this equation, in more rigorous language, is just $\left(\frac{\partial y}{\partial x}\right)_z$, the derivative as we keep $z$ fixed. So the second equation is now true, and the moral of the story is that we need to be very careful about what variables are being kept constant. We’ll do much more study of this in the following weeks!
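We can check this rule numerically with a toy example of my own (not from the recitation): take $z(x, y) = xy$, so holding $z$ fixed gives $y = z_0/x$ and $dy/dx = -z_0/x^2$, which should match $-(\partial z/\partial x)_y / (\partial z/\partial y)_x$:

```python
# Finite-difference check of (dy/dx)_z = -(dz/dx)_y / (dz/dy)_x for z = x*y.
def z(x, y):
    return x * y

x0, y0, h = 2.0, 3.0, 1e-6

dz_dx = (z(x0 + h, y0) - z(x0 - h, y0)) / (2 * h)  # (∂z/∂x)_y = y0
dz_dy = (z(x0, y0 + h) - z(x0, y0 - h)) / (2 * h)  # (∂z/∂y)_x = x0

z0 = z(x0, y0)
dy_dx_const_z = -z0 / x0**2  # exact derivative along the curve y = z0/x

assert abs(dy_dx_const_z - (-dz_dx / dz_dy)) < 1e-6
```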
5 February 12, 2019 We’ll start with some housekeeping: our first problem set is due on Friday, and there will be office hours for any questions we have. If we click on the instructor names on the course website, we can see what times office hours are being held. (Hopefully we all have access to the online class materials: if there are any problems, we should send an email to the staff.) Last lecture, we introduced thermodynamics with a simple and tractable model (the ideal gas). Once we defined pressure and temperature, we derived an equation of state, and we learned that we can empirically modify the ideal gas law to capture real-life situations more accurately. Next, we introduced the first law of thermodynamics, which is essentially conservation of energy. We learned that there are many ways to do work, and we can write the infinitesimal work as the product of a “force” and its corresponding “conjugate variable.” This led us to the generalized first law $$dU = \bar{d}Q + \bar{d}W, \qquad \bar{d}W = \sum_i j_i\,dx_i,$$ where each $j_i$ is a generalized force and $x_i$ is its conjugate generalized displacement.
In recitation, we reviewed some material about partial derivatives, and there will be a “zoo” of them in this class.
Any macroscopic quantity can be found by taking derivatives of “free energy” (which will be defined later as well). For example, we can take derivatives of energy to find temperature and pressure, which will be helpful since we can use statistical physics to find general quantities like the free energy or entropy. But again, this is all a preview for what will become more rigorous later.
5.1 Experimental properties Definition 30 A response function is a quantity that changes when parameters of a system are adjusted; response functions are used to characterize the macroscopic properties of that system.
Basically, we introduce a perturbation to a system, and we can then observe the response in our measurements.
Example 31 (Heat capacity) Suppose we add some heat to a system, and we want to keep track of what happens to the temperature.
We need to be careful, because the system can change while keeping pressure constant or while keeping volume constant. Both are useful quantities, and we’ll define the heat capacities $$C_V \equiv \left(\frac{\bar{d}Q}{dT}\right)_V, \qquad C_P \equiv \left(\frac{\bar{d}Q}{dT}\right)_P.$$
(These can be thought of as variables for a gas on which we perform experiments.) Example 32 (Force constant) Suppose we apply a force $F$ on our system, and we want to see the displacement $x$ that results from this external force. (This is a generalization of a spring constant.) We can therefore define an effective force constant via the equation $$\frac{dx}{dF} \equiv \frac{1}{k}.$$
For example, we can define the isothermal compressibility of a gas via $$K_T \equiv -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_T.$$
This is, again, something we can measure experimentally.
Example 33 (Thermal responses) The expansivity of a system is defined as $$\alpha_P = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_P.$$
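For an ideal gas both response functions come out simply: $K_T = 1/P$ and $\alpha_P = 1/T$. A finite-difference sketch (my own check, not from the notes) confirms this from $V(P, T) = Nk_BT/P$:

```python
# Isothermal compressibility K_T = -(1/V)(∂V/∂P)_T and expansivity
# α_P = (1/V)(∂V/∂T)_P for an ideal gas, estimated by central differences.
k_B = 1.381e-23  # J/K

def volume(P, T, N=6.022e23):
    return N * k_B * T / P

P0, T0, h = 101325.0, 300.0, 1e-3

K_T = -(volume(P0 + h, T0) - volume(P0 - h, T0)) / (2 * h) / volume(P0, T0)
alpha_P = (volume(P0, T0 + h) - volume(P0, T0 - h)) / (2 * h) / volume(P0, T0)

assert abs(K_T * P0 - 1) < 1e-4      # K_T ≈ 1/P
assert abs(alpha_P * T0 - 1) < 1e-4  # α_P ≈ 1/T
```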
And finally, if we have an equation of the form $$dU = \bar{d}Q + \sum_i j_i\,dx_i,$$ it makes sense to try to write $\bar{d}Q$ in a similar way as well. And it turns out that if we treat $T$, the temperature, as a force, we can find a conjugate displacement variable $S$ (called the entropy)! So for a reversible process (which we will discuss later), we can write $$\bar{d}Q = T\,dS,$$ and now all of our contributions to the internal energy are in the same form.
5.2 Experimental techniques Now that we’ve discussed the abstract quantities that we care about measuring, we should ask how we actually measure pressure, volume, and temperature for a given system. We generally deal with quasi-equilibrium processes, in which the process is performed sufficiently slowly that the system is always (basically) at equilibrium at any given time. This means that thermodynamic state functions do actually exist throughout our evolution, so we can always calculate well-defined values of P, V, T, and other state functions. And the work done on the system (which is the negative of the work done by the system) is related to changes in thermodynamic quantities, as we wrote above.
Example 34 Let’s say we want to measure the potential energy of a rubber band experimentally, and we do this by stretching the rubber band and applying a force.
The idea is that if we are performing the stretching slowly enough, at any point, the force that we apply is basically the same as the internal force experienced by the system. That means that we can indeed take the force we’re applying to the rubber band, and we will find that $U = \int F\,d\ell$.
5.3 PV diagrams Since our state functions are only defined in equilibrium states, all derivatives are also only described in the space of equilibrium states.
Definition 35 In a PV diagram, the pressure $P$ is plotted on the y-axis, the volume $V$ is plotted on the x-axis, and every equilibrium state lies somewhere on the graph. The work done by or on the system as it transitions from a state I to a state II is defined via $$W_{\text{by}} = -W_{\text{on}} = \int_{I}^{II} P\,dV.$$
The idea here is that (as we discussed with the ideal gas) pressure is force per area, and volume is length times area, so this integral is basically computing $\int F\,dx$. There are ways to go from one state to another without being in equilibrium states along the way as well: for example, if we have sudden free expansion, there is no heat being exchanged and no work done on or by the system, so $$\Delta Q = \Delta W = 0 \implies U_A = U_B.$$
This tells us that U is a function of only the temperature of the gas (because it doesn’t change even when we change our pressure and volume).
Example 36 (Isothermal expansion) Consider a situation in which a gas moves along an isotherm in the PV diagram: for an ideal gas, the equation of this isotherm is just PV = NkBT.
As the name suggests, we’re keeping the temperature of the system constant while we compress the ideal gas. If we start with a volume $V_1$ and pressure $P_1$, and we end up at volume $V_2$ and pressure $P_2$, then the work done on the system is $$-\int_{V_1}^{V_2} P\,dV = \int_{V_2}^{V_1} \frac{Nk_BT}{V}\,dV = Nk_BT(\ln V_1 - \ln V_2) = Nk_BT\ln\frac{V_1}{V_2},$$ since $N$, $k_B$, and $T$ are all constants along the isotherm. If we define $r = \frac{V_1}{V_2}$, and we want to know how much heat is required for this process, we must have $$0 = dU \implies Q = -W = -Nk_BT\ln r,$$ because the internal energy $U(T)$ does not change if $T$ is fixed.
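A minimal sketch of this result, with my own illustrative numbers (one mole at 300 K compressed to half its volume):

```python
# Isothermal compression: W_on = N k_B T ln(V1/V2), and Q = -W_on since dU = 0.
import math

k_B = 1.381e-23  # J/K

def isothermal_work_on(N, T, V1, V2):
    return N * k_B * T * math.log(V1 / V2)

N, T = 6.022e23, 300.0
W_on = isothermal_work_on(N, T, 2.0, 1.0)  # r = V1/V2 = 2: about 1.7 kJ of work
Q = -W_on  # that energy must leave as heat to keep T fixed
```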
Example 37 (Adiabatic compression) In an adiabatic process, there is no heat added or removed from the system (for example, this happens if we have an isolated container).
Since $\bar{d}Q = 0$, the first law of thermodynamics tells us that $$dU = \bar{d}W = -P\,dV.$$
Since we have an ideal gas, we also know that $$PV = Nk_BT, \qquad U = \frac{f}{2}Nk_BT,$$ where $f$ is the number of active degrees of freedom for the molecules of the gas. We’ll now manipulate these equations a bit: in differential form, we have (using the product rule) that $$P\,dV + V\,dP = Nk_B\,dT, \qquad dU = \frac{f}{2}Nk_B\,dT.$$
Combining these equations, we have a relation between changes in internal energy and the state variables: $$dU = \frac{f}{2}(P\,dV + V\,dP).$$
So now since $dU = -P\,dV$ from above, $$-P\,dV = \frac{f}{2}(P\,dV + V\,dP) \implies (f+2)P\,dV + fV\,dP = 0.$$
Definition 38 The adiabatic index for a gas with $f$ degrees of freedom is given by $$\gamma = \frac{f+2}{f}.$$
Now if we rearrange the equation and integrate, $$0 = \gamma P\,dV + V\,dP \implies \frac{dP}{P} = -\gamma\frac{dV}{V} \implies \frac{P}{P_1} = \left(\frac{V_1}{V}\right)^\gamma.$$
This tells us that $PV^\gamma$ is constant, and equivalently that $TV^{\gamma-1}$ and $T^\gamma P^{1-\gamma}$ are constant as well. So now the rest is just integration: the work done on the system is $$W = -\int_{V_1}^{V_2} P\,dV = -P_1V_1^\gamma \int_{V_1}^{V_2} \frac{dV}{V^\gamma} = -P_1V_1^\gamma\,\frac{V_2^{1-\gamma} - V_1^{1-\gamma}}{1-\gamma}.$$
Plugging in our definition of $r$, $$W = \frac{Nk_BT_1}{\gamma-1}\left(r^{\gamma-1} - 1\right).$$
This quantity depends on the number of degrees of freedom in the system, but we can see that in general, the work done for an isothermal process is less than for an adiabatic process! (This is true because the curves $PV^\gamma = c$ are “steeper” in a PV diagram than those for $PV = c$; the adiabatic compression also increases the temperature, so more work is required for us to get the same change in volume.) Fact 39 It’s hard to design an adiabatic experiment, since it’s hard to insulate a system completely from its surroundings.
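We can verify the comparison numerically (my own check, using a monatomic gas with $\gamma = 5/3$ and a compression ratio $r = 2$):

```python
# Work done on the gas for the same compression V1 -> V2 = V1/r:
# isothermal: W = N k_B T1 ln r;  adiabatic: W = N k_B T1 (r^(γ-1) - 1)/(γ - 1).
import math

def w_isothermal(nkt1, r):
    return nkt1 * math.log(r)

def w_adiabatic(nkt1, r, gamma):
    return nkt1 / (gamma - 1) * (r ** (gamma - 1) - 1)

nkt1 = 8.314 * 300.0  # N k_B T1 for one mole at 300 K (illustrative numbers)
r, gamma = 2.0, 5.0 / 3.0

# Compressing adiabatically also heats the gas, so it takes more work:
assert w_adiabatic(nkt1, r, gamma) > w_isothermal(nkt1, r) > 0
```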
Example 40 (Isometric heating) In isometric heating, we keep the volume of a system constant while we add heat (the temperature goes up).
In a PV diagram, this corresponds to moving vertically up. Since there is no change in volume, there is no work done on the system, and thus $dU = \bar{d}Q$: any internal energy change is due to addition of heat. But this is a measurable quantity: we can write $$\left(\frac{\bar{d}Q}{dT}\right)_V = \left(\frac{\partial U}{\partial T}\right)_V = C_V,$$ and we can measure the change in internal energy to find the heat capacity $C_V$ experimentally. Since the energy is dependent only on temperature, for an ideal gas, $$U = \frac{f}{2}Nk_BT \implies C_V = \frac{f}{2}Nk_B.$$
We can also define $\hat{c}_V$ (per molecule) as $\frac{C_V}{N} = \frac{f}{2}k_B$.
Finally, we have one more important process of change along PV diagrams: Example 41 (Isobaric heating) This time, we keep the pressure constant and move horizontally in our PV diagram.
Let’s differentiate the first law of thermodynamics with respect to $T$: since $\bar{d}W = -P\,dV$, $$\bar{d}Q = dU - \bar{d}W \implies \left(\frac{\bar{d}Q}{dT}\right)_P = \left(\frac{\partial U}{\partial T}\right)_P + P\left(\frac{\partial V}{\partial T}\right)_P.$$
Since pressure is constant, our internal energy $U$ can be viewed as a function of $T$ and $V$, and taking differentials, $$dU = \left(\frac{\partial U}{\partial T}\right)_V dT + \left(\frac{\partial U}{\partial V}\right)_T dV.$$
Dividing through by a temperature differential to make this look more like the equation we had above, $$\left(\frac{\partial U}{\partial T}\right)_P = \left(\frac{\partial U}{\partial T}\right)_V + \left(\frac{\partial U}{\partial V}\right)_T\left(\frac{\partial V}{\partial T}\right)_P.$$
Combining our equations by substituting the second equation into the first, $$\left(\frac{\bar{d}Q}{dT}\right)_P = C_V + \left[P + \left(\frac{\partial U}{\partial V}\right)_T\right]\left(\frac{\partial V}{\partial T}\right)_P.$$
But the left side is $C_P$, so we now have our general relation between $C_P$ and $C_V$: $$C_P = C_V + \left[P + \left(\frac{\partial U}{\partial V}\right)_T\right]\left(\frac{\partial V}{\partial T}\right)_P.$$
Example 42 Let’s consider this equation when we have an ideal gas.
Then $U$ is only a function of $T$, so $\left(\frac{\partial U}{\partial V}\right)_T$ is zero. This leaves $$C_P = C_V + P\left(\frac{\partial V}{\partial T}\right)_P,$$ and since we have an ideal gas where $PV = Nk_BT$, $\left(\frac{\partial V}{\partial T}\right)_P$ is just $\frac{Nk_B}{P}$, and we have $$C_P = C_V + Nk_B.$$
Example 43 What if we have an incompressible system, like in solids or liquids?
Then the volume does not change noticeably with respect to temperature, so $$C_P = C_V + P\left(\frac{\partial V}{\partial T}\right)_P.$$
Defining $\alpha_P = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_P$, $$C_P = C_V + PV\alpha_P.$$
For ideal gases, $\alpha_P = \frac{1}{T}$, and at room temperature this is about $\frac{1}{300}$ K$^{-1}$. For solids and liquids, the numbers are smaller: $\alpha_P = 10^{-6}$ K$^{-1}$ for quartz and $\alpha_P = 2 \times 10^{-4}$ K$^{-1}$ for water. This essentially means $C_P \approx C_V$ for solids and liquids!
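An order-of-magnitude sketch of $C_P - C_V \approx PV\alpha_P$ per mole makes the contrast concrete (the 18 mL molar volume of water is my own assumed value, not from the notes):

```python
# C_P - C_V per mole: equal to R for an ideal gas, but tiny for a liquid.
R = 8.314       # J/(mol K)
P = 101325.0    # Pa, atmospheric pressure

# Ideal gas at 300 K: α_P = 1/T and V = RT/P, so P V α_P = R exactly.
T = 300.0
gas_difference = P * (R * T / P) * (1 / T)

# Water: α_P ≈ 2e-4 1/K (from the text), molar volume ≈ 18 mL (assumed):
water_difference = P * 18e-6 * 2e-4  # a few 1e-4 J/(mol K)

assert abs(gas_difference - R) < 1e-9
assert water_difference < 1e-3  # hence C_P ≈ C_V for liquids
```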
Let’s look back again at isometric heating. We found that the change in the state function $U$ is exactly the amount of heat we added to the system, so in this case, we can directly measure $dU = \bar{d}Q$. Is there a new state function such that the change in heat for isobaric heating is the same as the change in that state function? That is, is there some quantity $H$ such that $\bar{d}Q = dH$? The answer is yes, and we’ll discuss this next time! It’s the enthalpy, $H \equiv U + PV$.
6 February 13, 2019 (Recitation) Today, we have a few interesting questions, and we’ll be using clickers! We can see how we will respond to seemingly simple questions, because Professor Ketterle likes to give us twists.
6.1 Questions Let’s start by filling in the details of “adiabatic” processes. In thermodynamics, there are two definitions of “adiabatic” in different contexts.
Fact 44 “dia” in the word means “to pass through,” much like “diamagnetic.” In fact, in Germany, slide transparencies are called “dia”s as well.
So “adia” means nothing passes through: a system in a perfectly insulated container does not allow transfer of heat. Adiabatic will mean thermally isolated in general!
On the other hand, we have adiabatic compression (which we discussed in class), in which we have an equation of the form $PV^\gamma = c$. But what is adiabatic expansion? It sounds like it should just be a decompression: perhaps it is just a reverse of the adiabatic compression process. But this isn’t quite right.
In compression, we do the compression slowly: we assume the equation of state for the ideal gas is always valid, so we are always at equilibrium. Indeed, there also exists an adiabatic expansion that is very slow. But in problems like the pset, we can have sudden changes: a sudden, free expansion is not in equilibrium all the time, and it is not reversible!
Fact 45 Adiabatic compression increases the temperature, and slow adiabatic expansion does the opposite. But in sudden free expansion, the change in internal energy of the system is 0 (as $\bar{d}W = \bar{d}Q = 0$). So the temperature is constant in free expansion.
All three processes have $\bar{d}Q = 0$, since there is no heat transfer. The point is to be careful about whether we have reversible processes, since different textbooks may have different interpretations! We’ll talk about entropy later, but the key idea is that the slow adiabatic compression and expansion are isentropic: $dS = 0$.
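To make the contrast concrete (a sketch with my own numbers), compare the final temperature after doubling the volume in the two kinds of adiabatic process:

```python
# Free expansion leaves T unchanged; slow (isentropic) adiabatic expansion
# cools the gas, since T V^(γ-1) stays constant.
def t_free(T1):
    return T1  # no work, no heat: ΔU = 0, so T is unchanged for an ideal gas

def t_slow_adiabatic(T1, V1, V2, gamma):
    return T1 * (V1 / V2) ** (gamma - 1)

T1, gamma = 300.0, 5.0 / 3.0
T_after_free = t_free(T1)
T_after_slow = t_slow_adiabatic(T1, 1.0, 2.0, gamma)  # gas does work, so it cools

assert T_after_free == 300.0 and T_after_slow < 300.0
```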
Fact 46 For example, if we change the frequency of our harmonic oscillator in quantum mechanics slowly, so that the system does not jump between energy levels, that’s an adiabatic process in quantum mechanics.
Example 47 Given a PV diagram, what is the graph of sudden, free expansion?
We start and end on the same isotherm, since the temperature is the same at the beginning and the end of the process. But we can’t describe the gas with a simple equation of state! In fact, we’re not at equilibrium throughout the process, so there is no curve on the PV diagram. After all, the work done $W = \int P\,dV$ has to be zero. In other words, be careful!
6.2 Clickers Let’s talk about the idea of “degrees of freedom.” Molecules can look very different: they can be monatomic, diatomic, or much larger. The degrees of freedom can be broken up into
• center of mass motion,
• rotational motion, and
• vibrational motion.
There are always 3 center of mass degrees of freedom, and let’s try to fill in the rest of the table! (We did this using clickers.)

 | COM | ROT | VIB | total
atom | 3 | 0 | 0 | 3
diatomic | 3 | 2 | 1 | 6
CO2 | 3 | 2 | 4 | 9
H2O | 3 | 3 | 3 | 9
polyatomic | 3 | 3 | 3N − 6 | 3N

Some important notes that come out of this: • There are 2 rotational degrees of freedom for diatomic and straight triatomic molecules: both axes that are not along the line connecting the atoms work. As long as we can distinguish the three directions, though, there are 3 rotational degrees of freedom.
• Here, we count degrees of freedom as normal modes (which is different from 8.223). Recall that in 8.03, we distinguished translational from oscillatory normal modes.
• Water is a triatomic molecule with three modes: the “bending” mode, the “symmetric” stretch, and the “anti-symmetric” stretch.
• Carbon dioxide has 4 modes: the symmetric stretch, the asymmetric stretch, and two bending modes (in both perpendicular axes).
In classical mechanics, if we’re given one particle, we can write 3 differential equations for it: each coordinate gets a Newton’s second law. That’s why we have 3 total degrees of freedom. Similarly, with two particles, we have 6 total degrees of freedom, and the numbers should add up to 3N in general. This lets us make sure we don’t forget any vibrational modes!
6.3 Wait a second...
Notice that this definition of “degrees of freedom” is different from what is mentioned in lecture. Thermodynamic degrees of freedom are a whole different story! Now let’s change to using f , the thermodynamic degrees of freedom.
Recall that we define γ = (f + 2)/f.
f measures the number of quadratic terms in the Hamiltonian, and as we will later rigorously derive, if the modes are thermally populated, each such degree of freedom carries energy kB T/2. But the vibrations count twice, since they have both kinetic and potential energy! We'll also rigorously show this later.
So it's time to add another column to our table:

             COM   ROT   VIB     total   thermodynamic
atom          3     0     0        3          3
diatomic      3     2     1        6          7
CO2           3     2     4        9         13
H2O           3     3     3        9         12
polyatomic    3     3    3N−6     3N        6N−6

and as we derived in lecture, E = f kB T/2 and CV = f kB N/2.
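The table above can be checked mechanically. This is a small sketch (not from the notes) that recomputes the adiabatic index γ = (f + 2)/f for the thermodynamic f values tabulated, using the rule that vibrations count twice:

```python
# Sketch: adiabatic index gamma = (f + 2) / f for the thermodynamic
# degrees of freedom f in the table (vibrational modes count twice,
# since they carry both kinetic and potential energy).
def thermo_f(com, rot, vib):
    return com + rot + 2 * vib

def gamma(f):
    return (f + 2) / f

molecules = {
    "atom":     (3, 0, 0),
    "diatomic": (3, 2, 1),
    "CO2":      (3, 2, 4),
    "H2O":      (3, 3, 3),
}
for name, (com, rot, vib) in molecules.items():
    f = thermo_f(com, rot, vib)
    print(f"{name}: f = {f}, gamma = {gamma(f):.4f}")
```

For a monatomic gas this reproduces γ = 5/3; as f grows, γ approaches 1, which is the point of the clicker question below.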
However, keep in mind that this concept breaks down when we add too much energy and stop having well-defined molecular structure.
Finally, let’s talk a bit about adiabatic and isothermal compression.
Example 48 Let’s say we do an isothermal compression at T1, versus doing an adiabatic compression to temperature T2.
We measure the work it takes to go from an initial volume V1 to a final volume V2 under both compressions. The total work is larger for the adiabatic process, since the “area under the curve is larger,” but why is this true intuitively?
One way to phrase this is that the gas heats up as we compress it adiabatically, so the pressure rises faster and we push against more resistance.
So now let's prepare a box and do the experiment. We find that there is now no difference: why? (Eliminate the answer of bad isolation.)
• The gas was monatomic with no rotational or vibrational degrees of freedom.
• Large molecules were put in with many degrees of freedom.
• The gas of particles had a huge mass.
• This is impossible.
This is because γ = (f + 2)/f ≈ 1 if f is large! Intuitively, the gas is absorbing all the work in its vibrational degrees of freedom instead of actually heating up.
7 February 14, 2019

Remember the pset is due tomorrow night at 9pm. As a reminder, the instructors are only accepting psets on the website: make a pdf file and submit it on LMOD. This minimizes psets getting lost, and it lets TAs and graders make comments directly.
Fact 49 Don’t use the pset boxes. I’m not really sure who put them there.
The solutions will become available soon after.
Today, we’re starting probability theory. The professor uploaded a file with some relevant information, and delta functions (a mathematical tool) will be covered later on as well.
Also, go to the TA’s office hours!
7.1 Review from last lecture

We've been studying thermodynamic systems: we derived an ideal gas law by defining a pressure, temperature, and internal energy of a system. We looked at different processes that allow us to move from one point in phase space (in terms of P, V) to another point.
Thermodynamics came about by combining such motions to form engines, and the question was about efficiency!
First of all, let's review the ideas of specific heat at constant volume and pressure:
• Remember that we discussed an isometric heating setup, where the volume stays constant. We could show that dV = 0 ⟹ ¯dW = 0 ⟹ dU = ¯dQ, which means we actually get access to the change in internal energy (which we normally cannot measure directly). We also found that (¯dQ/dT)|V = (∂U/∂T)|V = CV.
• When we have constant pressure (an isobaric process), we don't quite have dU = ¯dQ, but we wanted to ask whether there exists a quantity H such that dH = ¯dQ|P. The idea is that d(PV) = V dP + P dV, and ¯dQ|P = dU|P + P dV|P; since P is constant, P dV|P = d(PV)|P. So combining all of these, d(U + PV)|P = ¯dQ|P ⟹ H ≡ U + PV.
Definition 50 H is known as the enthalpy.
It is useful in the sense that (¯dQ/dT)|P = (∂H/∂T)|P = CP.
This is seen a lot in chemistry, since many experiments are done at constant pressure!
We can write a general expression that combines those two: CP = CV + [P + (∂U/∂V)|T] (∂V/∂T)|P, and for an ideal gas, this simplifies nicely to CP = CV + N kB.
Last lecture, we also found that CP/CV = (f + 2)/f ≡ γ, which is the adiabatic index.
For a monatomic ideal gas, f = 3 ⟹ γ = 5/3, and for a diatomic ideal gas (counting the vibration), f = 7 ⟹ γ = 9/7.
7.2 Moving on

Fact 51 If you plot the heat capacity CV per molecule as a function of temperature, low temperatures have CV ≈ (3/2) kB (only translational modes), corresponding to γ = 5/3, but this jumps to CV ≈ (5/2) kB ⟹ γ = 7/5 for temperatures between 200 and 1000 Kelvin. Hotter than that, vibrational modes start to come in, and CV increases while γ approaches 1.
In a Carnot engine, we trace out a path along the PV diagram. How can we increase the efficiency?
There are two important principles here: energy is conserved, but entropy is always increasing.
Definition 52 (Unclear) Define the entropy change as ΔS = Q/T.
But this doesn’t give very much physical intuition of what entropy really is: it’s supposed to be some measure of an “ignorance” of our system. Statistical physics is going to help us give an information theoretic definition later: S = −kB⟨ln Pi⟩, which will make sense as we learn about probability theory in the next three or four lectures!
7.3 Why do we need probability?
Almost all laws of thermodynamics are based on observations of macroscopic systems: we’re measuring thermal properties like pressure and temperature, but any system is still inherently made up of atoms and molecules, so the motion is described by more fundamental laws, either classical or quantum.
So we care about likelihoods: how likely is it that particles will be in a particular microscopic state?
7.4 Fundamentals

Definition 53 A random variable x has a set of possible outcomes S = {x1, x2, · · · }.
(This set is not necessarily countable, but I think this is clear from the later discussion.) This random variable can be either discrete or continuous.
Example 54 (Discrete) When we toss a coin, there are two possible outcomes: Scoin = {H, T}.
When we throw a die, Sdie = {1, 2, 3, 4, 5, 6}.
Example 55 (Continuous) We can have some velocity of a particle in a gas dictated by S = {−∞< vx, vy, vz < ∞}.
Definition 56 An event is a subset of some outcomes, and every event is assigned a probability.
Example 57 When we roll a die, here are some probabilities: Pdie({1}) = 1/6, Pdie({1, 3}) = 1/3.
Probabilities satisfy three important conditions: • positivity: any event has a nonnegative real probability P(E) ≥0.
• additivity: Given two events A and B, P(A ∪B) = P(A) + P(B) −P(A ∩B) where A ∪B means “A or B” and A ∩B means “A and B”.
• normalization: P(S) = 1, where S is the set of all outcomes. In other words, all random variables have some outcome.
There are two ways to find probabilities:
• Objective approach: given a random variable, do many trials and measure the result each time. After N trials, let NA be the number of times event A occurred. Then as we repeat this sufficiently many times, P(A) = lim_{N→∞} NA/N.
• Subjective approach: We assign probabilities due to our uncertainty of knowledge about the system.
For example, with a die, we know all six outcomes are possible, and in the absence of any prior knowledge, they should all be equally probable. Thus, P({1}) = 1/6.
We'll basically do the latter: we'll start with very little knowledge and add constraints like "knowledge of the internal energy of the system."

7.5 Continuous random variables

Fact 58 We'll mostly be dealing with these from now on, since they're what we'll mostly encounter in models.
Let’s say we have a random variable x which is real-valued: in other words, SX = {−∞< x < ∞}.
Definition 59 The cumulative probability function for a random variable X, denoted FX(x), is defined as the probability that the outcome is less than or equal to x: FX(x) = Pr[E ⊂[−∞, x]].
Note that FX(−∞) = 0 and FX(∞) = 1, since x is always between −∞and ∞.
Definition 60 The probability density function for a random variable X is defined by pX(x) ≡ dFX/dx. In particular, pX(x) dx = Pr[E ⊂ [x, x + dx]].
It's important to understand that ∫_{−∞}^{∞} pX(x) dx = 1, since this is essentially the total probability over all x.
Fact 61 The units (dimension) of pX(x) are the reciprocal of the units of X.
Note that there is no upper bound on pX; it can even diverge at a point, as long as p is still integrable.
Definition 62 Let the expected value of any function f(x) of a random variable x be ⟨f(x)⟩ = ∫_{−∞}^{∞} f(x) p(x) dx.
As a motivating example, the expected value for a discrete random variable is just ⟨X⟩ = Σ_i pi xi, so this integral is just an "infinite sum" in that sense.
7.6 More statistics

Definition 63 Define the mean of a random variable x to be ⟨x⟩ = ∫_{−∞}^{∞} x p(x) dx.
For example, note that ⟨x −⟨x⟩⟩= 0.
In other words, the difference from the average is 0 on average, which should make sense. But we can make this concept into something useful: Definition 64 Define the variance of a random variable x to be var(x) = ⟨(x −⟨x⟩)2⟩.
This tells us something about the spread of the variable: basically, how far away from the mean are we? Note that we can expand the variance as var(x) = ⟨x² − 2x⟨x⟩ + ⟨x⟩²⟩ = ⟨x²⟩ − ⟨x⟩².
Fact 65 (Sidenote) The reason we square instead of using an absolute value is that the mean is actually the value that minimizes the sum of the squares, while the median minimizes the sum of the absolute values. The absolute value version is called “mean absolute deviation” and is less useful in general.
We’re going to use this idea of variance to define other physical quantities like diffusion later!
Definition 66 Define the standard deviation as σ(x) = √var(x).

With this, define the skewness, a dimensionless measure of asymmetry, as ⟨(x − ⟨x⟩)³⟩/σ³, and the kurtosis, a dimensionless measure of shape (for a given variance), as ⟨(x − ⟨x⟩)⁴⟩/σ⁴. Let's look at a particle physics experiment to get an idea of what's going on: e+e− → µ+µ−.
Due to quantum mechanical effects, there is some probability distribution for θ, the angle of deflection: p(θ) = c sin θ (1 + cos² θ), 0 ≤ θ ≤ π.
To find the constant, we normalize with an integral over the range of θ: 1 = c ∫_0^π sin θ (1 + cos² θ) dθ. We solve this with a u-substitution: letting x = cos θ, 1 = c ∫_{−1}^{1} (1 + x²) dx ⟹ 1 = (8/3) c ⟹ c = 3/8. So our probability density function is p(θ) = (3/8) sin θ (1 + cos² θ).

Fact 67 This has two peaks and is symmetric around π/2. Thus, the mean value of θ is ⟨θ⟩ = π/2, and σ is approximately the distance from the mean to a peak.
We can calculate the standard deviation exactly: ⟨θ²⟩ = (3/8) ∫_0^π θ² sin θ (1 + cos² θ) dθ = π²/2 − 17/9, and therefore var(θ) = ⟨θ²⟩ − ⟨θ⟩² ≈ 0.579 ⟹ σ ≈ 0.76.
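These numbers are easy to verify numerically. The following sketch (not part of the notes) checks the normalization, mean, and variance of p(θ) = (3/8) sin θ (1 + cos² θ) by a simple midpoint-rule quadrature:

```python
# Sketch: numerically check normalization, mean pi/2, and variance
# ~0.579 of the scattering distribution p(theta) on [0, pi].
import math

def p(theta):
    return 3 / 8 * math.sin(theta) * (1 + math.cos(theta) ** 2)

N = 100_000                      # midpoint-rule subdivisions
dt = math.pi / N
thetas = [(i + 0.5) * dt for i in range(N)]

norm = sum(p(t) for t in thetas) * dt
mean = sum(t * p(t) for t in thetas) * dt
second = sum(t * t * p(t) for t in thetas) * dt
var = second - mean ** 2
print(norm, mean, var, math.sqrt(var))
```

The quadrature reproduces ⟨θ²⟩ = π²/2 − 17/9, consistent with var(θ) ≈ 0.579 and σ ≈ 0.76.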
Finally, let's compute the cumulative probability function: F(θ) = ∫_0^θ (3/8) sin θ′ (1 + cos² θ′) dθ′ = (1/8)(4 − 3 cos θ − cos³ θ). This has value 0 at θ = 0, 1/2 at θ = π/2, and 1 at θ = π.
Next time, we will talk about discrete examples and start combining discrete and continuous probability. We'll also start seeing the Gaussian, Poisson, and binomial distributions!
8 February 19, 2019 (Recitation)

A guest professor is teaching this recitation. We're going to discuss delta functions as a mathematical tool, following the supplementary notes on the website.
There are multiple different ways we can represent delta functions, but let's start by considering the following:

Definition 68 Define δε(x) = (1/(√(2π) ε)) exp(−x²/(2ε²)).
This is a Gaussian (bell curve) with a peak at x = 0 and inflection points at ±ε. It also has the important property ∫_{−∞}^{∞} δε(x) dx = 1, so it is already normalized. This can be shown using the trick I = ∫_{−∞}^{∞} dx e^{−αx²} ⟹ I² = ∫∫ dx dy e^{−α(x²+y²)}; switching to polar coordinates, since dx dy = r dr dθ, I² = ∫_0^{2π} dθ ∫_0^∞ dr r e^{−αr²} = π/α by a u-substitution. So δε is a function with area 1 regardless of the choice of ε. However, ε controls the width of our function! So if ε goes down, the peak at x = 0 gets larger and larger: in particular, δε(0) = 1/(√(2π) ε) goes to ∞ as ε → 0.
So we have a family of such functions, and our real question is now what we can do under an integral. What is ∫_{−∞}^{∞} dx δε(x) f(x)?
For a specific function and value of ε, this may not be a question you can answer easily. But the point is that if we put an arbitrary function in for f , we don’t necessarily know how to do the integration. What can we do?
Well, let's think about taking ε → 0. Far away from x = 0, δε(x) f(x) is essentially zero. If we make δε extremely narrow, we get a sharp peak at x = 0: zooming in, f is essentially constant on that peak, so we're basically dealing with ∫ dx f(0) e^{−x²/(2ε²)}/(√(2π) ε) = f(0) · 1 = f(0).
So the idea is that we start with a particular family of functions and take ε → 0, and this means that δε is a pretty good first attempt at a "sharp peak."

Definition 69 Let the Dirac delta function δ satisfy the conditions
• δ(x − x0) = 0 for all x ≠ x0.
• ∫_{−∞}^{∞} dx δ(x − x0) = 1, where the integral can be over any range containing x0.
• ∫ dx δ(x − x0) f(x) = f(x0), again as an integral over any range containing x0.
This seems pretty silly: if we already know f (x), why do we need its evaluation at a specific point by integrating?
We’re just evaluating the function at x = 0. It’s not at all clear why this is even useful. Well, the idea is that it’s often easier to write down integrals in terms of the delta function, and we’ll see examples of how it’s useful later on.
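One way to build that confidence is to check the sifting property numerically with the Gaussian family δε from Definition 68. This sketch (my illustration, with a hypothetical test function f = cos) integrates a very narrow δε against f and recovers f(x0):

```python
# Sketch: the Gaussian family delta_eps(x) = exp(-x^2/(2 eps^2)) / (sqrt(2 pi) eps)
# has area 1, and integrating it against a smooth f picks out f(x0) as eps -> 0.
import math

def delta_eps(x, eps):
    return math.exp(-x * x / (2 * eps * eps)) / (math.sqrt(2 * math.pi) * eps)

def integrate(g, a, b, n=200_000):
    # simple midpoint rule
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = math.cos                     # hypothetical test function
x0, eps = 0.5, 1e-3
area = integrate(lambda x: delta_eps(x - x0, eps), -2, 2)
sift = integrate(lambda x: delta_eps(x - x0, eps) * f(x), -2, 2)
print(area, sift, f(x0))         # area ~ 1, sift ~ f(x0)
```

Shrinking eps further only sharpens the agreement, which is exactly the limiting definition of δ.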
For now, let's keep looking at some more complicated applications of the delta function. What if we have something like ∫ dx δ(g(x)) f(x)?
We know formally what it means to replace δ(x) with δ(x − x0), but if we have a function g with multiple zeros, we could have many peaks: what does that really mean, and how tall are the peaks? This is useful because we could find the probability that g(x, y, z) = c by integrating ∫ p(x, y, z) δ(g(x, y, z) − c) dx dy dz, and this answer is not quite obvious yet. So we're going to have to build this up step by step.
Let's start in a simpler case. What's ∫ dx f(x) δ(cx)?
We can do a change of variables, but let's not rush to that. Note that δε(−x) = δε(x), and similarly δ(x) = δ(−x): this is an even function. So substituting y = cx, ∫ dx f(x) δ(cx) = ∫ dy (1/|c|) f(y/c) δ(y) = (1/|c|) f(0).
So we get back f (0), just with some extra constant factor. Be careful with the changing integration limits, both in this example and in general: that’s why we have the absolute value in the denominator. In general, the delta function “counts” things, so we have to make sure we don’t make bad mistakes with the sign!
Similar to the above, we can deduce that linear arguments give nice results of the form ∫ dx δ(cx − a) f(x) = (1/|c|) f(a/c).
But this is all we need! Remember that we only care about δ when the value is very close to 0. So often, we can just make a linear approximation! f (x) looks linear in the vicinity of x0, and there’s a δ peak at x0. So if we make the Taylor expansion f (x) ≈f (x0) + f ′(x0)(x −x0), we have found everything relevant to the function that we need.
Note that by definition, δ(g(x)) = 0 whenever g(x) ̸= 0. Meanwhile, if g(xi) = 0, g(x) ≈g(xi) + g′(xi)(x −xi) = g′(xi)(x −xi).
So that means we can treat δ(g(x)) = Σ_i δ(g′(xi)(x − xi)), where we are summing over all zeros xi of the function!
Fact 70 Remember that δ is a nonnegative function, so δ(g(x)) cannot be negative either.
Well, we just figured out how to deal with δ(g(x)) where g is locally linear! So ∫ dx f(x) δ(g(x)) = Σ_i f(xi)/|g′(xi)|.
So at each point xi where g is 0, we just take f (xi) and modify it by a constant. Now this function is starting to look a lot less nontrivial, and we’ll use it to do a lot of calculations over the next few weeks.
Example 71 Let’s say you want to do a “semi-classical density of states calculation” to find the number of ways to have a particle at a certain energy level. Normally, we’d do a discrete summation, but what if we’re lazy?
Then in the classical case, if u is the velocity, we have an expression of the form f(E) = ∫ p(u) δ(E − mu²/2) du.
To evaluate this, note that the argument of the δ function vanishes at u± = ±√(2E/m), and the derivative is g′(u) = −mu ⟹ |g′(u±)| = √(2mE), so the expression is just equal to f(E) = (p(u+) + p(u−))/√(2mE).
This is currently a one-dimensional problem, so there are only two values of u. In higher dimensions, we might be looking at something like ∫ d³u p(u) δ(E − m(ux² + uy² + uz²)/2).
Now the zeroes lie on a sphere, and now we have to integrate over a whole surface!
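The one-dimensional formula above can be spot-checked numerically by smearing the delta into a narrow Gaussian δε. This sketch uses a hypothetical smooth density p(u) = e^{−u²} (my choice, not from the notes) and compares the integral against (p(u+) + p(u−))/√(2mE):

```python
# Sketch check: integral of p(u) * delta(E - m u^2/2) du equals
# (p(u+) + p(u-)) / sqrt(2 m E), with u± = ±sqrt(2E/m).
import math

m, E = 2.0, 1.0

def p(u):                         # hypothetical smooth test density
    return math.exp(-u * u)

def delta_eps(x, eps=1e-3):       # narrow Gaussian stand-in for delta
    return math.exp(-x * x / (2 * eps * eps)) / (math.sqrt(2 * math.pi) * eps)

def integrate(g, a, b, n=400_000):
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

lhs = integrate(lambda u: p(u) * delta_eps(E - m * u * u / 2), -4, 4)
u_plus = math.sqrt(2 * E / m)
rhs = (p(u_plus) + p(-u_plus)) / math.sqrt(2 * m * E)
print(lhs, rhs)                   # should agree closely
```

The 1/|g′(u±)| = 1/√(2mE) factor is exactly what makes the two sides match.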
By the way, there are different ways to formulate the delta function. There also exists a Fourier representation δ(x) = (1/(2π)) ∫_{−∞}^{∞} dk e^{ikx}.
It’s not obvious why this behaves like the delta function, but remember eikx is a complex number of unit magnitude.
Really, we care about Z δ(x)f (x)dx, and the point is that for any choice of x other than 0, we just get a spinning unit arrow that gives net zero contribution.
But if x = 0, eikx is just 1, so this starts to blow up just like the δε function.
There also exist a Lorentzian representation δε(x) = (1/π) · ε/(x² + ε²) and an exponential representation δε(x) = (1/(2ε)) e^{−|x|/ε}.
The point is that there are many different families of functions that capture the intended effect (each integrates to 1), and as all of them get sharper, they end up having very similar properties for the important purposes of the delta function.
9 February 20, 2019 (Recitation)

It is a good idea to talk about the concept of an exact differential again, and also to look over delta functions.
9.1 Questions

We often specify a system (like a box) with a temperature, volume, and pressure. We do work on the system when the volume is reduced, so dW = −P dV.
When we have a dielectric with separated charges, we can orient the dipoles and get a dipole energy ∝⃗ E · ⃗ p, where ⃗ p is the dipole moment. We now have to be careful if we want to call this potential energy: are we talking about the energy of the whole system, the external field, or something else?
Well, the differential energy can be written as dW = EdP: how can we interpret this? Much like with −PdV , when the polarization of the material changes, the electrostatic potential energy changes as well.
So now, what's the equation of state for an electrostatic system? Can we find an equation like PV = nRT that gives E(T, P)? Importantly, note that some analogies break down: the electric field E is sort of necessary (from the outside) to get a polarization P.
So if we consider an exact differential, and we're given (∂E/∂P)|T = T/(AT + B) and (∂E/∂T)|P = BP/(AT + B)², we know everything we could want to know about the system. First, we should show that these do define an equation of state: is dE = [T/(AT + B)] dP + [BP/(AT + B)²] dT an exact differential? Well, we just check whether ∂/∂T [T/(AT + B)] = ∂/∂P [BP/(AT + B)²].
Once we do this, we can integrate (∂E/∂P)|T with respect to P to get E up to an unknown function of T, and then differentiate with respect to T to pin down that function.
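The exactness condition can be verified without any algebra by comparing the two mixed partials numerically. This sketch (arbitrary constants A, B chosen for the check) uses central finite differences on M = T/(AT + B) and N = BP/(AT + B)²:

```python
# Sketch: check exactness of dE = M dP + N dT, where
# M = T/(A T + B) and N = B P/(A T + B)^2, by comparing
# dM/dT with dN/dP via central differences.
A, B = 1.3, 0.7                 # arbitrary constants for the check

def M(T, P):
    return T / (A * T + B)

def N(T, P):
    return B * P / (A * T + B) ** 2

def dMdT(T, P, h=1e-6):
    return (M(T + h, P) - M(T - h, P)) / (2 * h)

def dNdP(T, P, h=1e-6):
    return (N(T, P + h) - N(T, P - h)) / (2 * h)

T0, P0 = 2.0, 5.0
print(dMdT(T0, P0), dNdP(T0, P0))   # both equal B/(A T + B)^2
```

Both partials come out to B/(AT + B)², so the differential is exact and an energy function E(T, P) exists.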
Next, let’s talk a bit more about mean and variance. We can either have a set of possible (enumerable, discrete) events {pi}, or we could have a probability density p(x). The idea with the density function is that p(x)dx = Pr[in the range [x, x + dx]].
Remember that probabilities must obey a normalization, which means that Σ_n pn = 1 or ∫ p(x) dx = 1.
What does it mean to have an average value? In the discrete case, we find the average as ⟨n⟩ = Σ_i i pi, since an outcome of i with probability pi should be counted pi of the time. Similarly, the continuous case just uses an integral: ⟨x⟩ = ∫ x p(x) dx.
Note that we can replace n and x with arbitrary functions of n and x. Powers of n and x are called moments, so the mean value is the "first moment." (This is a lot like taking the second "moment of inertia" ∫ r² ρ(r) dr.) Then the variance is the average value ⟨(n − ⟨n⟩)²⟩ = ⟨n² − 2n⟨n⟩ + ⟨n⟩²⟩ = ⟨n²⟩ − ⟨n⟩².
Let’s look at another situation where we have a probability density dP dw = p(w).
9.2 Delta functions

Here's a question: what is the derivative δ′(x)?
Well, let's start with some related ideas: δ(10x) = (1/10) δ(x), and this might make us cringe a bit since δ is mostly infinite, but it works for all the purposes we use it for. The important idea is to not always think about infinity: we could consider the delta function to be a rectangle of width ε and height 1/ε. This makes it seem more like a real function.
So now, if we take any function f(x), f(10x) is just a function that keeps the maximum constant but shrinks the width by a factor of 10. If we integrate over f(10x), we'll get a factor of 1/10 less, and that's how we should understand δ(10x).
Is δ′(x) defined, then? Let's say we have a triangle function with peak 1/ε, supported from −ε to ε. The derivative of this function is not defined at x = 0! So δ′(x) doesn't necessarily need to be defined. The idea is that we can take ε → 0 using any representation by real functions, and a derivative would have to be well-defined across all representations: that just doesn't happen here.
Curiously, though, if we use the triangle function, we can actually represent the derivative in terms of delta functions, because the derivative is a large constant on (−ε, 0) and on (0, ε): each piece is a rectangle of width ε and height 1/ε², i.e. area 1/ε, so it acts like (1/ε) times a delta function centered at ∓ε/2. So it seems that δ′(x) = −(1/ε)[δ(x − ε/2) − δ(x + ε/2)].
It's okay, though: delta functions always appear under an integral. (So continuity is important, but not necessarily differentiability.) This means that if we integrate this against a function f(x), ∫ f(x) δ′(x) dx = −(1/ε) ∫ [δ(x − ε/2) − δ(x + ε/2)] f(x) dx = −(1/ε)[f(ε/2) − f(−ε/2)] = −f′(0).
But the idea is that we want to be faster at manipulating such things. What if we integrate by parts? Then ∫ f(x) δ′(x) dx = −∫ f′(x) δ(x) dx + f(x) δ(x)|_{−∞}^{∞}.
The delta function vanishes at infinity, so the boundary term disappears (unless we have a badly-decaying Lorentzian or other representation of the delta function). But now this just gives −∫ f′(x) δ(x) dx = −f′(0).
This is maybe how we should use delta functions, but it’s still important to have confidence that what we’re doing is correct!
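One way to get that confidence is a numerical check of the identity ∫ f(x) δ′(x) dx = −f′(0), again using the smooth Gaussian family whose derivative we can write down explicitly. A sketch, with a hypothetical test function f = sin (so f′(0) = 1):

```python
# Sketch: integrating f(x) against the derivative of a narrow Gaussian
# delta_eps reproduces -f'(0) as eps -> 0.
import math

def ddelta_eps(x, eps):
    # d/dx of exp(-x^2/(2 eps^2)) / (sqrt(2 pi) eps)
    return -x / eps**2 * math.exp(-x * x / (2 * eps * eps)) / (math.sqrt(2 * math.pi) * eps)

def integrate(g, a, b, n=200_000):
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = math.sin                       # hypothetical test function, f'(0) = 1
eps = 1e-2
val = integrate(lambda x: f(x) * ddelta_eps(x, eps), -1, 1)
print(val)                         # close to -f'(0) = -1
```

This is the same answer integration by parts gives in one line, which is the point of the manipulation above.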
Finally, let’s ask one more question. We can store energy by pressing air into an underground cave, and we can do that in two ways: adiabatic and isothermal. If we compare the two situations, where does the energy go?
In the isothermal case, the internal energy is the same. So isothermal compression is just transferring the energy as heat to the surrounding ground! Is there a way to retrieve it? (Hint: the process may be reversible if we do it slow enough!) 10 February 21, 2019 Today we’re going to continue learning about probability. As a reminder, we’re learning probability because there’s a bottom-up and top-down approach to statistical physics: thermodynamics gives us state functions that tell us physical properties of the world around us, and we can connect those with microscopic atoms and molecules that actually form the system. Probability allows us to not just follow every particle: we can just think about general distributions instead!
We’ll discuss some important distributions today and start our connections between probability distributions and physical quantities. We’ll eventually get to the Central Limit Theorem!
10.1 A discrete random variable Consider a weighted coin such that P(H) = 5 8, P(T) = 3 8.
This is a “biased distribution.” Let’s say that every time we get a head, we gain $1, and every time we get a tail, we lose $1. Letting x be our net money, our discrete probability distribution P(x) satisfies P(1) = 5 8, P(−1) = 3 8.
Here are some important statistics: • P(1) + P(−1) = 1.
• ⟨x⟩= P(1) · 1 + P(−1) · (−1) = 5 8 −3 8 = 1 4.
• ⟨x2⟩= P(1)12 + P(−1)12 = 1, so the variance of this distribution is var(x) = ⟨x2⟩−⟨x⟩2 = 15 16.
34 This is a large variance, since most events are far away from the mean!
• To find the probability density function p(x), we can use delta functions: p(x) = 5 8δ(x −1) + 3 8δ(x + 1) Note that if we integrate p(x)dx over any interval containing 1 but not −1, we get a probability of 5 8, which is what we want.
In general, if we have a probability density function that has both discrete and continuous parts, we can write it as p(x) = f (x) + M X j=1 piδ(x −xj).
See the Weibull distribution, as well as the Xenon lamp spectrum!
10.2 Important probability distributions We’re going to discuss the Gaussian, Poisson, and Binomial distributions. The idea is that the limit of a Binomial distribution will converge to a Poisson distribution, which will then converge to a Gaussian. We’ll see the second part today as part of the Central Limit Theorem!
10.3 Gaussian distribution Definition 72 The probability density function for a Gaussian with standard deviation σ is p(x) = 1 √ 2πσ exp −(x −a)2 2σ2 .
This distribution has mean a and variance σ2; let’s check for normalization to make sure this is indeed a valid distribution. So Z ∞ −∞ p(x)dx = 1 √ 2πσ Z ∞ −∞ exp −(x −a)2 2σ2 , and now defining y ≡x −a √ 2σ = ⇒dy = 1 √ 2σ dx.
Substituting in, we end up with 1 √ 2πσ · √ 2σ Z ∞ −∞ e−y 2dy.
But the integral is known to be √π (by the polar substitution trick), and therefore our integral of p(x) is 1, and indeed we have a normalized distribution.
So the cumulative distribution function is F(x) = Z x −∞ dξ 1 √ 2πσ exp −(ξ −a)2 2σ2 .
35 Denoting ξ−a √ 2σ = z as before, we can substitute again to find F(x) = 1 √π Z (x−a)/( √ 2σ) −∞ dze−z2.
Since our probability function is normalized, we can also write this as F(x) = 1 − 1 √π Z ∞ (x−a)/( √ 2σ) dze−z2 Let erfc, the complementary error function, be defined as erfc(η) = 2 √π Z ∞ η dze−z2.
So that means our cumulative distribution can be written as F(x) = 1 −1 2 erfc x −a √ 2σ .
It’s important to note that the shape of p(x) is the familiar “bell curve” shape. When a = 0 (so the curve is centered at x = 0), p(0) = 1 √ 2πσ and we can also compute that P(σ) ≈0.61P(0), P(2σ) ≈0.135P(0).
What exactly is the significance of σ? Well, for any a and σ in the Gaussian, Z a+σ a−σ p(x)dx ≈0.68, meaning that about 68% of the time, a random sample from the Gaussian distribution will be within σ of the mean.
Similarly, Z a+2σ a−2σ p(x)dx ≈0.95, and that means that 95 percent of all measurements are within 2σ of the mean.
Example 73 Consider a measurement of the magnetic moment of a muon m = geℏ 2Mµc .
At first, we expect that g ≈2 theoretically. However, after many measurements, we get a Gaussian distribution for (g −2)µ: (g −2)µ = (116591802 ± 49) × 10−11, where the first term is the mean and the part after the ± is σ. Theoretical calculations actually end up giving (g −2)µ = (116592089 ± 63) × 10−11, and these distributions are actually significantly different: the discrepancy is still a point of ongoing research! The idea is that this measurement of σ allows us to compare two different distributions.
36 10.4 Poisson distribution For random variables X and Y , if they have probability distributions p(x) and p(y), then X, Y statistically independent = ⇒p(x, y) = p(x)p(y).
Let’s start with an example. Given a random student, the probability that a student is born in May is 31 365.25 ≈0.0849.
Meanwhile, the probability of being born between 9 and 10 in the morning is 1 24 ≈0.0417. So the probability of being born between 9 and 10 in the morning in May is 0.0849 × 0.0417 = 3.54 × 10−3.
We need this to introduce the idea of a Poisson distribution! This is important for rare events with low probability.
Here are two important ideas: • The probability of an event happening exactly once in the interval [t, t + dt] is proportional to dt as dt →0: dp = λdt for some λ.
• The probability of events in different events are independent of each other.
If we have these two conditions satisfied, the idea is that we can subdivide a time interval of length T into small intervals of length dt. In each interval, the probability that we observe an event is equal and independent to all the other ones!
Definition 74 Then the probability that we observe a total of exactly n events in an interval time T is given by the Poisson distribution Pn(T).
Let’s try to compute pn. We break T into N bins of length dt, so dt = T N , in such a way that the probability of getting two events in the same bin (small time interval) is negligible. Then dP = λdt = λT N ≪1.
To compute the probability of finding n events, first we find the probability of computing no events, then 1 event, and so on. Note that the probability of observing no event in an interval dt is 1 −λT N , so the probability of observing no events overall is lim N→∞ 1 −λT N N since we must have no event observed in all N intervals. By definition of an exponential, this is just P0(T) = e−λT .
Next, find the probability of observing exactly 1 event in time interval T. There are N different places in which this one event can happen, and the probability that it happens is λT N . Then the other N −1 intervals must have no event happen, so this is P1(T) = lim N→∞N · λT N 1 −λT N N−1 = λT lim N→∞ 1 −λT N N−1 .
This gives, again by the definition of an exponential, P1(T) = λTe−λT .
37 Let’s do another example: what about two events? We pick which two intervals are chosen, and then Pk(T) = lim N→∞ N 2 · λT N 2 1 −λT N N−2 = (λT)2 2 e−λT .
In general, the probability of k events happening is going to be Pk(T) = lim N→∞ N k · λT N k 1 −λT N N−k = ⇒Pk(t) = (λT)k k!
e−λT .
It’s important to note that this is a discrete distribution!
Let’s check some statistics for our probability distribution function. First of all, is it normalized? Well, ∞ X n=0 pn(T) = e−λT ∞ X n=0 (λT)n n!
= e−λT eλT = 1, so the Poisson distribution is indeed normalized.
Next, let’s find the mean: ⟨n⟩= ∞ X n=0 npn(T) = e−λT ∞ X n=0 n(λT)n n!
Denoting Z ≡λT, this expression can be written as ⟨n⟩= e−Z ∞ X n=0 nZn n!
= Ze−Z ∞ X n=1 zn−1 (n −1)! = e−ZZ · eZ = Z.
So the mean of the Poisson distribution is ⟨n⟩= λT, which shouldn’t be that surprising: it’s saying that if events have a probability 1 N of happening, they happen on average once per N.
Finally, let’s find the variance: we’ll leave this as an exercise, but the idea is to start by computing ⟨n(n −1)⟩= ⟨n2 −n⟩.
It turns out the variance is also λT, and this is an interesting relationship between the mean and variance! We’ll introduce a dimensionless quantity σ(n) ⟨n⟩ which meausures the width of the distribution. Well, note that as T →∞for the Poisson distribution, this goes to 0, so the distribution becomes more and more spiked around λT. It turns out that this approaches a Gaussian distribution! How can we check that?
Taking T →∞, λT ≫1 = ⇒n ≫1, and we want to find the probability Pn(λT). Denoting λT ≡Z, we expand around the maximum, and we’re going to look at the log of the function. By Stirling’s approximation, ln n! ∼n ln n −n + ln(2πn) as n →∞, so Pn(Z) = Zn n! e−Z = ⇒ln Pn(Z) = n ln Z −Z −ln n! ≈n ln Z −Z −n ln n + n −1 2 ln 2πn.
The maximum of this function can be found by taking the derivative, ∂ ∂n ln Pn = ln Z −ln n −1 2n, 38 and ignoring the 1 2n term, we can say that n0 = Z at the maximum. Doing a Taylor expansion about n0, ln Pn(Z) = ln pn0(z) + (n −n0)2 2 ∂2 ∂n2 Pn(z) n=n0 , and taking the exponential of both sides, we find that Pn(z) = 1 √ 2πz exp −(n −z)2 2z which is a Gaussian with standard deviation √z and mean z, as desired! This is our first instance of the Central Limit Theorem, a powerful tool to deal with large numbers.
11 February 25, 2019 (Recitation) Professor Ketterle was writing a paper on the new definition of the kilogram. Let’s spend a few minutes talking about that!
11.1 The kilogram There’s a law that is taking effect in May. Currently, the kilogram is a physical object in France: it’s what weighs as much as the “original Paris kilogram.” There are copies around the world.
But artifacts like this have discrepancies! There might be diffusion of atoms or fingerprints, so at the microgram level, there are still deviations. This wasn’t a problem, but now people have determined Planck’s constant with an error of ∆h h ≈10−8, and the error is limited from the mass deviation! So this is pretty inconvenient.
Fact 75 So instead, why not define h to be 6.62 · · · × 10−34 Joule-seconds?
Now we’ve turned it around: mass is now determined in terms of h, instead of the other way around!
Question 76. Why exactly does h determine mass?
Since E = hν, we can use the frequency of the transition in Cesium, 9.1 · · · × 10⁹ Hertz (which now defines the units of time and frequency). With this, we can measure energy (as either kinetic or rest energy).
The idea is that all constants are now defined in terms of c, the speed of light, and h, Planck’s constant!
So more precisely, we start with the frequency of Cesium, and we define the second so that this frequency is exactly 9.1 · · · GHz. But this means that if we take our defined value of h, the mass equivalent of a photon at this frequency is m = hν_Cs/c².
How do we measure the mass of a photon? Well, a Cesium atom in the upper state is slightly heavier than a Cesium atom in the lower state! That gives us the change in mass ∆m, which is on the order of 10−40 kilograms.
Fact 77 When a system loses energy, it often loses mass!
It’s a tiny effect, but it’s important for special relativity. So we can now set up a balance, where 10⁴⁰ Cesium atoms are prepared in the “upper state.” On the other side, we have the same number of Cesium atoms, but in the ground state. Then any substance that balances the scale is exactly one kilogram!
11.2 Poisson distribution Think of the Poisson distribution as modeling a bunch of atoms radiating with some decay rate. If N is the number of particles, λ is the decay rate, and we observe for a time dt, we have an expectation ⟨n⟩ = Nλ dt for the number of observed events in time dt. We can do combinatorics to find that
p_n = (⟨n⟩^n/n!) e^{−⟨n⟩}.
This is a prettier way to write the Poisson distribution, and it shows that the whole distribution depends only on the value of ⟨n⟩.
Based on this, let’s consider the concept of shot noise, which comes from us trying to count something that is random.
When we have a random event like raindrops falling on a roof, you hear a random noise with fluctuations. This is because rain doesn’t come as a stream: it’s big droplets. So the shot noise comes from the fact that we have a stream of mass (or radioactively decaying particles) that are quantized. So sometimes we have a few more or a few less than expected.
So if we are trying to observe 100 events, the expectation value is 100 ± 10.
This is because the variance is var(n) = ⟨n⟩, and therefore σ, the standard deviation, is √⟨n⟩.
So it’s important to remember that ±√n idea!
Basically, the Poisson distribution looks almost normal, and the inflection points occur around √⟨n⟩ from the mean.
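This ±√⟨n⟩ behaviour is easy to see numerically (a sketch only; the rate and window length are arbitrary choices): counting random events with expectation 100 per window gives fluctuations of about 10.

```python
import math
import random

random.seed(1)
rate, T = 1.0, 100.0  # expect <n> = rate * T = 100 events per window

def count_events(rate, T):
    # count events in [0, T] by accumulating exponential waiting times
    t = random.expovariate(rate)
    n = 0
    while t < T:
        n += 1
        t += random.expovariate(rate)
    return n

windows = [count_events(rate, T) for _ in range(4000)]
mean = sum(windows) / len(windows)
sigma = math.sqrt(sum((c - mean) ** 2 for c in windows) / len(windows))
print(mean, sigma)  # roughly 100 and 10
```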
This leads to the next question: as you observe larger expectations, is the shot noise larger or smaller? Well, if we’re doing repeated measurements to determine some quantity, our precision goes as σ(n)/⟨n⟩ ∝ 1/√n.
So if we measure 10 times longer, we are √ 10 times more accurate. But this is often the best we can do! Much along the same line, we want to do experiments with large counting rate to get higher expectations.
So if we measure the noise in an electric current, the discreteness of the electrons passing through gives shot noise as well. We have iδt = n_e·e, where n_e is the number of electrons and e is the electron charge. Since n_e fluctuates by √n_e, we can experimentally measure the charge of the carriers! People then did experiments with superconductors, which also carry current (with no dissipation), and found that the noise in n_e was different. This is because the carrier charge q is now 2e: superconductivity happens when electrons combine into pairs. When the current carriers are Cooper pairs instead of electrons, we only need half as many to get the same current, and this makes the fluctuations larger! This was the first proof that the charge carriers in superconductors are not single electrons.
But here’s a real life MIT example for shot noise and how to measure it. Let’s say we have 106 people on Killian Court, and let’s say there are a few exits that people randomly leave through. Through one exit, there’s 104 people who leave per unit time, and by shot noise, there’s a fluctuation of ±100 people.
But now let’s say that people leave in groups of 100 instead: now 10² groups leave per unit time, so the shot noise is just 10 groups. This means the actual absolute fluctuation is 10³ people, which is larger than the original!
So if the carrier of charge, or the unit size of people, is increased by a factor of N, the shot noise is increased by a factor of √ N.
11.3 Stirling’s formula How do you memorize the formula? If we write n! = n · (n − 1) · · · 2 · 1, we know nⁿ is a bad (over)estimate, and (n/2)ⁿ would be a bit closer. And because we’ll be taking logarithms, e is a convenient number to use. So instead,
n! ≈ (n/e)ⁿ.
This gives log n! ≈ n(log n − log e) = n ln n − n.
There’s another term of ln √(2πn), but when n is large (as it is in this class), that’s many orders of magnitude smaller than n ln n − n, so we can neglect it.
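The sizes of the terms can be checked directly (a quick sketch; n = 10⁶ is an arbitrary large value): the ln √(2πn) correction is indeed negligible at the relative level, and restoring it makes the approximation nearly exact.

```python
import math

n = 10 ** 6
exact = math.lgamma(n + 1)                    # ln(n!) computed exactly
approx = n * math.log(n) - n                  # the n ln n - n estimate from (n/e)^n
correction = 0.5 * math.log(2 * math.pi * n)  # the neglected ln sqrt(2 pi n) term

rel_err = abs(exact - approx) / exact         # tiny: the correction is sub-leading
resid = abs(exact - (approx + correction))    # almost exact once corrected
print(rel_err, resid)
```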
12 February 26, 2019 Remember that we have been introducing probability distributions. We found that the Poisson distribution converges to a Gaussian as the frequency of events becomes larger, and this was an important example of the Central Limit Theorem.
Today, we’ll talk about the binomial distribution and connect it to the idea of diffusion.
Finally, we’ll discuss conditional probability and figure out ideas like “energy given some other knowledge about the system.”
12.1 Binomial distribution Consider a random variable with two possible outcomes A and B, occurring with probabilities p_A and p_B = 1 − p_A. Our goal is to find out how many times A occurs if we repeat the experiment N times. This can be calculated as
P_N(N_A) = (N choose N_A) p_A^{N_A} (1 − p_A)^{N − N_A}.
The first factor, (N choose N_A) = N!/(N_A!(N − N_A)!), counts the number of ways in which we can choose which of the events are A, and the rest is just the probabilities of A and B multiplied the relevant number of times.
Is this normalized? By the Binomial theorem,
∑_{N_A=0}^N P_N(N_A) = ∑_{N_A=0}^N (N choose N_A) p_A^{N_A} (1 − p_A)^{N − N_A} = (p_A + (1 − p_A))^N = 1,
as desired. We can also find some other statistics by doing mathematical manipulation:
⟨N_A⟩ = N p_A, var(N_A) = N p_A p_B.
Then the ratio of the standard deviation to the mean of N_A is
σ(N_A)/⟨N_A⟩ = √(N p_A(1 − p_A))/(N p_A) = (1/√N) √((1 − p_A)/p_A),
so as N → ∞, the distribution becomes narrower and narrower.
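These statistics can be verified directly from the formula (a sketch; N = 50 and p_A = 0.3 are arbitrary):

```python
import math

def binomial_pmf(N, k, p):
    # P_N(k) = C(N, k) p^k (1 - p)^{N - k}
    return math.comb(N, k) * p ** k * (1 - p) ** (N - k)

N, pA = 50, 0.3
pmf = [binomial_pmf(N, k, pA) for k in range(N + 1)]
total = sum(pmf)                                     # normalization: 1
mean = sum(k * q for k, q in enumerate(pmf))         # N * pA = 15
var = sum((k - mean) ** 2 * q for k, q in enumerate(pmf))  # N * pA * (1 - pA) = 10.5
print(total, mean, var)
```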
One question we should be asking ourselves: how is it possible for physics to have simple equations that explain the complexity of the world? This shows the beauty of statistical mechanics: we can explain the world by just using probability.
12.2 An application Fact 78 We’re going to use the binomial distribution to derive a diffusion equation. A random process is sometimes a random walk, and we can use the diffusion equation to understand that random walk!
Definition 79 A random walk is a path of successive steps in random directions in some space.
These describe many physical phenomena, like collisions of particles, shapes of polymers, and so on. For example, DNA is often curled up in a coil, and its shape is usually described by a random variable with mean zero!
There are two kinds of emergent behavior for random walks.
• Given any individual random walk, after a large number of steps, it becomes a fractal (scale invariant). We won’t be talking much about this in class though.
• The endpoint of a random walk has a probability distribution that obeys a simple continuum law, which leads to the diffusion equation!
The idea is that these phenomena are global, so they are independent of the microscopic details of the system.
Example 80 Consider a random walk in one dimension: this is also known as Brownian motion. Say the walker moves left or right along a line with step size ℓ, with probabilities P(+ℓ) = P(−ℓ) = 1/2.
First of all, we want to find the average displacement after N steps. Well, ⟨∆x_i⟩ = (1/2)(ℓ) + (1/2)(−ℓ) = 0, so the average is 0 after any number of steps. On the other hand, we can consider the mean squared displacement: each step has
⟨∆x_i²⟩ = (1/2)(+ℓ)² + (1/2)(−ℓ)² = ℓ²,
and the mean square displacement after N steps is
⟨∆x²⟩ = ∑_{i=1}^N ∑_{j=1}^N ⟨∆x_i ∆x_j⟩,
where all cross terms ∆x_i ∆x_j with i ≠ j contribute 0 by independence. This means that
⟨∆x²⟩ = ∑_{i=1}^N ⟨∆x_i²⟩ = Nℓ².
Fact 81 We could also use the fact that variances of independent variables add! So since each step has variance ℓ2, the total sum has variance Nℓ2.
So if successive jumps happen every δt, the number of jumps in a time t is N = t/δt ⟹ ⟨∆x²⟩ = (ℓ²/δt) t.
This is important: the variance scales linearly with time! In comparison, if our random walk has some average velocity, ∆x(t) = vt ⟹ ⟨∆x²⟩ = ⟨v²⟩t², which is called ballistic motion. In more advanced statistics, this is the setup for the fluctuation-dissipation theorem!
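A simulation of the walk (a sketch; ℓ = 1, N = 200 and the trial count are arbitrary) confirms ⟨∆x²⟩ = Nℓ²:

```python
import random

random.seed(2)
ell, N, trials = 1.0, 200, 5000

def displacement(N, ell):
    # sum of N independent +/- ell steps, each with probability 1/2
    return sum(random.choice((ell, -ell)) for _ in range(N))

msd = sum(displacement(N, ell) ** 2 for _ in range(trials)) / trials
print(msd)  # close to N * ell^2 = 200
```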
But what’s the main physics of what we’re working on: where is the randomness of our process coming from?
• The existence of a randomly fluctuating force will push a particle in random directions.
• There is some inertia of the system, as well as a viscous drag.
Our goal is to compute the probability distribution of finding a particle x away from the original position after N steps. If we denote NL to be the number of steps to the left and NR the number of steps to the right (so N = NL+NR), then the net displacement of the walk is x = ℓ(NR −NL).
Question 82. How many distinct walks are possible if we give ourselves N steps, NL of which are to the left and NR of which are to the right?
This is just (N choose N_L) = N!/(N_L! N_R!).
In total, since each move can be to the left or to the right, there are 2^N distinct walks of N steps, and the probability of any particular sequence is 1/2^N.
Fact 83 It’s important to note that sequences each have equal probability, but x, the net distance, is not uniformly distributed.
So the probability of having a walk of net length x is
p(x, N) = (N!/(N_R! N_L!)) (1/2)^N,
which is the number of sequences times the probability of any given sequence.
12.3 In the limiting case We claim that this becomes a Gaussian as N becomes large. Indeed, note that we’ve defined x = ℓ(N_R − N_L), so
N_L = (N − x/ℓ)/2, N_R = (N + x/ℓ)/2.
Substituting these in, we can then use Stirling’s approximation ln n! ≈ n ln n − n + ½ ln(2πn). This yields
p(x, N) ∝ √(2/(πN)) exp(−x²/(2Nℓ²)).
This is a Gaussian symmetric about its mean 0, which tells us that we’re most likely to have a net displacement of x = 0.
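The Gaussian limit can be checked against the exact counting formula (a sketch with ℓ = 1 and N = 100; x must have the same parity as N):

```python
import math

N = 100  # number of steps of size ell = 1

def p_exact(x, N):
    # p(x, N) = C(N, N_R) / 2^N with N_R = (N + x) / 2 right-steps
    return math.comb(N, (N + x) // 2) * 0.5 ** N

def p_gauss(x, N):
    # the Stirling-approximation result sqrt(2 / (pi N)) exp(-x^2 / (2N))
    return math.sqrt(2 / (math.pi * N)) * math.exp(-x ** 2 / (2 * N))

max_err = max(abs(p_exact(x, N) / p_gauss(x, N) - 1) for x in range(0, 21, 2))
print(max_err)  # percent-level agreement already at N = 100
```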
Fact 84 This explains why in polymers, most of the time there are blobs rather than straighter lines! It’s much more probable to be close to the mean.
If we compute the variance, ⟨x²⟩ = Nℓ² as expected, and if we say that steps are equally spaced by some time δt, the variance is again ⟨x²⟩ = (ℓ²/δt) t ∝ t.
Definition 85 Define the diffusion constant D such that ⟨x2⟩= 2Dt.
This has various applications!
12.4 Multiple random variables Let’s say we have two variables x and y.
Definition 86 The joint probability distribution function
p(x₀, y₀) = ∂²F/∂x∂y |_{x=x₀, y=y₀} ⟹ d²F = p(x, y) dx dy
gives the probability of x occurring between x₀ and x₀ + dx and y between y₀ and y₀ + dy.
Then F(x, y) = ∫_{−∞}^x dξ ∫_{−∞}^y dη p(ξ, η).
We can also define some other quantities like we did in the one-dimensional case:
Definition 87 The expectation value of a function f(x, y) is
∫_{−∞}^∞ dx ∫_{−∞}^∞ dy p(x, y) f(x, y).
We can also integrate one variable out to find the probability distribution of the other: for example, p(x) = ∫_{−∞}^∞ dy p(x, y).
With this in mind, let’s try to relate our variables. Can we answer questions like Question 88. What is the probability that X lies between x and x + dx given that Y is certain, denoted P(X|Y )?
Note that p(x|y) should be proportional to p(x, y): we’ll talk more about this next time!
13 February 27, 2019 (Recitation) Some of Professor Ketterle’s colleagues said that photons don’t have mass, which is true in the basic sense. But there’s a relativistic mass-energy equivalence, so writing the equation for a photon E = m0c2 does actually make sense and has a nonzero m0 for photons. But there’s a question beyond semantics of “real mass” versus “relativistic mass” here: Question 89. If we take a cavity and put many photons inside bouncing back and forth, does the mass increase? Does it have more inertia?
The answer is yes, since we have the photons “basically at a standstill!” But the whole point is to be careful what we mean by “mass.” 13.1 Small aside for surface tension Given a water droplet on a surface, there’s three different surface tensions, corresponding to the three pairs out of {surface, air, water}. The concept of surface tension is that surfaces want to shrink, and this creates net forces at interfaces.
Water droplets stop existing when the forces cannot be balanced anymore, so an equilibrium state cannot exist.
This point is called critical wetting, and anything beyond that point results in water coating the whole surface!
13.2 Delta functions and probability distributions Example 90 Let’s say we have a harmonic oscillator in a dark room, and there is a lightbulb on the oscillator. If we take a long-exposure photo, what does the light distribution look like?
If we let x = sin ωt, all phases φ are equally probable, since φ is proportional to time. So we want to go from a probability distribution of φ to one of x: how could we do that?
Well, we know that the velocity is slower at the ends, so we expect more values of x on the ends than the middle.
(This is explained more rigorously in the problem set.) The punchline is that the probability distribution is going to be proportional to 1 v(x), and now we can proceed with mathematics.
So in this case, we have time as our random variable, and we need 1 = ∫₀^T p(t) dt for a period of length T. So our starting distribution is p(t) = 1/T, and now we want to turn this into a distribution p(x).
Well,
p(x) = ∫₀^T p(t) δ(x − x(t)) dt,
since we ask for the moments in time where x(t) = x. In this case, p(t) is constant, so this is
(1/T) ∑_i 1/|f′(t_i)| = (1/T) · 1/|v(t_i)|,
where f(t) = x − x(t) and the t_i are its roots. (Notice that this gives us our probability normalization for free!) So now
dx/dt = ω cos ωt = ω √(1 − sin² ωt),
which we can write in terms of x as
dx/dt = ω √(1 − x²) ⟹ p(x) = (1/(ωT)) · 1/√(1 − x²).
But wait! We haven’t been careful enough, because there are two points in each period where x(t) = x. The slopes there are negatives of each other, so even though |dx/dt| is the same, we need to count the two roots separately. Thus the actual answer is
p(x) = (2/(ωT)) · 1/√(1 − x²) = 1/(π√(1 − x²)),
since ωT = 2π.
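Sampling the phase uniformly reproduces this 1/(π√(1 − x²)) distribution (a sketch; the bin location x₀ = 0.5 and width are arbitrary):

```python
import math
import random

random.seed(3)
# draw the phase uniformly over a period and record x = sin(phase)
xs = [math.sin(2 * math.pi * random.random()) for _ in range(200000)]

# compare the fraction landing near x0 with p(x0) * dx, p(x) = 1/(pi sqrt(1 - x^2))
x0, dx = 0.5, 0.02
frac = sum(1 for x in xs if abs(x - x0) < dx / 2) / len(xs)
pred = dx / (math.pi * math.sqrt(1 - x0 ** 2))
print(frac, pred)
```

The pile-up near x = ±1, where the velocity vanishes, is exactly the long-exposure brightness pattern described above.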
The idea in general is that if we have a probability distribution p(x), and y = g(x) is some function (with inverse x = f(y)), we can find the probability distribution p(y) by
p(y) = ∫ p(x) δ(y − g(x)) dx.
Basically, we pick out all values where g(x) = y. But here’s another way to see it: the probability differential p(x) dx should correspond to another probability differential p(y) dy, so
p(y) = p(x) |dx/dy|.
We’d just have to be careful about multiple roots, which the delta function handles for us.
13.3 Poisson radioactive decay When sampling decay, scientists often take a small time interval ∆t such that λ∆t = ⟨n⟩is very small. This is the limit ⟨n⟩≪1: the distribution is also correct in this limit.
In general, the probability to get one count is ⟨n⟩= p: our question is to find the probability of two counts in that small interval. Is it p2?
Example 91 Consider a die with N faces. We throw the die twice (n = 2): what is the probability we get two 1s? It’s 1/N².
On the other hand, what’s the probability we get exactly one 1? It’s (2N − 2)/N²; as N → ∞, this goes to 2/N.
So notice that the probability of two 1s is not the square of the probability of exactly one 1! In fact, with p = 2/N, it’s p²/4.
But back to the radioactive decay case. Does the same argument work here? Well, the Poisson distribution is
p_n = (⟨n⟩ⁿ/n!) e^{−⟨n⟩}.
Taking this to the limit where ⟨n⟩ ≪ 1, we can neglect the exponential factor, and with p = ⟨n⟩,
p_n = pⁿ/n!.
For n = 2 this is p²/2, not the p²/4 from before, because throwing a die is not Poisson - it’s binomial! To modify the distribution into one that’s Poisson, we have to make N, the number of sides, go to infinity, but we also need to take n, the number of throws, to infinity, keeping the expected number of events (so n/N) constant. We’ll do this more formally next time, but this limit will indeed get us the desired Poisson distribution!
Question 92. Let’s say we have a count of N for radioactive decay: what is σ?
This is shot noise: it’s just √ N.
Question 93. Let’s say we flip a fair coin N times: what’s σ for the number of heads that appear?
Binomial variance works differently: since σ² = Np(1 − p) = N/4 here, σ = √N/2!
Question 94. What if the probability for a head is 0.999?
In this case, σ² is much less than N, and we’ll have essentially no fluctuation compared to √N. So this binomial distribution works in the opposite direction! On the other hand, taking the probability of a head to be 0.001 gives basically √(count) fluctuations, just like shot noise. That’s the idea of the binomial distribution
(N choose n) aⁿ(1 − a)^{N−n}
with mean Na and variance Na(1 − a): when a (or, counting the other outcome, 1 − a) is very small, the count of the rare outcome yields statistics similar to the Poisson distribution!
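The three regimes can be compared side by side (a sketch; every scenario is tuned to expect about 100 counts):

```python
import math

def binomial_sigma(n_trials, p):
    # sigma^2 = n p (1 - p) for a binomial count
    return math.sqrt(n_trials * p * (1 - p))

poisson_sigma = math.sqrt(100)              # pure shot noise: sqrt(<n>) = 10
fair = binomial_sigma(200, 0.5)             # 100 heads from 200 fair flips: sqrt(50)
rare = binomial_sigma(100000, 0.001)        # rare events: nearly Poissonian, ~10
near_certain = binomial_sigma(101, 0.99)    # p close to 1: almost no fluctuation
print(poisson_sigma, fair, rare, near_certain)
```

Only the rare-event case reproduces the √⟨n⟩ shot-noise result; the others fluctuate less.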
14 February 28, 2019 There is an exam on March 12th, so here is some information on it! It will be at 11-12:30 (during regular class hours), and there will be four questions on the exam. This year, Professor Fakhri will post the past five years’ worth of exams, and three of the four problems will be from the past five years. They will be posted in the next few days, and we’ll have about 10 days to work through those. (One question will be new.) The material will cover things up to next Thursday. The next two lectures will talk about Shannon entropy, and those are the last lectures that will be on the exam.
There will also be an optional review session held by the TAs next Thursday.
Next week, the American Physical Society meeting will be taking place, so if we want extra office hours, we should send over an email!
14.1 Overview We started talking about conditional probability last time, which will be helpful in talking about canonical ensemble properties. We also thought about the probability distribution of a sum of random variables, particularly in the context of repeated measurements of some quantity. The idea is that the average gets closer and closer to the actual quantity.
The idea was that we started with a two-variable probability distribution, and we wanted to find the probability that X lies between x and x + dx given a fixed value of y. This is denoted p(X|Y ).
14.2 Conditional probability Claim 95. p(x, y) is proportional to p(x|y).
We know that ∫ p(x|y) dx = 1, since with y held fixed, we expect to find x somewhere (we’re just limiting ourselves to a one-dimensional probability distribution). In addition, ∫ p(x, y) dx = p(y), since this integrates over “all possible values of x” for a given y. Thus, we can see that (essentially removing the integrals)
p(x|y) = p(x, y)/p(y).
This is the Bayesian conditional probability formula!
Fact 96 We plot p(x|y) using “contour plots.” Example 97 Let’s say that the probability of an event happening is uniform inside a circle of radius 1 and 0 everywhere else.
We can write this mathematically using the Heaviside step function:
Definition 98 Define the Heaviside step function θ(x) = ∫_{−∞}^x δ(s) ds.
This can be written as θ(x) = 1 for x > 0 and θ(x) = 0 for x < 0, and it is unclear what θ(0) is.
Then the example above has a probability distribution of
p(x, y) = (1/π) θ(1 − x² − y²),
since we only count the points with 1 − x² − y² ≥ 0, and the normalization factor comes from the area of the circle (π). Then if we want to find the probabilities p(x), p(y), p(y|x), we can do some integration:
• To find p(x), integrate out the ys:
p(x) = ∫ dy (1/π) θ(1 − x² − y²) = (1/π) ∫_{−√(1−x²)}^{√(1−x²)} dy = (2/π) √(1 − x²),
where we take the limits to be the zeros of the argument of θ. This holds for all |x| < 1 (the probability is 0 otherwise).
• Similarly, we find p(y) = (2/π) √(1 − y²) for |y| < 1.
• Finally, for the conditional probability,
p(y|x) = p(x, y)/p(x) = [(1/π) θ(1 − x² − y²)] / [(2/π) √(1 − x²)] = θ(1 − x² − y²) / (2√(1 − x²)),
which is 1/(2√(1 − x²)) for |y| < √(1 − x²) and 0 everywhere else. The idea is that we were initially choosing points uniformly in the circle, so the distribution for a given x should also be uniform in y.
Definition 99 Two random variables X and Y are statistically independent if and only if
p(x, y) = p(x) p(y).
This means knowing y tells us nothing about x and vice versa: as a corollary, p(x|y) = p(x) and p(y|x) = p(y).
Data analysis uses Bayes’ Theorem often, so we should read up on it if we’re curious! Also, see Greytak’s probability notes page 27 to 34 on jointly Gaussian random variables.
14.3 Functions of random variables Suppose x is a random variable with probability distribution dFx dx = p(x) [recall F is the cumulative distribution function].
Let y = f(x) be a function of the random variable x: what is the probability distribution dF_y/dy = p(y)?
Example 100 Consider a one-dimensional velocity distribution p_V(v) = dF_V/dv = c e^{−mv²/(2k_BT)}.
Given that E = mv²/2, is there any way we can find the probability distribution p(E)?
The naive approach is to use the chain rule: just say
dF_E/dE = (dF_E/dv)(dv/dE).
But if we compute this, dE/dv = mv = √(2mE), and we can plug this in to find
p(E) = c e^{−E/(k_BT)} · 1/√(2mE).
Unfortunately, this is not normalized: instead, let’s use delta functions to try to get the right answer! We can write
dF_x/dx = p(x) = ∫_{−∞}^∞ dξ p(ξ) δ(x − ξ),
where the delta function only plucks out the term where x = ξ. So
dF_y/dy = ∫_{−∞}^∞ dξ p(ξ) δ(y − f(ξ))
(basically, we only select the values of ξ where f(ξ) = y). Using the important property of delta functions, this is
dF_y/dy = ∑_{ξᵢ} p(ξᵢ) / |df/dξ|_{ξᵢ}.
So now if we have dF_v/dv = p(v) = c e^{−mv²/(2k_BT)} as before, we can write
dF_E/dE = ∫_{−∞}^∞ p(u) δ(E − mu²/2) du,
since we want all values of u with E = mu²/2. This happens at uᵢ = ±√(2E/m): there are two roots, so that results in
dF_E/dE = (2/√(2mE)) c e^{−E/(k_BT)},
which (as can be checked) is properly normalized. The chain rule method misses the multiple roots!
14.4 Sums of random variables Let’s say we have random variables x₁, · · · , xₙ with probability density functions pⱼ(xⱼ) for 1 ≤ j ≤ n.
Assume the xⱼ are all statistically independent: then
p(x₁, x₂, · · · , xₙ) = ∏_{j=1}^n pⱼ(xⱼ).
For simplicity, let’s say that all the pⱼ are the same function p(x). Then we can also write this as
p(x₁, · · · , xₙ) = ∏_{j=1}^n ∫ dyⱼ p(yⱼ) δ(xⱼ − yⱼ)
(the delta function notation will make our lives easier later on). Our goal is to find the mean, variance, and other statistics for x₁ + · · · + xₙ.
Fact 101 This is applicable for experiments that repeatedly try to measure a quantity x. The average measured value x should have a probability distribution that grows narrower and narrower!
Using the notation S_n = ∑_{j=1}^n x_j, x̄_n = S_n/n, our goal is to find the uncertainty in our measurement after n trials. More specifically, we want to find the probability distribution of x̄_n.
Proposition 102 The variance of the average is proportional to 1/n, and as n → ∞, its distribution becomes a Gaussian.
Why is this? The probability distribution for x̄_n, much like the examples above, is
p(x̄_n) = ∏_{j=1}^n ∫ dy_j p(y_j) δ(x̄_n − (1/n) ∑_{k=1}^n y_k),
and the mean of x̄_n is
∫ p(x̄_n) x̄_n dx̄_n = ∏_{j=1}^n ∫ dy_j p(y_j) (1/n) ∑_{k=1}^n y_k.
Switching the sum and product, this is
(1/n) ∑_{k=1}^n ∏_{j=1}^n ∫ dy_j p(y_j) y_k = (1/n) · n⟨x⟩ = ⟨x⟩,
which is just a convoluted way of saying that the average expected measurement is the average of x.
Next, let’s find the variance of the average: we first compute
⟨x̄_n²⟩ = ∫ x̄_n² p(x̄_n) dx̄_n,
which expands out to
∫ dx̄_n x̄_n² ∏_{j=1}^n ∫ dy_j p(y_j) δ(x̄_n − (1/n) ∑_{k=1}^n y_k),
and evaluating the delta function, this becomes
(1/n²) ∏_{j=1}^n ∫ dy_j p(y_j) (∑_{k=1}^n y_k)² = (1/n²) ∏_{j=1}^n ∫ dy_j p(y_j) (∑_{k=1}^n y_k² + 2 ∑_{k>ℓ} y_k y_ℓ).
This yields (1/n²)(n⟨x²⟩ + n(n − 1)⟨x⟩²), so
var(x̄_n) = ⟨x̄_n²⟩ − ⟨x̄_n⟩² = (1/n)⟨x²⟩ − (1/n)⟨x⟩² = var(x)/n,
as desired. So σ(x̄_n) = σ(x)/√n, and this means that the standard deviation of the average gets smaller relative to the standard deviation of x as we make more measurements!
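The var(x̄_n) = var(x)/n scaling is easy to demonstrate (a sketch; uniform draws and the sample sizes 10 and 40 are arbitrary):

```python
import random
import statistics

random.seed(5)

def sample_mean(n):
    # average of n independent uniform(0, 1) draws (var(x) = 1/12)
    return sum(random.uniform(0.0, 1.0) for _ in range(n)) / n

var10 = statistics.pvariance([sample_mean(10) for _ in range(20000)])
var40 = statistics.pvariance([sample_mean(40) for _ in range(20000)])
print(var10, var40, var10 / var40)  # ratio close to 40/10 = 4
```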
Proposition 103 The distribution of the sum of the random variables is
dF_{S_n}/dS_n = p_{S_n}(S_n) = ∏_{j=1}^n ∫ dy_j p(y_j) δ(S_n − ∑_{k=1}^n y_k).
For example, if we have two random variables,
p(S₂) = ∫∫ dx dy p₁(x) p₂(y) δ(S₂ − x − y) = ∫ dx p₁(x) p₂(S₂ − x),
which is the convolution of p₁ and p₂, denoted p₁ ⊗ p₂.
Fact 104 The sum of Gaussian random variables is another Gaussian, with mean and variance equal to the sums of the means and variances of the original Gaussians. Similarly, the sum of N Poisson variables with distribution ((λT)ⁿ/n!) e^{−λT} is a Poisson variable with ⟨S_N⟩ = N⟨n⟩ and variance var(S_N) = N var(n) = NλT.
15 March 4, 2019 (Recitation) We’ll go through some ideas from the problem set.
15.1 Probability distributions and brightness We can basically think of this problem as a sphere with fireflies emitting light. What do we see? Well, we know that the light intensity depends on ρ, where ρ = p x2 + y 2. So we basically want to integrate out the z-direction.
The idea is that we should go from p(x, y, z) to different coordinates. Any probability distribution integrates to 1, so we are interested in picking out the values
p(ρ) = ∫ p(x, y, z) δ(ρ − √(x² + y²)) dx dy dz,
where ρ² = x² + y².
Question 105. Is p(ρ) the brightness?
Not quite! p(ρ) dρ gives the “probability,” or number of stars, in a narrow annulus from ρ to ρ + dρ. So to find the brightness, we need to divide by the area 2πρ dρ of that annulus. This means B(ρ) = p(ρ) dρ / (2πρ dρ) = p(ρ)/(2πρ).
When we calculate this and plot it, p(ρ) looks linear for small ρ, so B(ρ) starts off approximately constant close to the center! This makes sense: the “thickness” of the sphere is about the same at each point near the center.
15.2 Change of random variables Let’s say we have some function (think potential energy) E = f(x) with inverse function x = g(E). We’re given some spatial distribution p(x), and there is an energy E at each point: how can we find the probability density function p(E)?
We can use the cumulative function F, defined by dF(E)/dE = p(E). By the chain rule, regarding E = f(x),
dF/dx = (dF/df)(df/dx) ⟹ p(x) = p(E) |df/dx|.
But it is possible that our function is not one-to-one: for example, what if E = f(x) = x² is quadratic? In this case, our cumulative distribution should not look like
F(E) = ∫_{−∞}^{√E} p(x) dx,
but rather
F(E) = ∫_{−√E}^{√E} p(x) dx,
so that we get all values of x such that x² ≤ E.
Another example: if we have E = x³, we just have
F(E) = ∫_{−∞}^{∛E} p(x) dx,
since again we want all values of x that make x³ ≤ E. The idea is that being “cumulative” in x doesn’t necessarily give a “cumulative” function in f(x), so we need to be careful! This is part of the reason why we like to use delta functions instead: the central idea is that we need to look at all the roots of f(x) = E.
So the rest is now a differentiation:
p(E) = (d/dE) F(E) = (d/dE) ∫_{−∞}^{g(E)} p(x) dx,
where g(E) is a root of the equation f(x) = E (there could be multiple such roots, which means we’d have to split into multiple integrals). So by the first fundamental theorem of calculus, this is
p(E) = p(g(E)) |dg/dE| ⟹ p(E) = p(x) · 1/|df/dx|,
as before! In general, if our expression looks like
p(E) = (d/dE) [ ∫_{x_i}^{x_{i+1}} p(x) dx + ∫_{x_{i+2}}^{x_{i+3}} p(x) dx + · · · ],
this gives the sum
∑_i p(x_i) |dg/dE|_{f(x_i)=E},
where the absolute value comes from the fact that the lower limits have the opposite sign of slope from the upper limits, which cancels the negative sign from the first fundamental theorem.
Here’s one more method that we will all like! Add a theta function θ(E − f(x)): we want to integrate over all values with f(x) ≤ E, where θ is the step function, and this is just
F(E) = ∫ p(x) θ(E − f(x)) dx
(think of this as “including” the parts that have a small enough value of f(x)). But now taking the derivative,
p(E) = (d/dE) F(E) = ∫ p(x) θ′(E − f(x)) dx,
and since the derivative of the theta function is the delta function,
p(E) = ∫ p(x) δ(E − f(x)) dx.
Fact 106 Key takeaway: sum over roots and correct with a factor of the derivative!
Now if we have a function E of multiple variables, E = f(x, y, z), we can pick out the “correct values” via
p(E) = ∭ p(x, y, z) δ(E − f(x, y, z)) dx dy dz.
How do we evaluate the derivatives here? In general δ could be a product of three delta functions: for example, think of a point charge in electrostatics. But in this case, we’re dealing with a one-dimensional delta function. We solve the equation E − f(x, y, z) = 0 for x: we may have two roots x₁, x₂, and then
p(E) = ∬ ∑_{i=1}^2 p(x_i, y, z) · 1/|∂f/∂x (x_i, y, z)| dy dz.
The point is that the delta function eliminates one variable, so integrate one variable at a time! Alternatively, there may be some condition on y and z (for example, if E = x² + y² + z², then y² + z² is forced to be within some range), and that just means we have to add an additional condition: either a restricted integration or a theta function.
16 March 5, 2019 Fact 107 The optical trap, a focused beam of light, can trap particles of size approximately 1 micron. People can play Tetris with those particles! Once the optical traps are turned off, though, particles begin to diffuse, and this can be explained by statistical physics.
The American Physical Society meeting was held this week: 90 percent of the awards this year were given to statistical physics applied to different fields.
The material covered was a bit more challenging than areas we’ve explored so far, but there are many applications of what we’ve discussed!
16.1 Entropy and information: an overview Let’s start by reviewing why we’ve been talking about all the topics of this class so far. We used the first law dU = đQ + đW, where đW is a product of an intensive and an extensive quantity, such as −P dV (a generalized force times a displacement).
We found that đQ is not a state function: it depends on the path we take to get to a specific state. Entropy is an old idea, and it comes from thermodynamics as a way to keep track of heat flow! It turns out that this S is indeed a state function, and it can characterize a macroscopic state.
We’re going to use the ideas from probability theory to write down an “entropy of a probability distribution” and measure how much “space” is needed to write down a description of the states. One definition we’ll see later is the probabilistic S = −∑ⱼ pⱼ log pⱼ.
Consider the following thought experiment: let’s say we have some particles of gas in a box, and we have the experimental tools to measure all of those positions and velocities. We write them down in a file, and we want to compress the files to the smallest possible length.
If we compress efficiently, the length of the file tells us something about the entropy of the distribution. For example, if all of the particles behave “simply,” it will be easy to efficiently compress the data, but if the system looks more “random” or “variable,” the compression will be less effective.
Fact 108 In general, if we heat up the room by 10 degrees and repeat this process, the compressed file will have a longer length. Generally, we want to see the change in length of the file per temperature or heat added!
This is a connection between two different ideas: an abstract length of a computer program and a tangible heat.
16.2 Shannon entropy and probability Let's say we have a set $S = \{s_1, \dots, s_N\}$ whose outcomes have probabilities $p_1, \dots, p_N$. An example of a "well-peaked" distribution is $p_1 = 1$, $p_j = 0$ for all $j \neq 1$.
If we see an event from this probability distribution, we are "not surprised," since we knew everything about the system from the beginning. On the other hand, we could have all $p_j$ approximately equal: $p_j = \frac{1}{N}$ for all $j$.
The amount of surprise for any particular event is about as high as it can be in this case! So we're really looking at the amount of randomness we have. Claude Shannon published a paper that was basically the birth of information theory: Proposition 109 (Shannon, 1948) What is the minimum number of binary bits $\sigma$ needed on average to reproduce the precise value of a symbol drawn from the given distribution? This turns out to be $\sigma = -\sum_{j=1}^{N} p_j \log_2 p_j$.
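As a quick numeric check of this formula, here is a minimal sketch in Python (the function name `shannon_entropy` is ours, not from the notes), evaluating $\sigma$ for a few distributions:

```python
import math

def shannon_entropy(probs):
    """sigma = -sum_j p_j log2 p_j: minimum average bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A delta-function distribution carries no surprise; a uniform one is maximal.
certain = shannon_entropy([1.0])          # 0 bits: we always know the outcome
fair_coin = shannon_entropy([0.5, 0.5])   # 1 bit per fair coin flip
uniform4 = shannon_entropy([0.25] * 4)    # 2 bits for 4 equally likely symbols
```

Note the `if p > 0` guard: zero-probability outcomes contribute nothing, consistent with $p \log p \to 0$ as $p \to 0$.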
Example 110 Let’s say we have a fair coin that can come up heads or tails with PH = PT = 1 2. If we have a string of events like HTHHTHTTHTTT, we can represent this in a binary string by H →1, T →0.
Clearly, we do need 1 bit to represent each coin flip. Our "symbol" here is an individual "head" or "tail" event, and the minimum number of bits needed is $-\left(\frac12 \log_2 \frac12 + \frac12 \log_2 \frac12\right) = 1$.
So our "coding scheme" sending heads and tails to 1 and 0 is "maximally efficient." Example 111 Let's say we have four symbols A, B, C, D, all equally likely to come up with probability $\frac14$. We can represent this via $A \to 00$, $B \to 01$, $C \to 10$, $D \to 11$.
The Shannon entropy of this system is indeed $-\sum_{i=1}^{4} \frac14 \log_2 \frac14 = 2$, so we need at least 2 bits to encode each symbol.
Example 112 Let’s say we have three symbols with probability PA = 1 2, PB = 1 4, PC = 1 4.
Naively, we can represent $A = 00$, $B = 01$, $C = 10$, so we need 2 bits per symbol. But is there a better code? Yes, because the Shannon entropy is $-\left(\frac12 \log_2 \frac12 + \frac14 \log_2 \frac14 + \frac14 \log_2 \frac14\right) = \frac32$, so there should be a way to code each symbol in 1.5 bits on average! Here's a better coding scheme: use $A = 0$, $B = 10$, $C = 11$, which gives an average of $\frac12 \cdot 1 + \frac14 \cdot 2 + \frac14 \cdot 2 = \frac32$. Now given a string 00101000111101001010, we can reconstruct the original symbols: if we see a 0, pull it out as an A; if we see a 1, pull it and the next bit out to form a B or C; and rinse and repeat until we reach the end!
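To make the decoding procedure concrete, here is a sketch (the helper names `decode`, `avg_len` are ours) of the prefix-free code above, checking that its average length matches the entropy:

```python
import math

code = {"A": "0", "B": "10", "C": "11"}   # prefix-free: no codeword starts another
probs = {"A": 0.5, "B": 0.25, "C": 0.25}

avg_len = sum(probs[s] * len(code[s]) for s in code)
entropy = -sum(p * math.log2(p) for p in probs.values())

def decode(bits, code):
    """Greedy prefix decoding: accumulate bits until a codeword matches."""
    inv = {v: k for k, v in code.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inv:
            out.append(inv[buf])
            buf = ""
    return "".join(out)
```

With this code, `avg_len` and `entropy` both come out to exactly 1.5 bits per symbol, and `decode("0100", code)` recovers "ABA" (the encoding of A, B, A).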
In general, the idea is to group symbols to form composites, and to associate the highest-probability symbols with the shortest bit strings. In the case above, we had a high chance of having A, so we made sure it didn't require too many bits whenever it appeared.
Example 113 If we have a biased coin with probability $\frac34$ of heads (A) and $\frac14$ of tails (B), the Shannon entropy is $-\left(\frac34 \log_2 \frac34 + \frac14 \log_2 \frac14\right) \approx 0.811$.
So there should be a way to represent the heads-tails method in less than 1 character per flip!
We can group symbols into composites (pairs) with probabilities $AA: \frac{9}{16}$, $AB: \frac{3}{16}$, $BA: \frac{3}{16}$, $BB: \frac{1}{16}$.
These are all fairly close to powers of 2, so let's represent AA as 0, AB as 10, BA as 110, and BB as 111 (this is not perfect, but it works pretty well). Then on average, we need $\frac{9}{16} \cdot 1 + \frac{3}{16} \cdot 2 + \frac{3}{16} \cdot 3 + \frac{1}{16} \cdot 3 = \frac{27}{16} \approx 1.688$ bits to represent two symbols, for an average of less than 1 bit per symbol! This is better than the version where we just use 1 for heads and 0 for tails.
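The arithmetic above can be verified in a few lines (the dictionary names here are ours): compute the single-flip entropy and the average cost per flip of the pair code.

```python
import math

p_heads = 3 / 4
# Shannon entropy of one biased flip, in bits.
h = -(p_heads * math.log2(p_heads) + (1 - p_heads) * math.log2(1 - p_heads))

# Code pairs of flips: AA -> "0", AB -> "10", BA -> "110", BB -> "111".
pair_probs = {"AA": 9 / 16, "AB": 3 / 16, "BA": 3 / 16, "BB": 1 / 16}
pair_bits = {"AA": 1, "AB": 2, "BA": 3, "BB": 3}

bits_per_pair = sum(pair_probs[s] * pair_bits[s] for s in pair_probs)
bits_per_flip = bits_per_pair / 2   # 27/32 ≈ 0.844: already below 1 bit per flip
```

The pair code lands between the Shannon bound (≈0.811 bits/flip) and the naive 1 bit/flip, exactly as the text argues; grouping longer blocks would close the gap further.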
Fact 114 If we instead group 3 symbols, we may be able to get an even better coding scheme! We do have to make sure we can unambiguously decode, though.
By the way, for our purposes from now on, we'll be using the natural log instead of the base-2 log, since we have continuous systems instead. Note that $\log_2 X = \frac{\ln X}{\ln 2}$, so the Shannon entropy is $\sigma = -\frac{1}{\ln 2} \sum_n p_n \ln p_n$, and if we instead have a continuous probability distribution, we can integrate instead: $\sigma = -\int p(n) \log_2 p(n)\, dn$, where we normalize such that $\int p(n)\, dn = 1$.
16.3 Entropy of a physical system Now that we have some intuition for "representing" a system, let's shift to some different examples.
Consider the physical quantity $S = -k_B \sum_i p_i \ln p_i$.
Note that all terms here are nonnegative, so the minimum possible value is $S = 0$: this occurs when there is only one event with probability 1 and no other possibility. This is called a delta function distribution. On the other hand, the maximum possible value occurs with a uniform distribution, where all $p_i$ are the same. If there are $M$ events each with probability $\frac{1}{M}$, this evaluates to $-k_B \sum_{i=1}^{M} \frac{1}{M} \ln \frac{1}{M} = k_B \ln M$.
(By the way, the kB is a way of converting from Joules to Kelvin for our measure of temperature.) Proposition 115 This means S is a measure of “dispersion” or “disorder” of the distribution! So this gives an estimate of our probability distribution, or at least its general shape.
For example, if we have no information about our system, we expect it to be uniform. This yields the maximum possible value of S (or entropy), and this is the best unbiased estimate of our distribution. Once we obtain additional information, our unbiased estimate is obtained by maximizing the entropy given our new constraints.
Fact 116 This is done using Lagrange multipliers!
If we have some new information $\langle F(x)\rangle = f$ (we measure the value of some function $F(x)$), we want to maximize $$\tilde S(\alpha, \beta, \{p_j\}) = -\sum_i p_i \ln p_i - \alpha\left(\sum_i p_i - 1\right) - \beta\left(\sum_i p_i F(x_i) - f\right).$$ Our constraints are that our distribution must be normalized and that $\langle F(x)\rangle - f$ must vanish. It turns out this gives a Boltzmann distribution $p_i = \alpha \exp(-\beta F(x_i))$.
Here β is fixed by our constraints, and α is our normalization factor! For example, we could find β by knowing the average energy of particles. We’ll see this a bit later on in the course.
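Here is a small numerical sketch of this maximum-entropy recipe (all names and the three-level "energies" are hypothetical, not from the notes): the normalization fixes $\alpha$, and we fix $\beta$ by bisection so that the Boltzmann distribution reproduces a measured mean $\langle F \rangle = f$.

```python
import math

def boltzmann(energies, beta):
    """Normalized maximum-entropy distribution p_i ∝ exp(-beta * F(x_i))."""
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)                       # z plays the role of the normalization alpha
    return [wi / z for wi in w]

def mean_F(energies, beta):
    p = boltzmann(energies, beta)
    return sum(pi * e for pi, e in zip(p, energies))

def solve_beta(energies, f, lo=-50.0, hi=50.0):
    """Bisect on beta: <F> decreases monotonically as beta increases."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_F(energies, mid) > f:
            lo = mid                 # mean too high: need larger beta
        else:
            hi = mid
    return 0.5 * (lo + hi)

energies = [0.0, 1.0, 2.0]           # hypothetical three-level system
beta = solve_beta(energies, 0.8)     # constrain the mean to f = 0.8
p = boltzmann(energies, beta)
```

Since the constrained mean 0.8 sits below the unconstrained (uniform) mean of 1.0, the solver returns a positive $\beta$, and the resulting $p$ weights low "energies" more heavily.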
16.4 Entropy in statistical physics Recall that we specify a system by stating a thermodynamic equilibrium macrostate: for example, we give the internal energy, pressure, temperature, and volume of a system. This is specifying an ensemble.
On the other hand, we can look at the microstates of our system: they can be specified in quantum systems by numbers {nj, 1 ≤j ≤N} or in the classical systems by positions and velocities {xi, vi, 1 ≤i ≤N}.
We can set up a distinction here between information theory and statistical mechanics. In information theory, our ensembles look very simple: we usually have a small number of possible outcomes, but the probability distributions can look very complicated. On the other hand, ensembles in statistical mechanics are often much more complicated (lots of different possible microstates), but our probability distributions are much simpler. The idea is that S, our entropy, will be the maximum value of $S = -k_B \sum_i p_i \ln p_i$ across all probability distributions $\{p_i\}$.
But what are the distributions given our constraints? That’s what we’ll be looking at in the rest of this class!
17 March 6, 2019 (Recitation) Let's start with a concrete example of the discussion from last recitation. Let's say we have a probability distribution that is uniform inside a circle of radius R: $p(x, y) = \frac{1}{\pi R^2}$ for $x^2 + y^2 \le R^2$ and 0 outside. We're going to find the probability distribution $p(r)$ in three different ways.
17.1 The messy way First of all, if we use Cartesian coordinates, we can directly write this in terms of a delta function: $$p(r) = \int_{-R}^{R} \int_{-\sqrt{R^2 - y^2}}^{\sqrt{R^2 - y^2}} p(x, y)\, \delta\!\left(r - \sqrt{x^2 + y^2}\right) dx\, dy.$$ Let's take care of the delta function as a function of $x$. We know that $\delta(f(x))$ is a sum of $\delta(x - x_i)$ over the roots $x_i$ of $f$, each divided by $|f'(x_i)|$, so let's compute the roots!
$$f(x) = r - \sqrt{x^2 + y^2} \implies x_\pm = \pm\sqrt{r^2 - y^2}.$$
The absolute value of the derivative is equal at both roots: $$f'(x) = -\frac{x}{\sqrt{x^2 + y^2}} \implies |f'(x_\pm)| = \frac{\sqrt{r^2 - y^2}}{r} = \sqrt{1 - \left(\frac{y}{r}\right)^2}.$$
So now we can evaluate our boxed expression above. The delta function is integrated out (except that we gain a factor of $|f'|$ in the denominator, and we replace $x$ with the root $x_i$ wherever it appears). But here the probability distribution is uniform (does not depend on $x$ explicitly), and the two roots have equal $|f'(x_i)|$, so we get a factor of 2. This simplifies to $$\sum_{x_i \text{ roots}} \int_{-R}^{R} \int_{-\sqrt{R^2 - y^2}}^{\sqrt{R^2 - y^2}} \frac{\delta(x - x_i)}{\pi R^2} \cdot \frac{1}{\sqrt{1 - (y/r)^2}}\, dx\, dy = \int_{-R}^{R} \frac{1}{\pi R^2} \cdot \frac{2}{\sqrt{1 - (y/r)^2}}\, dy,$$ where the $\delta(x - x_i)$ factors integrate out to 1, since $x_i = \pm\sqrt{r^2 - y^2}$ is always in the range $\left[-\sqrt{R^2 - y^2}, \sqrt{R^2 - y^2}\right]$.
But we must be careful: if $|y| > |r|$, or if $|r| > R$, we don't actually have these two roots! So we put in some constraints in the form of theta (step) functions: they force $R > r$ and $r^2 > y^2$: $$\theta(R - r)\, \frac{2}{\pi R^2} \int_{-R}^{R} \frac{\theta(r^2 - y^2)}{\sqrt{1 - (y/r)^2}}\, dy$$ (where we have $r^2$ and $y^2$ in the second theta function to deal with potentially negative values of $y$). What does that $\theta$ function mean? The inner one just means we integrate across a different range of $y$: $$\theta(R - r)\, \frac{2}{\pi R^2} \int_{-r}^{r} \frac{1}{\sqrt{1 - (y/r)^2}}\, dy,$$ and now we can integrate this: substituting $u = \frac{y}{r}$, this is $$\theta(R - r)\, \frac{2r}{\pi R^2} \int_{-1}^{1} \frac{1}{\sqrt{1 - u^2}}\, du,$$ and the integral is $\sin^{-1}(u)\big|_{-1}^{1} = \pi$, resulting in a final answer of $$p(r) = \theta(R - r)\, \frac{2r}{R^2} = \begin{cases} \frac{2r}{R^2} & r < R \\ 0 & r \ge R. \end{cases}$$ It's stupid to use Cartesian coordinates here, but this shows many of the steps needed!
17.2 Polar coordinates Here's a faster way: with $x = \rho\cos\theta$, $y = \rho\sin\theta$, the probability distribution becomes $p(x, y) = \frac{1}{\pi R^2}$ for $\rho < R$. So now we can write our boxed double integral above in our new coordinates: $$\int_0^R \int_0^{2\pi} p(x, y)\, \delta(r - \rho)\, \rho\, d\theta\, d\rho.$$
The integration over $d\theta$ gives a factor of $2\pi$, and $p$ is uniform, which simplifies this to $$\frac{2\pi}{\pi R^2} \int_0^R \delta(r - \rho)\, \rho\, d\rho.$$
The delta function has only the root $\rho = r$: since we're integrating over $[0, R]$, this is $$p(r) = \frac{2\pi}{\pi R^2}\, r\, \theta(R - r) = \frac{2r}{R^2}\, \theta(R - r),$$ which is identical to what we had before.
17.3 Without delta functions We can use cumulative density distributions instead! What is the cumulative probability $$F(r) = \int_{-r}^{r} \int_{-\sqrt{r^2 - y^2}}^{\sqrt{r^2 - y^2}} p(x, y)\, dx\, dy?$$
This is the probability over all $x^2 + y^2 \le r^2$. This integral is just $p(x, y)$ (which is constant) times the area of a circle with radius $r$, which is $\frac{\pi r^2}{\pi R^2} = \frac{r^2}{R^2}$ as long as $r < R$. So $p(r)$ is just the derivative of $F(r)$: $$p(r) = \frac{dF(r)}{dr} = \frac{2r}{R^2},$$ and we're done! We could have fixed up the edge case of $r > R$ by adding a theta function $\theta(r - \rho)$ inside the original integrand. Then the derivative of the theta function is the delta function, which gives the same delta function as in our first method.
To summarize, we can generally avoid delta functions with cumulative densities.
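As a sanity check on $p(r) = 2r/R^2$, here is a quick Monte Carlo sketch (the rejection-sampling setup and variable names are ours): the CDF $F(r) = r^2/R^2$ predicts that half the samples lie below $R/\sqrt{2}$ and a quarter below $R/2$.

```python
import random

random.seed(0)
R = 1.0
N = 200_000

# Sample points uniformly in the disk by rejection from the bounding square.
radii = []
while len(radii) < N:
    x, y = random.uniform(-R, R), random.uniform(-R, R)
    if x * x + y * y <= R * R:
        radii.append((x * x + y * y) ** 0.5)

# F(r) = r^2 / R^2, so these empirical fractions should be ~1/2 and ~1/4.
frac_half = sum(1 for r in radii if r <= R / 2**0.5) / N
frac_quarter = sum(1 for r in radii if r <= R / 2) / N
```

This checks the quadratic CDF directly, which is exactly the shortcut of Section 17.3: no delta functions needed.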
17.4 Parity violation We once assumed that if we flip our coordinates x →−x, y →−y, and so on, there is no difference in our laws.
Basically, everything in a mirror would also obey the laws of physics. But the Wu experiment proved this to be false!
This is called parity (P) violation. But there’s something more interesting: people managed to include charge (C) conjugation, changing matter and antimatter, and then CP conservation seemed to be true. But it was found that even this is violated!
Fact 117 So if you want to tell your friends in the alien world which side is right, you can tell them to run a current through a coil of wire. Put in Cobalt-60, and in the magnetic field from the coil, more electrons will come out of the top than the bottom if the coil's current is counterclockwise (viewed from above). This is a right-handed current!
But if we want to explain that our heart is on the left-hand side, we can put ourselves in a solenoid. Run a current through the solenoid, and you can say that the Cobalt-60 emission goes up. Then the current now flows across our chest from the right to the left!
But they’ll be made of antimatter, so they might hold out the wrong hand when they shake hands.
18 March 7, 2019 Just a reminder: there is an exam on Tuesday in this classroom. It will go from 11 to 12:30 (usual class time). The past exams have been uploaded: note that three of the four problems on this year’s exam will be reused from previous exams posted on Stellar. If we are able to do the problem sets and past exams, we have a good mastery of what’s going on.
By the way, the concept of density of states has been moved: it will come later, and it will not come up on the exam.
There will be an optional review session from 5 to 7 today, and Professor Fakhri will also hold additional office hours.
Material is everything from class until today, though entropy will be more about concepts than specific examples.
The next problem set will not be due on Friday.
18.1 Quick review and overview We’ve been learning about macroscopic quantities and connecting them to microscopic systems, and this led us to the idea of entropy. This was a concept that Boltzmann introduced before information theory: basically, we care about how “confused” we are about a system.
61 Today, we’re going to expand on the concept of thermodynamic entropy and introduce the second law of thermo-dynamics, which claims that entropy is nondecreasing with time. This is an emergent phenomenon!
Remember that for a thermodynamic ensemble, we defined our entropy to be $S = -k_B \sum_j p_j \ln p_j$, where we're summing over all possible microstates $j$ that occur with probability $p_j$. Note that this is also $\equiv (k_B \ln 2)\,\sigma$, where $\sigma$ is the Shannon entropy.
18.2 Looking more at entropy Proposition 118 Thermodynamic equilibrium occurs at the (unbiased) probability distribution which maximizes entropy: $$S = \max_{\{p_j\}} \left(-k_B \sum_j p_j \ln p_j\right).$$
Recall from last time that an example of such an unbiased probability distribution is the uniform distribution: all states occur with equal probability. If we have no prior information, this is the best "guess" we can have for what our system looks like. In this case, the distribution looks like $p_j = \frac{1}{\Gamma}$, where $\Gamma$ is the total number of consistent microstates. ($\Gamma$ is known as the multiplicity.) Plugging this in, $$S = -k_B \sum_{j=1}^{\Gamma} \frac{1}{\Gamma} \ln \frac{1}{\Gamma} = k_B \ln \Gamma.$$
Fact 119 On Boltzmann’s tombstone in Vienna, S = k log W is written. This equation is kind of the foundation of statistical physics!
Note that S is a measure of the macroworld, while W or Γ is a measure of the microworld, so this is a good relationship between the two.
Here’s some additional facts about our entropy S.
• S is a state function. In other words, it is independent of the path we took, so we can compute S from other macrostate variables. For example, we can write the entropy S of a gas in a box in terms of its internal energy, volume, and number of molecules: S = S(U, V, N).
On the other hand, if we have a magnet, the state function depends on the magnetization ⃗ M.
• The proportionality constant of kB arises because we chose to use units of temperature. In particular, we could have units of temperature in Joules if we just let kB = 1.
18.3 The second law Recall the first law of thermodynamics, which tells us about conservation of energy: $dU = \bar{d}Q - P\,dV$.
Proposition 120 (Second Law of Thermodynamics) Entropy of an isolated system cannot decrease.
From an information theory perspective, this is saying that our ignorance of a system only increases with time.
Let’s look at an example by time-evolving a system!
Proposition 121 In both classical and quantum systems, the time-evolution of a microstate is both causal and time-reversal invariant.
What do those words mean? Causality says that each microstate at some time t1 evolves into a unique, specific microstate at time t2 > t1. So causality says that we can’t have two different microstates at t2 that both originated from t1: if we had 100 microstates at time t1, we can’t have more than that at a later time t2.
Meanwhile, the concept of time-reversal invariance is that the laws of both classical and quantum physics are reversible if we switch $t \to -t$. For instance, any valid wavefunction $|\psi(t)\rangle$ or classical trajectory $\vec{x}(t), \vec{v}(t)$ also gives a valid $|\psi(-t)\rangle$ and $\vec{x}(-t), \vec{v}(-t)$.
So if we think about this, it means we cannot have two microstates at time t1 that converge into one at a later time t2 either. So our ignorance about the system cannot decrease!
But can the entropy increase?
18.4 A thought experiment Example 122 Consider a box with a partition, and one half of the box is filled with a gass with a known U, V, N at some time t < 0. (The other part is filled with a vacuum.) At time t = 0, the partition is removed.
Now the gas fills a volume of 2V , and U and N are still the same, so there are many more microstates that are possible. This increases our entropy! The kinetic energy of the particles has not changed, but our ignorance of the system has increased. There are many more possible values for the initial position and momentum of every particle.
What’s the change in the number of microstates Γ? If we assign a binary variable to each particle, which tells us whether the particle is on the left or right side of the box, after t > 0, we now need an extra binary bit to tell us about the system. Thus, with N particles, our change in Shannon entropy is ∆σ = N. Thus ∆S = kb ln 2∆σ = ⇒∆S = Nkb ln 2 and since S ∼log Γ, we get a factor of 2N more possible microstates!
Fact 123 This doesn't break causality or time-reversal. The idea is that every microstate before t < 0 goes to exactly one microstate at t > 0, but we don't know which one it is: the probability distribution is still uniform, just with a larger range of possibilities.
Notice that it takes some time to get from our initial state (U, V, N) to our final state (U, 2V, N) and reach equilibrium again (so that we can define our state functions). We can think of this as "mixing" states and making the probability distribution more uniform! There is a whole field studying this, called ergodic theory.
Fact 124 In any (real) macroscopic system, regardless of the initial configuration, over a long time, the system will uniformly sample over all microstates.
Basically, over a long time, the sampling of a probability distribution will yield all microstates with equal probability.
For example, instead of preparing many initial configurations, we can prepare one particle and sample it many times.
Fact 125 (Ergodic hypothesis) We can compute the properties of an equilibrium macrostate by averaging over the ensemble.
If there are microstates $s_i$ that occur with probability $p(s_i)$, and we have some function $f(s_i)$ of microstates, we can compute a property $\langle f \rangle = \sum_{s_i} f(s_i)\, p(s_i)$.
But instead, we can sample our system over time and average: $\langle f \rangle = \frac{1}{T} \int_0^T f(t)\, dt$.
This time T may be large though!
Fact 126 (Systems out of equilibrium) There are some systems that have a slow relaxation time, so they’ll never reach equilibrium within a reasonable amount of time! An example is the universe.
In the rest of this class, we’ll come up with ensembles, and find unbiased probability distributions consistent with a macrostate. We’ll try to see what conditions we can impose to define thermodynamic quantities!
18.5 Moving on We’ll be talking about different kinds of ensembles (collections of microstates) in this class.
A microcanonical ensemble is mechanically and adiabatically isolated, so its internal energy U, volume V, and number of particles N are constant. In such a system, we can define a temperature! After that, we will discuss the canonical ensemble, which trades fixed U for fixed T. We can then look at grand canonical ensembles, which are systems at fixed chemical potential.
Recall that S = S(U, V, N) is a state function on equilibrium states, and ∆S ≥ 0 for isolated systems. We also know that it is an extensive quantity (it is additive) like N, V, and U: it turns out the conjugate quantity (generalized force) here is temperature T.
How can we show additivity of entropy?
Lemma 127 Given two independent non-interacting systems A and B, the entropy SAB = SA + SB.
Proof. A has $N_A$ possible microstates $\alpha$ with probabilities $P_{\alpha,A}$, so $S_A = -k_B \sum_\alpha P_{\alpha,A} \ln P_{\alpha,A}$, and similarly for B. By statistical independence, the joint probability factors as $P_{\alpha\beta} = P_{\alpha,A} \cdot P_{\beta,B}$, so $$S_{AB} = -k_B \sum_{\alpha,\beta} P_{\alpha\beta} \ln P_{\alpha\beta} = -k_B \sum_{\alpha,\beta} P_{\alpha,A} P_{\beta,B} \left(\ln P_{\alpha,A} + \ln P_{\beta,B}\right),$$ which can be written as $$S_{AB} = -k_B \left(\sum_\beta P_{\beta,B}\right) \sum_\alpha P_{\alpha,A} \ln P_{\alpha,A} - k_B \left(\sum_\alpha P_{\alpha,A}\right) \sum_\beta P_{\beta,B} \ln P_{\beta,B},$$ and as the parenthesized sums are both 1, this is just $S_A + S_B$ as desired.
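The lemma is easy to verify numerically; a sketch with two arbitrary (made-up) distributions, taking $k_B = 1$:

```python
import math

def entropy(probs):
    """S / k_B = -sum p ln p (we set k_B = 1 for this check)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

pA = [0.5, 0.3, 0.2]
pB = [0.7, 0.3]

# Independence: the joint distribution is the product P(a, b) = P(a) P(b).
pAB = [a * b for a in pA for b in pB]

gap = abs(entropy(pAB) - (entropy(pA) + entropy(pB)))   # should vanish
```

The product structure of the joint distribution is what turns the logarithm of a product into a sum, which is the entire content of the proof above.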
19 March 11, 2019 (Recitation) We’ll cover some short questions and then relate Poisson, Binomial, and Gaussian distributions to each other.
As a quick refresher, if we have a probability distribution $p(x, y)$ and we want to find it in terms of another variable $z = f(x, y)$, then $$p(z) = \int p(x, y)\, \delta(z - f(x, y))\, dx\, dy$$ will pick out the correct values of $z$. The rest is mathematics: find the roots, take magnitudes of derivatives, and make the relevant substitutions.
19.1 Different probability distributions Question 128. What is the counting distribution for radioactive decay? Basically, measure the number of particles that decay / do something else in some interval T: if we do this multiple times, what’s the distribution going to look like?
Remember that we’ve discussed three kinds of probability distributions here: binomial, Poisson, and Gaussian.
We’re always looking for a count rate: can we distinguish anything between these three kinds?
Well, in a binomial distribution, we have some finite number of trials N, so the possible range of n, our count, is always between 0 and N. But for the Poisson distribution, n is any nonnegative integer, and the Gaussian can be any real.
The idea is that if our binomial distribution's tail is sufficiently flat on the positive end, because our probability p is small or our number of trials N is large enough, then we can extend it to $\infty$ and treat it like a Poisson distribution. But on the other hand, if our tail is sufficiently flat on the negative end, we can also extend it to $-\infty$ and treat it like a Gaussian distribution!
So the answer to the question is “yes, Poisson is correct,” but not quite! There is indeed a maximum count rate: N, the number of total atoms in our radioactive material. So this is sort of binomial, but those events are so unlikely that we can neglect them completely.
How do we rigorize this? Remember that our binomial distribution for $N$ trials of an event of probability $a$ is $$p(n, N) = \binom{N}{n} a^n (1 - a)^{N-n}.$$
If we let $N \to \infty$ but keep our mean $Na = \langle n \rangle$ constant, then $a = \frac{\langle n \rangle}{N}$, and our distribution becomes $$p(n, N) = \binom{N}{n} \frac{\langle n \rangle^n}{N^n} \left(1 - \frac{\langle n \rangle}{N}\right)^{N-n}.$$
We can neglect the $-n$ in the second exponent, since $n \ll N$, and this middle term now approaches $e^{-\langle n \rangle}$. What's more, $\binom{N}{n} = \frac{N(N-1)\cdots(N-n+1)}{n!}$ is essentially $\frac{N^n}{n!}$, and now we're left with $$p(n, N) \approx \frac{\langle n \rangle^n}{N^n}\, e^{-\langle n \rangle}\, \frac{N^n}{n!} = \frac{\langle n \rangle^n}{n!}\, e^{-\langle n \rangle},$$ which is the Poisson distribution as we wanted!
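This limit is easy to watch numerically; a sketch (the helper names are ours) holding $\langle n \rangle = Na = 3$ fixed while $N$ grows:

```python
import math

def binom_pmf(n, N, a):
    return math.comb(N, n) * a**n * (1 - a) ** (N - n)

def poisson_pmf(n, mean):
    return mean**n * math.exp(-mean) / math.factorial(n)

mean = 3.0
errs = {}
for N in (10, 100, 10_000):
    a = mean / N   # keep <n> = N a fixed as N grows
    errs[N] = max(abs(binom_pmf(n, N, a) - poisson_pmf(n, mean))
                  for n in range(11))
# errs shrinks as N grows: the binomial pmf converges to the Poisson pmf.
```

The worst-case pointwise discrepancy drops by roughly a factor of 10 for each factor of 10 in $N$, consistent with the $O(1/N)$ corrections neglected above.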
Fact 129 So letting $N$ go to infinity but adjusting the probability accordingly, we get a Poisson distribution. On the other hand (or as a subsequent step), if we make $\langle n \rangle$ larger and larger, this gives us a Gaussian distribution, by using Stirling's approximation and a Taylor expansion. This will yield $$C \exp\left(-\frac{(n - \langle n \rangle)^2}{2\langle n \rangle}\right).$$
The idea is that all that matters is the values of $n$ and $\langle n \rangle$. But here's an alternative way to go from binomial to Gaussian: keep $a$ constant, and let $N$ get larger. This now yields $$C' \exp\left(-\frac{(n - \langle n \rangle)^2}{2\langle n \rangle (1 - a)}\right).$$
So in this case, the variance is not ⟨n⟩= Na but Na(1 −a) = ⟨n⟩(1 −a). The reason this is different is because when we went to the Poisson as an intermediate step, we forced a to be small, which meant we could neglect the 1 −a term!
So now let’s look at a = 1 2, which is the case of a random walk. So our variance is ⟨n⟩ 2 , and let’s say the step size of our random walk is 1 (so we move to the right or to the left by 1 unit each time).
Fact 130 Notice that, for example, if we have 10 steps, we expect to move to the right 5 times. But if we move to the right 6 times instead, our net walk is 6−4 = 2: in general, if our number of right moves is k more than the mean, the net walk is 2k.
So if we substitute in the net displacement $x$ of our random walk, $n - \langle n \rangle = \frac{x}{2}$, and $\langle n \rangle = \frac{N}{2}$. Rewriting our Gaussian, we will just get Brownian motion: $p(x) \propto e^{-x^2/(2N)}$.
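A quick simulation sketch of this $\pm 1$ random walk (the parameters and names are arbitrary): the variance of the net displacement $x$ after $N$ steps should come out to $N$, matching the width of $e^{-x^2/2N}$.

```python
import random

random.seed(1)
n_steps = 500
n_walks = 5_000

finals = []
for _ in range(n_walks):
    # Each step is +1 or -1 with probability 1/2 (a = 1/2, unit step size).
    finals.append(sum(random.choice((-1, 1)) for _ in range(n_steps)))

mean_x = sum(finals) / n_walks
var_x = sum((x - mean_x) ** 2 for x in finals) / n_walks
# Expect mean_x ~ 0 and var_x ~ n_steps, matching p(x) ∝ exp(-x^2 / 2N).
```

Note the factor-of-2 bookkeeping from Fact 130 is exactly what converts $\mathrm{var}(n) = N/4$ into $\mathrm{var}(x) = 4 \cdot N/4 = N$.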
In general, measuring $N$ atoms at a rate of $\lambda$ for time $t$ just yields $\langle n \rangle = N\lambda t$.
But the concept behind all the counting is to track n, the number of counts, relative to ⟨n⟩, the average number of counts.
Fact 131 If we have a probability distribution $p_n$, the values of $\langle n \rangle$ and $\mathrm{var}(n)$ tell us about one trial or sample from the distribution. But if we instead consider $\bar{n}$, the average of $N$ measurements, the variance shrinks by a factor of $\frac{1}{N}$ (since variances add, the variance of the sum is $N\,\mathrm{var}(n)$, and then dividing the sum by $N$ divides its variance by $N^2$).
By the way, the formula for a normalized Gaussian distribution is on the equation sheet, so we don’t have to worry about it too much.
Let’s think about the stars problem from a past exam: we have stars distributed with density ρ stars per light-year cubed. What is the probability that there are no stars within r of a given star?
Basically, we take a volume V, and we want the probability that no other star is in that given volume. We can think of this as taking small pieces of volume, where each one is independent, and where there is a finite, consistent value of $\langle n \rangle$, the average number of stars in each piece of volume. This is a Poisson distribution! So our expectation value is $\langle n \rangle = \rho V = \frac{4\pi\rho}{3} r^3$, and $p(n) = \frac{\langle n \rangle^n}{n!} e^{-\langle n \rangle}$.
But why do we take n = 0 instead of n = 1? The first star just gives us a volume to look at, so we can completely ignore it.
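Here is a Monte Carlo sketch of the nearest-star result (the density, radius, and box size are made-up parameters): scatter stars uniformly in a box around our star and count how often none falls within $r$; this should match $e^{-\langle n \rangle}$ with $\langle n \rangle = \frac{4\pi\rho}{3} r^3$.

```python
import math
import random

random.seed(2)
rho = 0.05      # hypothetical star density (stars per cubic light-year)
r = 1.5
L = 10.0        # side of the simulation box, centered on our star
M = int(rho * L**3)   # 50 stars in the box on average

p_empty = math.exp(-rho * 4 / 3 * math.pi * r**3)   # Poisson P(n = 0)

trials, empty = 20_000, 0
for _ in range(trials):
    hit = False
    for _ in range(M):
        x, y, z = (random.uniform(-L / 2, L / 2) for _ in range(3))
        if x * x + y * y + z * z <= r * r:
            hit = True
            break
    empty += not hit
frac_empty = empty / trials
```

The given star only sets the origin, so it never enters the count: exactly the independence argument in the text.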
Fact 132 (Clearly false) If we go on an airplane, we should bring an explosive to be safe, because the probability of there being two explosives on the plane is almost zero.
The central idea here is independence! If we guarantee that we have one star, the other stars don’t care that the first star is there.
20 March 13, 2019 (Recitation) 20.1 Geometric versus binomial distribution Let’s quickly look at the tunneling problem from the quiz: we can think of having an alpha particle inside a nucleus that eventually escapes.
We know the probability that it does exactly $n$ bounces is $p_n = a^n(1 - a)$.
This is a geometric distribution!
On a related note, if we have $n + 1$ atoms, what is the probability that exactly 1 of them decays? Well, this looks very similar to what we have above, but we get an extra combinatorial factor (because any of the atoms can decay): $P_1 = \binom{n+1}{1} p(1 - p)^n$, and this turns the geometric distribution into a binomial one!
Fact 133 Here’s another example of a geometric distribution: let’s say a patient needs a kidney transplant, but we need to screen donors to see if there is a match. Given a random blood test, the probability of a match is p: then the number of people we need to screen is pn = p(1 −p)n−1.
and we can replace a = 1 −p to get something similar to the tunneling problem.
So the distinction is that you try again and again until success in a geometric distribution, but there’s a finite number of trials in a binomial one.
20.2 Entropy of a probability distribution Let's say we have an average count rate of $\langle n \rangle$ in a Poisson distribution, meaning the variance is $\langle n \rangle$ as well. (Think of the standard deviation as marking the point where a roughly Gaussian distribution falls to about 0.6 of its maximum value.) So what does it mean to have an entropy of a distribution?
Example 134 Consider a coin toss or letters in a book: can we measure the entropy of that somehow?
The idea is to have N random events pulled from our probability distribution: what is the number of bits needed to represent that information on average? It’s kind of like image compression: using much less space to display the same data, but we don’t want any loss of resolution.
In a coin toss, we need a 1 or 0 for each toss, since both events are equally likely. So N random events must come from N bits, and indeed, the Shannon entropy for one event is $S = -\sum_i p_i \log_2 p_i = 1$. Let's go to the extreme: let's assume we have a coin which comes up heads 99 percent of the time. How many bits of information do we need to communicate the random series 0000 · · · 010 · · · 01?
Fact 135 We can just send a number that counts the number of 0s between 1s! So instead of needing about 100 bits to represent each group between 1s, we can use on average ≈log2(100) bits.
More rigorously, let’s say the probability of having a 1 (corresponding to a tail) is small: ε ≈ 1 100. If we have N coin tosses, we will need an expected N · ε differences between the 1s. Each difference is about 1 ε, and we need log2 1 ε 68 bits for each one. So the expected entropy here is Nε log 1 ε = −Nε log ε, while the theoretical Shannon entropy yields S = −N (ε log ε + (1 −ε) log(1 −ε)) and as ε →0, the second term dies out to first order! This confirms the result we had before.
Fact 136 One way to think of this is that the entropy gets smaller (we need fewer bits) as our sequence becomes less random and more predictable.
20.3 Entropy of a Poisson distribution If we think in terms of information, we are looking at a random sequence: we want to think of coding the resulting data. Well, what’s the data that we’re trying to code?
If we set up our system and repeatedly count, we’ll get a series of numbers pulled from the distribution. So our question is how we can code this? How concisely can we represent the stream of random numbers?
Example 137 First of all, let’s say we have a uniform distribution from 91 to 110, so there are 20 equally likely outcomes.
What’s the entropy of this system?
Well, we need $\log_2(20)$ bits on average to represent the data! As a sanity check, if we go to the formula $S = -\sum_i p_i \log p_i$ and we have $W$ equally probable options, then this Shannon entropy is just $S = -W \cdot \frac{1}{W} \log \frac{1}{W} = \log W$.
Fact 138 Ludwig Boltzmann has $S = k_B \ln W$ written on his tombstone. Notice that this is just $k_B \ln 2$ times the quantity we've been thinking about! The multiplicative factor is just a convention to connect different fields, kept for historical reasons.
So back to the probability distributions.
We can perform N measurements, each of which can be one of W possibilities (for example, lottery numbers or pieces of colored paper). How many bits do we need to convey the random series that comes up? Again, we need log2 W bits for each number.
But how would we encode these large numbers? In general, we want to index them: if there are 20 possible numbers, we should send them out as 0, 1, · · · , 19, not as their actual values.
So looking at a Poisson distribution, we care much more about the events that occur more frequently. Looking at the p log p term, as p →0, this number approaches 0. So the improbable wings of the Poisson distribution are not important: we really care about those values within a few standard deviations!
So the number of values that dominate the sequence is around $\sqrt{\langle n \rangle}$ for the Poisson distribution, and thus we estimate the entropy to be $$\log_2\left(c\sqrt{\langle n \rangle}\right) = \log_2 c + \frac12 \log_2 \langle n \rangle,$$ where $c$ is some order-1 number (from 1 to 10). Converting this to statistical mechanics, this gives us a $\frac{1}{2\ln 2}$ coefficient at leading order! Indeed, if we actually plug the Poisson distribution into $S = -\sum_i p_i \log_2 p_i$ (as we do in our homework), the actual number looks like (with $x = \langle n \rangle$) $$S(x) = \frac{1}{2\ln 2}(1 + \ln 2\pi x) = \frac{1}{2\ln 2} \ln x + \frac{1 + \ln 2\pi}{2\ln 2}.$$
So our handwaving gave us the correct result asymptotically by replacing the Poisson with a Gaussian. The constant term corresponds to $c = 2^{(1 + \ln 2\pi)/(2\ln 2)} = \sqrt{2\pi e} \approx 4.1$, so indeed we have our order-1 number.
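We can check this asymptotic directly; a sketch (function names ours) computing the Poisson entropy by brute force in log-space and comparing with the Gaussian formula above:

```python
import math

def poisson_entropy_bits(mean, nmax):
    """Brute-force -sum p log2 p over the Poisson pmf, evaluated via log p."""
    s = 0.0
    for n in range(nmax):
        logp = n * math.log(mean) - mean - math.lgamma(n + 1)  # ln of pmf
        s -= math.exp(logp) * logp / math.log(2)
    return s

def gaussian_entropy_bits(x):
    """(1 / (2 ln 2)) * (1 + ln(2 pi x)): the asymptotic formula from class."""
    return (1 + math.log(2 * math.pi * x)) / (2 * math.log(2))

x = 100.0
exact = poisson_entropy_bits(x, 400)
approx = gaussian_entropy_bits(x)
c = math.sqrt(2 * math.pi * math.e)   # the order-1 constant, ≈ 4.13
```

Using `math.lgamma` avoids overflowing `mean**n` and `n!` for large $n$; at $\langle n \rangle = 100$ the exact and asymptotic entropies agree to a few thousandths of a bit.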
21 March 14, 2019 Happy Pi Day! Read the paper posted on the course website.
We did well on the exam; some of them haven’t been graded and will be done by the end of today. They will be brought to class on Tuesday; we can also email Professor Fakhri. If we feel like our score doesn’t reflect our understanding, also send an email.
There is a problem set due on Monday (instead of Friday) at 7pm. There will be another pset for the Friday before break, but it is short (only 2 problems). Finally, there will be an interesting problem in the problem set after that about statistics of the Supreme Court.
Last time, we introduced Shannon and thermodynamic entropy. Near the end of class, we found that the total entropy of a system is the sum of its independent parts, so entropy is an extensive quantity. Now we’ll relax this assumption and allow heat and work to be exchanged as well, and we’ll see what the new entropy becomes! This allows us to define temperature, and then we can connect those microscopic pictures to the macroscopic world. After that, we’ll talk about reversibility and quasi-equilibrium and how we can compute changes in entropy based on the initial and final state.
21.1 Entropy and thermal equilibrium Recall from last time that if we have independent, non-interacting systems A and B, then SAB = SA + SB.
This time, let’s say that we still have A and B isolated from the outside, but there is a partition between A and B. This means that NA, NB are fixed, and so are VA, VB, but heat can flow between the two systems. Our goal is to somehow define a temperature based on SAB.
We’ll let the system go to equilibrium: at that point, our entropy is maximized. Then if A has some entropy SA and internal energy UA, and B has some entropy SB and internal energy UB, we claim that SAB = SA(UA) + SB(UB) for functions SA and SB that depend on UA and UB, respectively. This is a good assumption even if we have small fluctuations.
Now let's say some infinitesimal heat ¯dQ passes from A to B. Then dUA = −¯dQ and dUB = ¯dQ. But the change in SAB should be zero, since we have a maximum entropy at this point! Expanding out the differential, dSAB = (∂SA/∂UA)|_{VA,NA} dUA + (∂SB/∂UB)|_{VB,NB} dUB.
Now plugging in dSAB = 0 and using the fact that dUA = −dUB, we have at thermal equilibrium that ∂SA ∂UA VA,NA = ∂SB ∂UB VB,NB .
So we want to define a state function that is equal at these two different points (in two systems in thermal equilibrium)!
When we bring two systems together, the temperatures should become equal, which motivates the following definition: Definition 139 Define the temperature T of a system via ∂S ∂U V,N = 1 T .
There’s no constant of proportionality here, because we used kB in our definition of entropy.
Here are some important facts: • ∂S ∂U V,N only applies to systems at thermal equilibrium, but it’s a “hot” area of research to think about non-equilibrium states as well.
• Temperature is transitive: if A and B are at thermal equilibrium, and so are B and C, then A and C are at thermal equilibrium. This is the zeroth law of thermodynamics.
• There are other notions of equilibrium (e.g. mechanical) as well, but for now we’re ignoring ideas like partitions being able to move due to pressure.
21.2 Particles in binary states Let’s put together everything we’ve learned so far with an example!
Example 140 Let’s say we have N particles, each of which can have 2 states (for example, a bit taking on values 0 or 1, or a particle with spins in a magnetic field). One way to represent this is by placing them along a number line and representing each one with an up or down arrow. Spin up gives an energy of ε0 and spin down gives an energy of ε1: let’s say n0 is the number of particles in the lower energy state ε0; without loss of generality we let ε0 = 0.
Similarly, define n1 to be the number of particles in the upper energy state ε1 = ε.
Note that n0 + n1 = N, and we can write this in terms of a fraction f: n1 = fN ⇒ n0 = (1 − f)N.
The total internal energy of this system is ε0 · n0 + ε1 · n1 = f εN.
Let’s compute the entropy of this system. If we can count the number of microstates Γ (also called the multiplicity), all such states should be equally probable, and then we can compute the entropy from there: S = kB ln Γ.
If we have fN particles in the upper energy state and (1 − f)N particles in the lower energy state, then Γ = N! / (n0! n1!).
Now by the Stirling approximation (since N is large), S = kB ln Γ = kB[(N ln N − N) − (n0 ln n0 − n0) − (n1 ln n1 − n1)], and since N = n0 + n1, this simplifies to S = kB(N ln N − n0 ln n0 − n1 ln n1) = kB[n0(ln N − ln n0) + n1(ln N − ln n1)] = −NkB[f ln f + (1 − f) ln(1 − f)].
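As a sanity check on the Stirling-approximation step (my own sketch, not part of the notes), we can compare the exact ln Γ, computed via the log-Gamma function, against the −f ln f − (1 − f) ln(1 − f) form:

```python
import math

def exact_entropy_per_particle(N, f):
    """S/(N kB) computed exactly: ln[N!/(n0! n1!)]/N via log-Gamma."""
    n1 = round(f * N)
    n0 = N - n1
    ln_mult = math.lgamma(N + 1) - math.lgamma(n0 + 1) - math.lgamma(n1 + 1)
    return ln_mult / N

def stirling_entropy_per_particle(f):
    """Stirling-approximation result from the notes: -f ln f - (1-f) ln(1-f)."""
    return -f * math.log(f) - (1 - f) * math.log(1 - f)
```

For N = 10^6 and f = 0.3 the two agree to better than one part in 10^4 per particle, and at f = 1/2 the Stirling form gives exactly ln 2 per particle, as expected.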
Notice that S is indeed extensive: it scales with N, the number of particles that we have. We can also find our temperature: 1/T = (∂S/∂U)|_N = (∂S/∂f)(∂f/∂U) = kB(−N ln f + N ln(1 − f)) · 1/(εN) (here volume is not well-defined and not relevant). Moving terms around, ε/(kBT) = ln(1 − f) − ln f, and this can be read as comparing two kinds of energy: the difference in energy ε between states and kBT, the thermal energy. That's the kind of comparison we'll be doing a lot in this class, since the ratio tells us a lot about the macroscopic quantities of the system! So now defining β = 1/(kBT), we can write (1 − f)/f = e^{βε}.
Then f = n1/N = 1/(1 + e^{βε}) and 1 − f = n0/N = 1/(1 + e^{−βε}), which also means we can rewrite n1 = N e^{−βε}/(1 + e^{−βε}), n0 = N/(1 + e^{−βε}).
So now we can compute our internal energy: U = fεN = Nε/(1 + e^{βε}).
If we plot n0/N and n1/N as functions of kBT = 1/β, the fraction of particles in the upper state increases toward an asymptotic limit. Similarly, U/(εN) approaches 1/2 as kBT → ∞. Finally, let's plot the heat capacity C = (∂U/∂T)|_N: C = (Nε²/(kBT²)) · e^{ε/(kBT)} / (1 + e^{ε/(kBT)})².
We'll often look at the limits T → 0 and T → ∞: as T → 0, C ∝ (1/T²) e^{−ε/(kBT)} → 0, and as T → ∞, C → 0. There will be physical explanations for each of these behaviors as well!
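A quick numerical sketch (my own, working in units where kB = ε = N = 1, an assumption for simplicity) confirms that the closed-form C agrees with a finite-difference derivative of U and has the peaked shape described above:

```python
import math

# Units where kB = 1 and the level spacing eps = 1 (an assumption of this sketch).
kB, eps, N = 1.0, 1.0, 1.0

def U(T):
    """Internal energy of the two-level system: U = N*eps / (1 + e^{eps/(kB T)})."""
    return N * eps / (1.0 + math.exp(eps / (kB * T)))

def C_formula(T):
    """Closed-form heat capacity from the notes."""
    x = eps / (kB * T)
    return (N * eps**2 / (kB * T**2)) * math.exp(x) / (1.0 + math.exp(x))**2

# Finite-difference check of C = dU/dT at an arbitrary temperature
T, h = 0.7, 1e-5
C_numeric = (U(T + h) - U(T - h)) / (2 * h)
```

The heat capacity is exponentially small at low T, falls off like 1/T² at high T, and peaks at an intermediate temperature of order ε/kB.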
Finally, remember that we computed the entropy; changing variables from S = S(U, N) to S = S(T, N): S = −NkB[f ln f + (1 − f) ln(1 − f)]. Using the fact that f = 1/(1 + e^{βε}), we now have S(N, T) = NkB ln(1 + e^{−βε}) + (Nε/T) · 1/(1 + e^{βε}), and notice that we can split this up in a familiar way: S = U(N, T)/T + NkB ln(1 + e^{−βε}).
As T → 0, S → 0, and as T → ∞, S → NkB ln 2. This gives us a good interpretation of information entropy: high temperature gives high uncertainty!
21.3 Back to macroscopic systems Let’s look back at our equation 1 T = ∂S ∂U V .
This is important because S is computed from the number of microstates in our system! So let’s go back to our system with two parts and an immovable thermal partition (so again, we have fixed volume). Let a small amount of heat be transferred from one part to the other.
No work is done, so ¯ dQ = dU, which means that dS = ∂S ∂U V,N dU = 1 T dU.
Here dU is our thermal energy, and since we add a small amount of heat, our systems remain close to thermal equilibrium: thus U and S remain well-defined. Finally, it’s important to note that this can be reversed without any effect on our system, since dU is infinitesimal.
Fact 141 Those three conditions are what dictate a reversible heat transfer: dU = ¯ dQ, so dS = dQrev T . However, this is only an equality when we have reversible quantities: in general, we have the inequality dS ≥¯ dQ T .
This is the first time we write our heat as the product of an extensive and intensive quantity. It’s important that S is a state function, so we can compute the change dS by looking at any path!
Example 142 Consider a system with a partition: the left half has volume V , temperature TA, and pressure PA, and the right half is empty. At time t = 0, we remove the partition, and now we have a new volume 2V , TB, PB.
Since the system is isolated, TB = TA (as no heat is transferred, and an ideal gas's energy only depends on temperature). By the ideal gas law, then, PB = PA/2. This is an irreversible process, so we can't follow a specific path on the PV diagram. But S is a state function, so we can pick any path!
Let's say we go from A to C (reversibly add heat at constant pressure P until volume 2V), and then from C to B (reversibly remove heat at constant volume until the pressure drops to P/2). Then dQrev along the first part of our path is CP(T)dT, and dQrev along the second part is CV(T)dT, so our total change in entropy is SB − SA = ∫_{TA}^{TC} (dT′/T′) CP(T′) − ∫_{TA}^{TC} (dT′/T′) CV(T′), and these integrals can be combined to ∫_{TA}^{TC} (dT′/T′) (CP(T′) − CV(T′)) = NkB ln(TC/TA). Since we have an ideal gas, TC = 2TA, and therefore ∆S = SB − SA = NkB ln 2, as we expect!
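The two-step path can be verified numerically. This sketch (my own, with an arbitrary initial temperature and heat capacities in units of NkB) integrates CP/T and CV/T for a monatomic ideal gas and recovers ΔS/(NkB) = ln 2:

```python
import math

# Monatomic ideal gas heat capacities, in units of N*kB (an assumption of this sketch)
cV, cP = 1.5, 2.5

def integrate(f, a, b, n=100000):
    """Midpoint-rule integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

T = 300.0   # arbitrary initial temperature; the answer is independent of it
TC = 2 * T  # temperature at C, after doubling the volume at constant pressure

# A -> C: isobaric heating; then C -> B: isochoric cooling back down to T.
# dS is Delta S / (N kB); since cP - cV = 1, it should equal ln 2.
dS = integrate(lambda t: cP / t, T, TC) - integrate(lambda t: cV / t, T, TC)
```

Changing the initial temperature leaves dS unchanged, which is exactly the statement that only the ratio TC/TA = 2 matters.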
22 March 18, 2019 (Recitation) 22.1 Questions Let's discuss the entropy of a Poisson ensemble from the problem set. We've been given a Poisson distribution with mean ⟨n⟩: then pn = (⟨n⟩^n / n!) e^{−⟨n⟩}.
By definition, the Shannon entropy is −Σ_n pn log2 pn.
Mathematically, this is pretty straightforward, and we discussed last Wednesday what an entropy of a distribution actually means. For example, if ⟨n⟩≈100, the entropy lets us know how many bits we generally need to represent the random samples.
In particular, if we measure 100, 90, 105, 98, and so on, then we can encode this by taking differences from the mean. We expect σ ≈√n, so we can find a coding that only uses about log2(c√n) bits! Entropy, then, is the number of bits needed for an optimal encoding.
Fact 143 In most cases, the entropy is basically log2 of a few times the standard deviation. This means that we expect S = 1 2 log2 n + log2 c for some constant c.
Next, let’s take a look again at a biased coin: the information-theory entropy is S = −p log2 p −(1 −p) log2(1 −p).
It makes sense that this is maximized at p = 1/2: near 0 and 1, it's easy to predict the outcome of the coin flip. So the information-theoretic limit is the Shannon entropy.
How do we construct a better coding scheme than 1 bit each? We can block our flips into larger groups and encode them one by one. We’ll find that often, we do better than 1 bit each, but not better than the Shannon theoretical limit!
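As a concrete illustration (a hand-built code of my own, not one given in the recitation): for p = 0.9, blocking flips in pairs and using the prefix code HH→0, HT→10, TH→110, TT→111 already beats 1 bit per flip, while staying above the Shannon limit:

```python
import math

def H(p):
    """Shannon entropy (bits per flip) of a coin with P(heads) = p."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

p = 0.9
# Prefix code on blocks of two flips: HH -> 0, HT -> 10, TH -> 110, TT -> 111.
# Average codeword length per block, divided by 2 to get bits per flip.
avg_bits_per_flip = (p * p * 1
                     + p * (1 - p) * 2
                     + (1 - p) * p * 3
                     + (1 - p) * (1 - p) * 3) / 2
```

The blocked code uses about 0.65 bits per flip, better than 1 but still above H(0.9) ≈ 0.47; longer blocks would push it closer to the Shannon limit.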
22.2 Entropy We know that the entropy of a macrostate is S = kB ln Γ, where Γ is the number of microstates. It's a powerful concept, but it can also be confusing.
The key concept is that the entropy of an isolated system must always increase (or stay the same). We can only lose information by not carefully controlling our experiment! The second law of thermodynamics can be formulated in many ways: dS/dt ≥ 0 is one example, but we need to be careful with that.
If we have a particle in a box, the particle can be anywhere, and we can describe the entropy of the particle. (We’ll look at this more in the future.) If the particle is in the left part of the box only, though, its entropy will decrease.
Since the number of microstates is then a factor of 2 smaller, the entropy decreases by kB ln 2.
What if we have a delocalized particle that is then measured? We can "catch" the particle in a box of smaller volume. But this is done through a measurement, and through the process, the observer increases the entropy! So the second law applies to isolated systems, not subsystems.
Let’s say we have a reservoir that is connected to a system, and we cool down the system (for example, if we put an object in a refrigerator). The entropy flows in the same direction as the heat: the object that cools down will lose entropy as well.
Essentially, our question is "what happens to a system when we add energy?" It will have more microstates, unless we have a pathological system. Intuitively, a system (like a harmonic oscillator) has more accessible states at larger energies, because we have a larger sphere of possibilities in phase space. This has to do with the density of states!
Fact 144 So a system that is cold and heats up will gain entropy: entropy flows in the same direction as heat.
But maybe there are 1010 microstates in the reservoir and only 20 microstates in the small system. Let’s say we heat up the small system so it now has 100 microstates, and at the same time, the reservoir reduces its microstates to 5 × 109. Is this possible?
Let's look at the total entropy of the reservoir plus the system! Remember that the second law of thermodynamics applies to complete systems: we should consider S = kB ln Γ, where Γ is the total number of microstates.
Fact 145 So what we really care about is the product of the number of microstates, because we have multiplicity!
Since 20 · 1010 < 100 · 5 × 109, this process is indeed allowed in nature.
Entropy extends beyond energy conservation, though: it also tells us when energetically allowed processes will not happen in nature.
For example, is it possible for us to take a reservoir of heat, extract work, with only the consequence that the temperature cools down? Also, is it possible for two objects at the same temperature T to develop a temperature differential?
No, because these things would decrease the total entropy, violating the second law! Reversibility is an important concept here: free expansion of a gas greatly increases the number of microstates. On the other hand, the time-reversibility of the microscopic dynamics means the actual number of microstates must be constant, so what does it mean for entropy to increase? How can we go from a small number of microstates to a large number?
People have done many studies of various systems and how they behave in time. These are called billiard problems: have particles bouncing off a wall, and if we have a spherical system, it's possible that we may never fill the full region of microstates! On the other hand, more irregular systems may allow much more randomness in phase space. So the important assumption in entropy is that all microstates are equally likely. We don't have particles kept on certain trajectories, mostly due to our inability to measure perfectly. Loss of knowledge when going from highly deterministic systems to complete randomness is the central idea of entropy increase. It is in principle possible for small deviations at the microscopic level to happen (fluctuation), but it's often immeasurably small.
23 March 19, 2019 Class is very quiet - maybe everyone is ready for spring break. Our exams are at the front of the room; the problems and solutions will be uploaded tomorrow, because there are two people still taking the exam.
There is a problem set (2 problems) due on Friday.
23.1 Overview and review As a quick review, we started by reviewing Shannon entropy and thinking of entropy as a thermodynamic quantity connecting the microscopic world with macroscopic problems. Last time, we looked at a two-level system, going from counting states to computing entropy, temperature, and heat capacity. It’s interesting, because it’s the first system where quantum effects manifest themselves in the macroscopic world. One thing to think about is the unimodal heat capacity - there are important quantum effects there!
We'll define some terms that are useful for describing systems, and we're going to keep thinking about changes in entropy in the PV plane. We have all the tools necessary to understand fundamental systems! Next lecture, we'll also look at some more quantum systems.
Fact 146 Recall that a two-level system has N particles, each of which can be in a lower or higher energy level. If we plot n1/N, the fraction of particles in the higher energy state, versus kBT, it increases to 1/2. If we plot U/(εN), where ε is the energy of the higher state, this also saturates at 1/2. Finally, the heat capacity C/(NkB) increases and then decreases.
Remember that we found the entropy of this system: as T →0, S →0 [this will be the third law of thermodynamics], and as T →∞, S →NkB ln 2. Let’s try to justify what we see and physically explain each of these graphs!
• At low temperatures / energies, all particles are in the lower level. But if we add energy to our system, the particles become evenly distributed among the two states.
• This is also why internal energy saturates at εN/2: half of the particles will be in the ε energy state. (We take the ground state to be 0.) • The heat capacity is harder to explain. At high temperatures, we expect entropy to reach its maximum value of NkB ln 2, and we have evenly distributed particles. So changing the temperature a little bit does not change the configuration very much, so the heat capacity is very low. This is a saturation phenomenon! Meanwhile, at very low temperatures, we have to overcome the gap of energy ε to actually change the state of the system.
This gapped behavior is also an important characteristic: we should have a vanishing of order exp[−ε/(kBT)].
• The maximum value of C (the heat capacity) comes from the temperature scale Tε = ε/kB.
This is where we start exciting more and more particles. This is where we have the highest disorder.
By the way, the saturation effect means that we can’t have a system where most particles are at the higher energy level. However, we can prepare our system in a way that gives more particles than we normally have at the high energy level! It’s called a metastable state, and it’s interesting for “negative temperature.” 23.2 A closer look at the First Law Remember the First Law of Thermodynamics, which relates an exact differential to inexact differentials: dU = ¯ dQ + ¯ dW.
We know that if work is done in a quasi-equilibrium process (so that we're pretty close to being in an equilibrium state), then pressure is defined throughout the process, so we can write ¯dW = −PdV. Meanwhile, if heat is added in a quasi-equilibrium process, temperature is defined throughout, so we can write ¯dQ = TdS.
This means we can write the First Law in a bit more detail: dU = TdS −PdV + µdN + σdA + · · · .
Recall that we defined (for a system at thermal equilibrium) ∂S ∂U V,N = 1 T .
This also tells us (∂U/∂V)|_{S,N} = −P (if we plug into dU = TdS − PdV, the TdS term goes away), and similarly (∂S/∂V)|_{U,N} = P/T, which is "volume equilibrium." Remember that we had a fixed partition that only allowed heat transfer between two parts of a system: now, let's see what happens with a movable partition!
23.3 Deriving some important cases Let’s say we have a system with A and B separated by a partition. Now, let’s say that A and B can exchange volume V , but they cannot exchange N, the number of particles, or U, the internal energy.
If A and B are at thermal equilibrium, then the entropy is maximized. What changes when we move the partition a bit? Volume is being exchanged, so dV = dVA = −dVB.
Writing out changes in entropy in terms of partial derivatives like we did last time, since we're at a maximum entropy, (∂SA/∂VA)|_{UA,NA} dVA + (∂SB/∂VB)|_{UB,NB} dVB = 0, and this means we want (∂SA/∂VA)|_{UA,NA} = (∂SB/∂VB)|_{UB,NB}. Plugging in, this means PA/TA = PB/TB ⇒ PA = PB (since we're at thermal equilibrium). So pressures are equal at equilibrium! Similarly, if we have a partition that can exchange only particles but not internal energy or volume, we find that (∂S/∂N)|_{U,V} = −µ/T is constant.
23.4 Entropy’s role in thermodynamic processes Remember that an adiabatic process has no heat transfer (because it moves too quickly, for example): dQ = 0.
Definition 147 An isoentropic process has no change in entropy: ∆S = 0.
These two are not interchangeable! It’s possible to have zero change in heat but a nonzero change in entropy. For example, free expansion is adiabatic if it happens fast enough and is isolated, but the entropy does increase. Let’s also be a bit more specific about some other words we’ve discussed: Definition 148 A quasi-equilibrium process is one where the state is always near thermodynamic equilibrium, so that state variables are defined throughout the process, meaning we can use the first law. A reversible process is a quasi-equilibrium process in which the direction of heat flow and work can be changed by infinitesimal changes in external parameters.
It’s not necessarily true that quasi-equilibrium processes are reversible though. For example, processes with friction have some dissipation of energy, which means that we can’t do them in reverse and get back to our original state.
However, if we do them sufficiently slowly, state functions can be consistently defined.
23.5 Looking at change in entropy in the PV plane Example 149 Consider the isometric heating of a gas: in other words, we hold the volume constant and dV = 0.
This means dU = ¯dQ = TdS ⇒ dS = dU/T. Therefore, (∂U/∂T)|_V = T(∂S/∂T)|_V ⇒ (∂S/∂T)|_V = CV(T)/T ⇒ S(T, V0) − S(T0, V0) = ∫_{T0}^{T} (dT′/T′) CV(T′), where CV(T) is some function of T. Looking at a simple system like the monatomic ideal gas, we have CV = (3/2)NkB, so ∆S = (3/2)NkB ln(T/T0).
Example 150 Now let's look at the isobaric heating of a gas: keep pressure constant and dP = 0.
Remember from our problem set and other discussions that we can define the enthalpy H = U + PV, and in our system here, dH = dU + d(PV ) = TdS −PdV + PdV + V dP = TdS + V dP. Now, because we have an isobaric process, dH = TdS = ⇒∂H ∂T P = T ∂S ∂T P .
Rearranging this, we have that CP (T) = T ∂S ∂T P , so S(T, P) −S(T0, P) = Z T T0 dT ′ T ′ CP (T ′), so again for a monatomic ideal gas, CP = 5 2NkB, and we have ∆S = 5 2NkB ln T T0 .
Example 151 Finally, let’s think about isothermal expansion (from a volume V0 to a final volume V ): dT = 0.
Let’s rewrite dU = TdS −PdV as dS = 1 T (dU + PdV ) = 1 T ∂U ∂T V dT + ∂U ∂V T dV + PdV (since U is a function of V and T), and this simplifies to dS = 1 T CV dT + ∂U ∂V T + P dV .
Since we have an isothermal expansion, the dT term goes away, and thus dS = 1 T P + ∂U ∂V T dV.
To compute this more easily, we’ll expand by volume first: S(T, V ) −S(T, V0) = Z V V0 dV ′ T P + ∂U ∂V T For an ideal gas, U is only a function of T, so ∂U ∂V T = 0, and pressure P = NkBT V , so S(T, V ) −S(T, V0) = NkB ln V V0 .
Fact 152 Draw a cycle using all three of these processes: • Start at pressure P0 and volume V0, and heat it isometrically to pressure P.
• Isothermally expand the gas from volume V0 to V .
• Finally, do an isobaric compression from volume V back to volume V0.
Notice that V V0 = T T0 in an ideal gas, and in this case, the change in entropy over the whole cycle is zero!
So as long as we have a well-defined pressure, temperature, and internal energy at all points in our process, we are at thermal equilibrium, and our entropy is a state function. In general, this means we can use any path to calculate our entropy!
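The cancellation around the cycle can be checked directly. This sketch (my own, in units of NkB, for a monatomic ideal gas) adds up the three entropy changes derived above:

```python
import math

cV, cP = 1.5, 2.5   # monatomic ideal gas, in units of N*kB (assumption of this sketch)
r = 2.0             # common ratio T/T0 = V/V0 = P/P0 for this particular cycle

dS_isometric  = cV * math.log(r)       # heat at constant volume: T0 -> T
dS_isothermal = 1.0 * math.log(r)      # expand at constant T: V0 -> V
dS_isobaric   = cP * math.log(1 / r)   # compress at constant P: T -> T0
total = dS_isometric + dS_isothermal + dS_isobaric  # should vanish
```

The sum vanishes because (3/2 + 1 − 5/2) ln r = 0 for any ratio r, confirming that S is a state function on this cycle.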
24 March 20, 2019 (Recitation) A lot of interesting new material was covered in class.
24.1 Reviewing the two-level system Let’s look at the two-level system, where we have N particles that can either be at low energy (0) or high energy (ε).
The lowest possible energy of the system is 0, and the highest possible is Nε.
We've picked three special states: energies 0, Nε/2, and Nε. Notice that 0 and Nε have minimum entropy, since there is only 1 possible microstate. In general, the number of microstates for having n0 objects in the lower state and n1 in the higher state is Γ(n0, n1) = N! / (n0! n1!). This is the binomial coefficient (N choose n1): we're choosing which n1 of the N particles are excited. This is consistent with the edge cases of 0 and Nε.
Well, what's the entropy when the energy is half of the maximum? We know that if we have N fair coins, we have entropy of N bits, which corresponds to NkB ln 2. So we don't need any mathematics to understand the entropy of the system!
What about the temperature of the system? We can argue that it's zero at the ground state, because we've "taken out all of the energy." We claim that the temperature at the middle state is ∞ and that the temperature at the highest-energy state is negative zero!
How do we argue this? We can plot entropy versus energy: it peaks in the middle and starts and ends at 0. So now 1/T = ∂S/∂U,
so the temperature is infinite at Nε/2, because the slope of the S versus E graph is 0 there. It turns out we have the Boltzmann factor n1/n0 = e^{−(E1−E0)/(kBT)}; the idea is that if the two groups are equally populated and the energy states are different, then the ratio on the left-hand side is 1, so the exponent must be 0, and a finite energy difference means T must be infinite. The idea is that at infinite temperature, energy doesn't matter, because it is free!
Well, notice that the graph has positive slope for E < Nε/2 and negative slope for E > Nε/2. So the temperature is positive at first but then becomes negative!
So let’s start plotting other quantities versus temperature. Let’s plot only in the range from T = 0 to T = ∞: this means we only care about energies between 0 and Nε 2 . If we plot E versus T, we start at 0 and then asymptotically approach Nε 2 .
Fact 153 By the way, note that T, U, E are all parametrized together, so ∂S ∂U keeps N constant, but there’s no other dependencies to worry about.
So what’s the specific heat of the system? Specific heat measures how much energy we need to change the temperature, and as T →∞, the energy is not changing anymore: it’s saturated to Nε 2 . So at high temperatures, the energy saturates, and C →0.
Fact 154 Here, we have a finite number of possible states and a bounded energy system. This is very different from a gas that can have infinite kinetic energy! We’ll also find later that there are usually more energy states for ordinary systems: the spin system is an exception.
What if T is very small? The system wants to be in the ground state, and for very low (infinitesimal) temperature, we only care about the ground state or the first excited state: there's only the possibility of one of the N particles increasing its energy by ε. In the Boltzmann distribution, we have a factor e^{−E/(kBT)}, so the probability of the first excited state is proportional to e^{−ε/(kBT)}. This is something that characterizes any system with a ground state and an excited state with an energy gap of ε!
Then we just use the physics of combinatorics: we'll discuss this as partition functions later, but this exponential factor is universal across all systems like this. So systems at low temperature show exponential behavior, and if we plot energy versus temperature, we'll start off with an exponential behavior. This is what we call gapped behavior.
Example 155 What’s an example of a system without gapped behavior?
In other words, when can the excited state have arbitrarily small energy?
We want to say a classical gas, where the kinetic energy is (1/2)mv²: the velocity v can be arbitrarily small. But not really: remember that a particle in a box has a smallest energy of order ℏ²/(mL²), so we do need to take the classical limit or use a very big box. Then we'll see that the heat capacity does not have that exponential curve near T → 0.
24.2 Negative temperature?
Let's look back at T being negative for high energy (in our system, where E > Nε/2). We've always heard that we can't cool down and get to negative temperature: there's an absolute zero. The idea is that there's a lowest energy state, and there's no way to get less energy than that lowest state.
So negative temperature doesn't mean we can keep cooling to negative Kelvin. In fact, this system tells us that negative temperatures appear in a different way! We increase the energy of the system, and the temperature increases to ∞ and then goes negative. So somehow ∞ temperature and −∞ temperature are very close to each other! This is because we inherently defined our temperature as a slope being equal to 1/T, and often, Boltzmann factors give 1/T as well.
So if we get confused, think about 1/T instead! Plotting energy versus −1/T (so that we start off with positive temperature on the left), we have our energy increasing from 0 to Nε/2 at −1/T = 0, and then it further increases from Nε/2 to Nε! This is a continuous curve, and everything is smooth between positive and negative temperatures.
In other words, the connection between positive and negative temperatures does not happen at 0: it happens at ∞. This is only possible because we have a maximum energy state! If we were to do this with a system like a classical gas, the energy would go to ∞as T →∞. So there wouldn’t be a connection to negative temperature there.
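A small sketch (my own, in units where kB = 1) shows the sign flip directly: solving (1 − f)/f = e^{βε} for the inverse temperature β gives β > 0 below half filling and β < 0 above it:

```python
import math

eps = 1.0  # level spacing, in units where kB = 1 (assumption of this sketch)

def beta_from_fraction(f):
    """Inverse temperature beta = 1/(kB T) from the excited fraction f = n1/N,
    using the two-level relation (1 - f)/f = exp(beta * eps)."""
    return math.log((1 - f) / f) / eps

b_low = beta_from_fraction(0.25)    # E < N*eps/2: positive temperature
b_high = beta_from_fraction(0.75)   # E > N*eps/2: negative temperature
```

Since β passes smoothly through 0 at f = 1/2, the curve of energy versus −1/T is continuous across the positive-to-negative-temperature crossover, as described above.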
25 March 21, 2019 Today’s physics colloquium is about turbulence, which is a very challenging problem. Part of Professor Fakhri’s group is looking at biological signals, and it turns out the dynamics of the signaling molecules in the cell follow a class of turbulence phenomena that is connected to quantum superfluid turbulence.
Remember there’s a problem set due tomorrow, and there will be another one posted soon.
25.1 Overview Today, we’re going to start discussing some general systems where we can do counting of states. The idea is to do thermodynamics and statistical physics in systems like a two-level system! We start by counting states to find multiplicity, and from there we can compute entropy, temperature, and heat capacity.
In particular, we’ll look at a particle in a box, as well as the quantum harmonic oscillator. We’ll introduce an idea called the density of states, which will tell us important information about the internal energy of the system! This is not a lecture on quantum mechanics, but we’ll get some descriptions that are related.
By the way, there is a way to count a density of states in a classical system too, even though the states are continuous! We’ll treat them in a semi-classical way.
25.2 Quantum mechanical systems: the qubit Recall that in classical mechanics, the number of states is infinite: for example, a system of N particles depends on the position and momentum (⃗ xi, ⃗ pi), and these quantities (which define states) are continuous variables.
However, in quantum mechanical systems, states are quantized and finite, so they are countable! We’ll learn how to label and count some simple examples: the qubit, quantum harmonic oscillator, and the quantum particle in a box.
A qubit is a system with only two states: |0⟩ = |+⟩ = |↑⟩ and |1⟩ = |−⟩ = |↓⟩. Here |·⟩ is Dirac notation for a quantum state.
The higher energy state |+⟩has energy E+, and the lower energy state |−⟩has energy E−.
The simplest example of a qubit is a spin-1/2 particle: for example, an electron in a magnetic field. We know (or are being told) that the magnetic moment is ⃗µ = −g (e/(2mc)) ⃗s, where g ≈ 2 for an electron, e is the charge of the electron, m is the mass of the electron, and c is the speed of light.
Then when a particle is placed in a magnetic field ⃗ H, the energy is E = −⃗ µ · ⃗ H = e mc ⃗ s · ⃗ H.
In particular, the different energy states are E+ = (eℏ/(2mc)) H and E− = −(eℏ/(2mc)) H, where ℏ is the reduced Planck constant h/2π ≈ 1.054 × 10⁻³⁴ J·s. There are many systems that behave like this at low energy!
Fact 156 These are interesting derivations, but for those of us who haven’t taken 8.04, we shouldn’t worry too much about it.
In general, if we have a quantum particle with spin j in a magnetic field, the different possible energy states are Em = eℏg 2Mc mH, where m = j, j −1, · · · , −j. This has to do with systems exhibiting paramagnetic properties!
25.3 Quantum harmonic oscillator Let's start by looking at a classical example. We have a harmonic oscillator with spring constant k and mass m, where the potential energy is V(x) = (1/2)kx² = (1/2)mω²x² (the natural frequency is ω = √(k/m)).
This is used to explain vibrations of solids/liquids/gases, as well as other material properties. The energy (or Hamiltonian) of this system can be written classically as H = p²/(2m) + (1/2)kx², and we find that x(t) = A cos(ωt + φ), so the energy of this system is the continuous quantity (1/2)mω²A².
On the other hand, in the quantum version, we can label our quantum states as |n⟩, for n = 0, 1, 2, · · · , and there are only certain allowed energies. They are equally spaced in units of ℏω: En = (n + 1/2)ℏω, where the lowest allowed energy (1/2)ℏω is the zero-point energy. We can also have a bunch of non-interacting particles in the same harmonic oscillator potential: then we just add the individual energies.
Fact 157 The energy of a set of non-interacting particles moving in a harmonic oscillator potential is (for example) E(n1, n2, n3) = ℏω(n1 + n2 + n3 + 3/2).
The states here are denoted as |n1, n2, n3⟩, where each nj = 0, 1, 2, · · · . It’s pretty easy to see how this generalizes.
Well, there's a one-to-one correspondence between a one-dimensional system with N particles and an N-dimensional harmonic oscillator for one particle: the energy states look the same!
EN(n1, · · · , nN) = ℏω(n1 + · · · + nN + N/2).
25.4 Particle in a box This system has applications to an ideal gas in a box, as well as to black-body radiation and Fermi and Bose gases.
Basically, we care about the limit of a large box.
Consider a particle with mass M in a three-dimensional cubic box of length L. Quantum energy states can be labeled |n1, n2, n3⟩by three integers! We can show that including boundary conditions, E(n1, n2, n3) = π2ℏ2 2ML2 (n2 1 + n2 2 + n2 3).
These energy levels are very closely spaced! Let's compute this for an oxygen molecule. If the box measures 1 mm on a side, $E \approx \frac{10 \times 10^{-68}}{2 \times (32 \times 1.67 \times 10^{-27}) \cdot 10^{-6}} \approx 10^{-36}\,\text{J} \approx 6 \times 10^{-18}\,\text{eV}$.
These are very small numbers: compare this to $k_BT$ at room temperature, $k_BT \approx \frac{1}{40}\,\text{eV}$.
So most of the time, we can assume the energy levels are almost smooth (since we often compare energy to kBT in our calculations), which makes our work a lot easier!
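To see just how small the level spacing is, here is a hedged numerical sketch (the constants are standard values; the specific code is mine, not from the lecture) of the particle-in-a-box energy scale for an O2 molecule in a 1 mm box:

```python
import math

hbar = 1.0545718e-34   # J*s
m_p  = 1.6726219e-27   # kg (proton mass; O2 has mass number 32)
kB   = 1.380649e-23    # J/K
eV   = 1.602177e-19    # J

M = 32 * m_p           # approximate mass of an O2 molecule
L = 1e-3               # 1 mm box

# Energy scale of E(n1,n2,n3) = pi^2 hbar^2 / (2 M L^2) * (n1^2 + n2^2 + n3^2):
E_unit = math.pi**2 * hbar**2 / (2 * M * L**2)
print(f"level spacing scale: {E_unit:.2e} J = {E_unit / eV:.2e} eV")

kT_room = kB * 300     # thermal energy at room temperature, about 1/40 eV
print(f"kB*T at 300 K: {kT_room / eV:.4f} eV")
assert E_unit < 1e-12 * kT_room  # the spacing is utterly negligible vs. kB*T
```

The spacing comes out around $10^{-36}$ J, many orders of magnitude below $k_BT$, which is exactly why the smooth-interpolation approximation below is so good.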
25.5 Counting states and finding density

Let's look at our particle in a box.
Fact 158 The math will be more complicated from this point on. We should review theta and delta functions!
Let’s say we want to find the number of energy states N(E) in a box of volume V with energy less than E. Once we find N(E), the number of states, we can differentiate it to find dN dE , which will give the number of states between E and E+dE (this is similar to how we differentiated a cumulative distribution to get a probability distribution function!).
Note that our cumulative distribution N(E) will be stepwise (since we have some finite number of states at each energy level), and we can normalize by measuring energies in units of $\frac{\pi^2\hbar^2}{2ML^2}$. But as we go to higher temperatures, the steps are very small compared to $k_BT$, so we can do a smooth interpolation! The idea is that $\frac{dN}{dE}$ is a sum of delta functions (because N is a bunch of step functions), called a delta comb by the way, but we can approximate it with a smooth curve as well.
So now
$N(E) = \sum_{n_1, n_2, n_3} \theta\left(E - \frac{\pi^2\hbar^2}{2ML^2}\left(n_1^2 + n_2^2 + n_3^2\right)\right),$
since we only count a state if the total energy is at most E. Doing our interpolation, the sum becomes an integral: since $n_1, n_2, n_3$ take on positive values,
$N(E) = \int_0^\infty dn_1 \int_0^\infty dn_2 \int_0^\infty dn_3\, \theta\left(c^2 - n_1^2 - n_2^2 - n_3^2\right),$
where $c^2 = \frac{2ML^2E}{\pi^2\hbar^2}$, just to make the calculations a bit easier. To evaluate this integral, let's let $n_j = c y_j$:
$N(E) = c^3 \int_0^\infty dy_1 \int_0^\infty dy_2 \int_0^\infty dy_3\, \theta\left(1 - y_1^2 - y_2^2 - y_3^2\right).$
Notice that this is one-eighth of the volume of a unit sphere! This is because we want $y_1, y_2, y_3$ to be positive and $y_1^2 + y_2^2 + y_3^2 \le 1$. So the integral is just $\frac{\pi}{6}$, and
$N(E) = \frac{\pi}{6}c^3 = \frac{\pi}{6}\left(\frac{2ML^2E}{\pi^2\hbar^2}\right)^{3/2},$
and since $L^3 = V$, we get a factor of V out as well, and we can simplify further.
For any quantum system, if $E = E(n_1, \dots, n_N)$, then
$N(E) = \sum_{\{n_j\}} \theta\left(E - E(\{n_j\})\right).$
Then we can differentiate:
$\frac{dN}{dE} = \sum_{\{n_j\}} \delta\left(E - E(\{n_j\})\right).$
So in this case, since $N(E) = \frac{\pi}{6}\left(\frac{2M}{\pi^2\hbar^2}\right)^{3/2} V E^{3/2}$, we have $\frac{dN}{dE} = \frac{\pi}{4}\left(\frac{2M}{\pi^2\hbar^2}\right)^{3/2} V E^{1/2}$.
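We can check this interpolation numerically: the following Python sketch (an illustration I'm adding, in the rescaled units where a state is a positive-integer lattice point) compares the exact count of states below energy E with the smooth formula $N(E) = \frac{\pi}{6}c^3$:

```python
import math

def exact_count(c):
    """Exactly count positive-integer triples (n1, n2, n3) with
    n1^2 + n2^2 + n3^2 <= c^2, i.e. states with energy at most E."""
    total = 0
    cmax = int(c)
    for n1 in range(1, cmax + 1):
        for n2 in range(1, cmax + 1):
            r2 = c * c - n1 * n1 - n2 * n2
            if r2 >= 1:
                total += math.isqrt(int(r2))  # number of valid n3 values
    return total

def smooth_count(c):
    """Continuum approximation N(E) = (pi/6) c^3: one octant of a sphere."""
    return math.pi / 6 * c**3

c = 60.0
exact, smooth = exact_count(c), smooth_count(c)
print(exact, round(smooth))
assert abs(exact - smooth) / smooth < 0.05  # agreement to a few percent
```

The residual few-percent difference is a surface correction that shrinks like 1/c, so the interpolation becomes exact in the large-box limit.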
Fact 159. In general, if the energy $E_k$ occurs some $g_k$ number of times, $g_k$ is called a degeneracy factor. Since we want to count all the states, we get a factor of $g_k$ in front of our theta function! This is better explained with examples, though.
Example 160 Let’s count the density of states in a quantum harmonic oscillator.
Then $E_n = \left(n + \frac{1}{2}\right)\hbar\omega$, and the number of states N(E) is a step function that increases by 1 at $\frac{1}{2}\hbar\omega$, $\frac{3}{2}\hbar\omega$, and so on. So
$N(E) = \sum_{n=0}^{\infty} \theta\left(E - \hbar\omega\left(n + \frac{1}{2}\right)\right).$
If $E \gg \hbar\omega$, we can again make this into an integral:
$N(E) = \int_0^\infty \theta\left(\frac{E}{\hbar\omega} - n - \frac{1}{2}\right) dn \approx \frac{E}{\hbar\omega},$
so we have $\frac{dN}{dE} = \frac{1}{\hbar\omega}$.
Next time, we’ll talk about how classical systems can still be counted! This is the idea of a semi-classical description.
26 April 1, 2019 (Recitation)

Today's recitation is being taught by Professor England. Let's go back to discussing the ideas of entropy! Remember that we started off by discussing information entropy, but now we want to start developing entropy more as a thermodynamic and statistical quantity.
26.1 Rewriting the first law

Earlier in the class, we found that we could describe our energy dU as a sum of products of generalized forces and generalized displacements: $dU = \sum_i \frac{\partial U}{\partial x_i}\, dx_i$.
Here, we referred to $T\,dS$ as "heat" $\bar{d}Q$, and $P\,dV$ (as well as other terms) as "work" $\bar{d}W$. But our $dS = \frac{\bar{d}Q}{T}$ could be interpreted differently: if we define $\Omega$, as a function of state variables $U, V, N, \dots$, to be the number of different microstates consistent with our data, we could also define $S = k_B \ln \Omega$.
There’s a lot of plausibility arguments that can support our theory working this way! For example, two objects in contact have maximum entropy when they have the same temperature, and indeed we want entropy to increase as time evolves.
So let’s run with this, and let’s see if we can rewrite the first law in a more sensible way. Since S is proportional to ln Ω, we have dS = kBd(ln Ω): plugging this into the first law, d(ln Ω) = dU kBT + PdV kBT −µdN kBT · · · This gives us another way to think of how to “count” our states! We can ask questions like “how does a flow of heat contribute to our entropy?” or “how does entropy change if I increase the volume?” So the right hand side is a bunch of levers that we can pull: now we can see how our statistical quantity changes when I adjust my other terms!
26.2 Applications to other calculations

We can go back to our ideal gas model, and we'll see that the calculations fall out pretty easily!
Let’s say we have N particles in a box of volume V , and we want to think about the number of microstates we can have.
Fact 161 Scaling is very useful; we don’t have to be too exact with our calculations.
Microstates are dictated by two variables: the position and the momentum of the individual particles. It's true that if we tried to count the number of discrete "float numbers" that would work, we'd have infinitely many possibilities for $\vec{x}$. But there's still a sense in which having twice as much volume gives twice as many possible positions, so the number of position states here is basically proportional to V! Similarly, momentum, which is assigned independently of position, contributes some function a(U) of the internal energy. Since we have N particles, the number of microstates is proportional to $\Omega = (cV \cdot a(U))^N$.
Fact 162. One other way to patch this up is with the uncertainty principle, $dp\,dx \sim \hbar$: we do need discrete states in that case.
To get to entropy from here, we take a logarithm and multiply by $k_B$ (which is just a units conversion factor): $k_B \ln \Omega = Nk_B(\ln V + \ln a(U) + c)$. Notice that the constant term c is harmless, because we only care about differentials and differences in the first law! For example, pressure is really just
$\frac{P}{k_BT} = \left(\frac{\partial \ln \Omega}{\partial V}\right)_{U,N,\dots}.$
Let’s try plugging that into our expression for ln Ω: this yields P kBT = N V , which rearranges to the ideal gas as desired!
Fact 163 We can always take a derivative (in principle) fixing all the other variables. It might be experimentally difficult to do this, though.
So another way to say this is that the ideal gas law is just related to the scaling of the number of states with volume.
But there’s something wrong with the entropy we’re using: we wanted the change in entropy to be an extensive quantity. U is extensive: putting two copies of the system next to each other doubles the energy. Since U and ln Ω play similar roles in the first law, we want them to both be extensive. But ln Ω= N ln V , but doubling the size of the system doubles both N and V , and this doesn’t give exact extensivity. What’s going on here?
Fact 164 This is called Gibbs’ paradox! The idea is that we treated our particles as different: if particles 1 and 2 are on the boundary and 3 is in the middle, this is different from 1 and 3 on the boundary and 2 in the middle. But there’s really no way for us to be able to distinguish our particles!
So it seems that there are N! permutations, and we want to divide through by N!. But this isn't actually an exact answer! Remember that we have quantized states, so we're not overcounting particle states when many particles occupy the same state. So dividing by N! artificially penalizes configurations with particles close together; we'll return to those ideas later.
But we’ll deal with dilute gases, since we’re doing ideal gas calculations anyway, and then it is exact to say that Ω= (V · a(U))N N!
.
With Stirling’s approximation, this gives ln Ω= N ln V −N ln N + N = N ln V N + N.
Now this is an extensive quantity!
$\frac{V}{N}$ is an intensive quantity: it's related to the density, and now scaling the system by a factor of 2 does scale $\ln \Omega$ by a factor of 2 as well.
Example 165. If we have two independent identical systems, they have the same number of possible microstates $\Omega$, so the total system has $\Omega^2$ possible states: this does yield twice the entropy. On the other hand, if we put the two systems together, what happens to the entropy? Doesn't it increase again?
This is again an issue of Gibbs’ paradox! The particles in the two systems are indistinguishable, so we should really think a bit more about those N!-type terms. Keep this in mind when we start thinking about quantum systems (particularly Bose-Einstein condensates), though.
26.3 Relating this to temperature

Let's think a bit more about our ideal gas system. The internal energy of this system is $U = \sum_{i=1}^{3N} \frac{p_i^2}{2m}$, since each particle has 3 degrees of freedom. So if we have some given energy U, how many possible arrangements $\Omega(U, V, N)$ are there? Basically, how does $\Omega$ scale with U?
Let’s do the cheap thing: we have a bunch of independent coordinates pi whose squares add to some constant, which means that the momentum coordinates in phase space are on a hypersphere of radius √ 2mU. Really, we care about scaling: how does the size of a sphereical shell change with volume? Because the volume of the whole sphere goes as r 3N, the boundary goes as r 3N−1, but we can ignore the 1 if N is really big. So that means Ω(U, V, N) ∼V N N!
√ 2mU 3N ∼V N N! U3N/2.
Going back to our first-law equation $d(\ln \Omega) = \frac{dU}{k_BT} + \cdots$, let's differentiate with respect to U. Since $\ln \Omega$ contains the term $\frac{3N}{2}\ln U$, this yields
$\left(\frac{\partial \ln \Omega}{\partial U}\right)_{V,N} = \frac{1}{k_BT} = \frac{3N}{2U} \implies U = \frac{3Nk_BT}{2}.$
This gives us the temperature definition out of our calculation of microstates as well!
26.4 Increasing entropy

Finally, let's take another look at the second law of thermodynamics. This law is initially encountered empirically: given a process that we design and take a system through, we can (for example) draw a loop in our PV diagram. In these cases, the total entropy will never decrease: ∆S of the environment plus ∆S of our system cannot get smaller.
So this is a very empirical law, but our new description gives us a way to think about it more clearly! (By the way, it's important to note that this is always an average law.) So suppose we have a surface of states in U, V, N space, and we follow our system by doing dynamics with Hamilton's equations. What does it really mean for entropy to increase? There are two notions here: one is to count the number of states on our surface and take the logarithm, but that doesn't really tell us anything in this case.
Instead, we can think of some quantity R: maybe this is "the number of particles on the left side" or "some measure of correlation with a 3-dimensional figure." The point is that we can assign a value of R to all microstates, and now we can group our microstates together by values of R: this is called "coarse-graining." So what's Ω(R)? Much like with Ω, it's the number of microstates consistent with some value of R, and we can define an "entropy" $k_B \ln \Omega(R)$ along with it. The concept is that R will eventually stabilize to some quantity - for example, particles are not likely to unfill a region.
That means S(R) will increase generally - it may fluctuate, just as R does, but S is an average quantity. One way to think of this is that as we move in our state space, we’ll generally get stuck in the biggest volumes of R: it’s much more likely that we’ll end up there on average! Analogously, two systems at thermal equilibrium can go out of equilibrium, but they’re likely to return back into equilibrium (because it is thermodynamically favorable to do so). So entropy can decrease, but that is usually nothing more than a statistical fluke.
27 April 2, 2019

The energy level of the class is higher after the break! There's a problem set due this Friday; we should make sure we understand the material from the required reading.
There will be an exam on April 17th, which is similar to last exam. We’ll start hearing announcements about it soon.
Recall that we started by introducing information and thermodynamic entropy. We found that counting the number of microstates consistent with a macrostate gives us a measure of entropy as well, and then we could calculate the temperature, heat capacity, and many equations of state. This then led us to introducing a few different systems, where we could actually count states.
We discussed a two-level qubit, particle in a box, and quantum harmonic oscillator, because these all have discrete states. Today, we’ll talk about how to extend this to classical systems and give them this kind of treatment as well.
The idea is to count states to find the density of states as well, which gives us a probability distribution.
27.1 Returning to the particle in a box

As we calculated last time, the density of states depends on the energy E of the system: $\frac{dN}{dE} = \frac{1}{4\pi^2\hbar^3}(2M)^{3/2}\, V E^{1/2}$.
Our goal is to find a connection here to a classical system: let's say we have some volume dV that is infinitesimal on the classical scale but large enough to contain many quantum states. Then for our box, $dV = dx\,dy\,dz = d^3x$ (for simplicity). If we rewrite E in terms of momentum as $\frac{p^2}{2M}$, then $dE = \frac{p\,dp}{M}$; substituting this into our equation above, we can pull out a factor of $4\pi$:
$dN = \frac{4\pi}{8\pi^3\hbar^3}\, p^2\,dp\,d^3x.$
Now notice that the $4\pi p^2\,dp$ looks a lot like spherical coordinates: $d^3p = p^2\,dp\,\sin\theta\,d\theta\,d\varphi$, and integrating out $\theta$ and $\varphi$ gives the result we want. So that means (because $\hbar = \frac{h}{2\pi}$)
$dN = \frac{1}{h^3}\, d^3x\,d^3p.$
This means that dx and dp, which are classical degrees of freedom, tell us something about our number of states dN, just with a normalization factor $\frac{1}{h^3}$! So looking back at $\frac{dN}{dE} = \frac{1}{4\pi^2\hbar^3}(2M)^{3/2}\, V E^{1/2}$, we can think of $h^3$ as a "volume" that contains one state. This means that we can go from classical systems to semi-classical descriptions: our "volume" in phase space is $h^3 = \Delta x\,\Delta p_x\,\Delta y\,\Delta p_y\,\Delta z\,\Delta p_z$. In other words, we can think of each volume $h^3$ of phase space as containing 1 state.
Example 166 Let’s go back to the density of states calculation for a quantum harmonic oscillator, but we’ll start with a 1-dimensional classical harmonic oscillator and do this trick - let’s see if we end up with the same result.
The number of states with an energy less than E is
$N(E) = \int \frac{dx\,dp}{2\pi\hbar}\, \theta\left(E - \frac{p^2}{2m} - \frac{1}{2}kx^2\right)$
(where $h = 2\pi\hbar$ is the normalization factor, as before). Writing $\frac{1}{2}kx^2 = \xi^2$ and $\frac{p^2}{2m} = \eta^2$, our number of states can be written as
$N(E) = \frac{1}{\pi\hbar\omega} \int d\eta\,d\xi\, \theta\left(E - \eta^2 - \xi^2\right),$
and since the integral is the area of a circle with radius $\sqrt{E}$, this is just
$N(E) = \frac{\pi E}{\pi\hbar\omega} = \frac{E}{\hbar\omega},$
which is the same result that we got with the quantum harmonic oscillator! Then we can find $\frac{dN}{dE}$, which will give the number of states with energy between E and E + dE. We can think of our $\frac{dx\,dp}{2\pi\hbar}$ idea as a "volume normalization."

27.2 A useful application

Now, we're going to use this counting-states argument to look at the thermodynamics of an ideal gas. Remember that we could count our states in our two-level system if we were given the energy of our overall system, so let's try to think about a microcanonical ensemble: a system that is mechanically and adiabatically isolated, with a fixed energy.
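The phase-space counting above can be checked by brute force. Here is an illustrative Monte Carlo sketch (the units $m = k = \hbar = 1$ are my choice, not the lecture's) that estimates the allowed phase-space area and recovers $N(E) = E/\hbar\omega$:

```python
import random, math

random.seed(0)

# Semi-classical state count for the 1D harmonic oscillator:
# N(E) = Integral dx dp / (2*pi*hbar) * theta(E - p^2/(2m) - k*x^2/2).
# Estimate the area of the allowed phase-space region by Monte Carlo,
# in units where m = k = hbar = 1 (so omega = 1), and compare to E/(hbar*omega).
m = k = hbar = 1.0
omega = math.sqrt(k / m)
E = 1.0

n_samples, hits = 200_000, 0
box = 2.0  # the allowed region fits inside |x|, |p| <= 2 when E = 1
for _ in range(n_samples):
    x = random.uniform(-box, box)
    p = random.uniform(-box, box)
    if p * p / (2 * m) + 0.5 * k * x * x <= E:
        hits += 1

area = hits / n_samples * (2 * box) ** 2  # phase-space area of the ellipse
N_E = area / (2 * math.pi * hbar)         # divide by h = 2*pi*hbar per state
print(N_E, E / (hbar * omega))
assert abs(N_E - E / (hbar * omega)) < 0.05
```

The allowed region is an ellipse of area $2\pi E/\omega$, so dividing by $2\pi\hbar$ gives exactly $E/\hbar\omega$, matching the quantum count.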
We’ll discuss Sackur-Tetrode entropy along the way as well.
First of all, our goal is to find the multiplicity: how many states Γ(U, V, N) are consistent with a given U, V, N?
Once we know this, we'll write our entropy $S(U) = k_B \ln \Gamma$ (at thermal equilibrium), and then we can calculate our temperature with the familiar equation $\left(\frac{\partial S}{\partial U}\right)_{N,V} = \frac{1}{T}$ to find our energy U in terms of V, T, N (and by extension the heat capacity). Finally, we'll find the equation of state by taking some derivatives.
Let’s use the model where we have N molecules of a monatomic gas in a box of side length L. Then the energy 90 of our system (as a particle in a box) is E(nx, ny, nz) = π2ℏ2 2ML2 (n2 x + n2 y + n2 z).
To count the number of states, let's assign a vector $\vec{n}_j$ to each particle, where $1 \le j \le N$. Every particle contributes to the state, but since the particles are indistinguishable, we overcount by a factor of N!. (We'll return to this idea later.) So the multiplicity is
$\Gamma(U, V, N) = \frac{1}{N!} \sum_{\vec{n}_1} \sum_{\vec{n}_2} \cdots \sum_{\vec{n}_N} \delta\left(\frac{2ML^2U}{\pi^2\hbar^2} - \sum_j |\vec{n}_j|^2\right),$
since we want to pick out only the states with a certain energy. We'll find that we have very closely spaced states, so we can approximate each sum as an integral from 0 to ∞. Let's then replace each integral with half of the integral from −∞ to ∞ by symmetry: since each of the N particles has 3 degrees of freedom, there are 3N integrals, and we now have an integral over all degrees of freedom
$\Gamma(U, V, N) = \frac{1}{2^{3N}N!} \int d^3n_1 \cdots d^3n_N\, \delta\left(R^2 - \sum_{j=1}^{N} |\vec{n}_j|^2\right),$
where $R^2 = \frac{2ML^2U}{\pi^2\hbar^2}$. So we now have 3N variables (basically, we want the surface area of a 3N-dimensional sphere of radius R): let's replace our $\vec{n}_j$s with the 3N variables $\xi_1, \xi_2, \dots, \xi_{3N}$. Our integral is now
$\Gamma = \frac{1}{2^{3N}N!} \int d^{3N}\xi\, \delta\left(R^2 - \vec{\xi}^2\right),$
and rescaling by letting $\xi_j = R z_j$, this becomes
$\Gamma = \frac{R^{3N-2}}{2^{3N}N!} \int d^{3N}z\, \delta\left(\vec{z}^2 - 1\right).$
(The 3N − 2 comes from $\delta(cx) = \frac{1}{c}\delta(x)$.) Since N is very large, we can approximate 3N − 2 as 3N, and so now we want to deal with the integral
$\int d^{3N}z\, \delta\left(\vec{z}^2 - 1\right) = \frac{1}{2} \int d^{3N}z\, \delta\left(|\vec{z}| - 1\right).$
The remaining integral is the surface area of a 3N-dimensional unit sphere,
$S_{3N} = 3N\, \frac{\pi^{3N/2}}{\left(\frac{3N}{2}\right)!},$
so we can plug that in: this yields
$\Gamma = \frac{R^{3N}}{2^{3N+1}N!}\, S_{3N},$
and we're done with our first step!
27.3 Calculating the entropy of the system

Now, let's start calculating entropy and our other relevant quantities. Since $S = k_B \ln \Gamma$, we can plug in the values we have: we'll drop the stray factor of 2 (negligible at large N), and by Stirling's approximation,
$S = k_B\left[3N \ln \frac{R}{2} - N \ln N + N + \frac{3N}{2}\ln \pi - \frac{3N}{2}\ln \frac{3N}{2} + \frac{3N}{2}\right].$
We'll factor out an N and combine some other terms as well:
$S = Nk_B\left[3 \ln \frac{R}{2} - \ln N + \frac{3}{2}\ln \pi - \frac{3}{2}\ln \frac{3N}{2} + \frac{5}{2}\right].$
We'll now substitute in our value of $R = \frac{\sqrt{2ML^2U}}{\pi\hbar}$:
$S = Nk_B\left[\frac{3}{2}\ln U + 3\ln L + \frac{3}{2}\ln \frac{M}{2\pi^2\hbar^2} + \frac{3}{2}\ln \pi - \frac{5}{2}\ln N - \frac{3}{2}\ln \frac{3}{2} + \frac{5}{2}\right].$
We'll now combine terms: since $3\ln L = \ln L^3 = \ln V$,
$S = k_BN\left[\frac{3}{2}\ln \frac{U}{N} + \ln \frac{V}{N} + \frac{3}{2}\ln \frac{M}{3\pi\hbar^2} + \frac{5}{2}\right],$
and now we've found our entropy in terms of the variables we care about! This is called the Sackur-Tetrode entropy.
27.4 A better analysis: looking at temperature

So let's try to think about the consequences of having all of the different parts here. Imagine we take $\hbar \to 0$ in our entropy: then S goes to infinity, and we know that this isn't supposed to happen. The idea, then, is that there is some length scale where quantum effects become important!
Also, entropy is an extensive quantity: we have the N in front of our other terms, and that's why the N! correction term was important for us to include. Notice that if we double our volume, $\Delta S = Nk_B \ln 2$ (exercise), so this is consistent with what we want!
So now let’s calculate temperature: ∂S ∂U V,N = 1 T = 3NkB 2U = ⇒U = 3 2NkBT which is the same result we had before: this is consistent with the equipartition theorem! Finally, since TdS = PdV +c (where c is other terms that are not relevant to S and V ), P T = ∂S ∂V U,N = NkB V = ⇒PV = NkBT, which is the equation of state for an ideal gas! Similarly, we know that −µdN = TdS + c′ (where c′ is other terms that aren’t relevant to N and S), −µ T = S N −5 2kB = ⇒5 2NkBT = TS + µN, and this can be rewritten as 3 2NkBT + NkBT = U + PV = TS + µN , which is known as Euler’s equation.
27.5 Microcanonical ensembles

Definition 167. A microcanonical ensemble is a mechanically and adiabatically isolated system, so the total energy of the system is specified.

This means that in phase space, all possible microstates (that is, all members of the ensemble) must be located on the surface $H(\mu) = E$. At thermal equilibrium, all states are equally likely, so the probability of any given state is
$P_E(\mu) = \begin{cases} \frac{1}{\Gamma(E)} & H(\mu) = E \\ 0 & \text{otherwise.} \end{cases}$
Next class, we’ll comment a bit more about uncertainty in this surface (to create volumes)!
28 April 3, 2019 (Recitation)

28.1 The diffusion equation

Let's start with a concept from the problem set. We've always been discussing thermodynamic entropy as $S = -k_B \sum p_i \ln p_i = k_B \ln \Gamma$ (where $\Gamma$ is the number of microstates) if all states are equally likely.
But now when we look at the diffusion equation, we're given a slightly different expression for a gas in one dimension: $S = -k_B \int \rho(x, t) \ln \rho(x, t)\, dx$.
What’s the connection between the two? If we take some small volume ∆V , then the probability that we find a molecule in that volume at position x is p(x) = ρ(x)∆V.
So this equation is just the probability distribution entropy, ignoring the normalization constant!
Next question: let's look at the diffusion equation more carefully: $\frac{\partial \rho}{\partial t} = D \frac{\partial^2 \rho}{\partial x^2}$.
Can we argue from general principles why diffusion always increases the entropy of the gas? If we spread our probability over more possibilities, then entropy goes up: the larger $\Gamma$ is, the more microstates we have, which means all the individual probabilities are small.
Well, diffusion makes our peaks in ρ go down! So the probability distribution is getting wider and flatter, and this is a lot like making all of our probabilities go to $\frac{1}{n}$: we're moving toward "equilibrium."

28.2 Black holes

There's a lot of physics going on here! We have a relation between relativistic mass and energy from special relativity, $E = Mc^2$, and we have the Schwarzschild radius from general relativity, $R_s = \frac{2GM}{c^2}$. Black holes are a good place to do research, because they are a system where we can make new discoveries: there are big open questions about combining quantum physics with general relativity.
93 Here’s one interesting idea: if there’s nothing to say about a black hole other than its angular momentum and mass, then matter with entropy sucked into that black hole has disappeared: doesn’t this mean entropy has decreased?
We can never combine two microstates into one, so what’s going on here?
Turns out black holes aren't completely black! They are actually at a "temperature" $T_H = \frac{\hbar c^3}{8\pi G k_B M}$.
This is called “Hawking radiation,” and this is a way for black holes to communicate with the outside world (using electromagnetic radiation). This means that we can actually have an entropy for black holes, and the contradiction is gone!
Fact 168 (Sort of handwavy). This is a bit past standard explanations, but the idea is that the Schwarzschild radius is an "infinite potential": no particles can get past the event horizon. But adding quantum fluctuations (due to quantum field theory "spontaneously" producing particle-antiparticle pairs), we can create virtual photons at the boundary: one with positive energy can escape, and another with negative energy gets sucked into the black hole.
So we can think of this as having a black-body radiation spectrum, with a peak set by the temperature: $\hbar\omega = k_BT_{BB}$.
So now we can integrate to find the entropy of a black hole: we find that it is related to the surface area of a sphere with the Schwarzschild radius! This is relevant to understanding complicated materials: for example, the entropy of most systems (like an ideal gas) scales with volume. Black holes tell us a different story: entropy doesn't necessarily scale with volume. After all, $T_H \propto \frac{1}{M}$, so the derivative $\frac{\partial S}{\partial E} = \frac{1}{T_H}$ is proportional to E. So S is proportional to $E^2$, which is proportional to $R_s^2$!
So the fact is that we have a complicated system for which only the surface matters: this scales much more slowly than volume. Thus, describing superconductors and other materials is motivated by this discussion of a black hole! This is called the "holographic principle."

28.3 Negative temperatures

Let's start by talking about heat transfer: it's about energy conservation and increase in entropy. Usually, we have systems coupled to a reservoir: those reservoirs have a lot of energy, but we don't want to go against what nature wants! So then a "deal" is made: if the reservoir gives an energy $\Delta E = T\Delta S$, then the entropy of the reservoir decreases by $\Delta S_R = \frac{\Delta E}{T}$. So the system just needs to make sure that the entropy it creates is larger, $|\Delta S_S| \ge |\Delta S_R|$, and then all the energy in the world can be transferred. This has to do with free energy, which we'll talk about later!
So let’s say that we transfer the energy from our reservoir R to our system S, and this just heats up our system S. Then the increase in entropy of the system is ∆Ss = ∆E TS : then if we want total entropy to increase, clearly this only happens if TS ≤TR. So it’s only thermodynamically favorable to transfer heat ∆Q from higher temperature to lower temperature. But what happens when negative temperatures are involved?
Remember the graph of S versus E that we drew for the two-level system: the temperature is $\frac{1}{\text{slope}}$. The slope goes from positive to 0 and then from 0 to negative values, so $\frac{1}{T}$ passes smoothly through zero rather than hitting a singularity at infinite temperature! So now we want $\frac{1}{T_S} > \frac{1}{T_R}$: regardless of whether we have two systems at positive temperature or two systems at negative temperature, they will want to move "towards" each other. But if we look at the total entropy, the equation we care about is that heat is transferred from R to S if and only if $\frac{1}{T_R} < \frac{1}{T_S}$.
But now the question: if we bring a system at negative temperature in contact with one at positive temperature, what will happen?
$\frac{1}{T}$ is continuous in this case, and two systems always want to equilibrate! What is going on here is that systems at negative temperature gain entropy when they lose energy! The reservoir pays for both the energy and the entropy, so it will willingly transfer energy ∆E: then the entropies of both the reservoir and the system go up.
So what happens is that if a system with negative temperature is in contact with an ideal gas with some positive temperature, then the gas will always gain more energy!
29 April 4, 2019

This is a reminder that the problem set is due tomorrow!
29.1 Brief overview

Last class, we used our knowledge of quantum systems (namely the particle in a box) to determine the thermodynamics of an ideal gas. The idea is that the internal energy of the system is fixed, so we can count the number of possible states: then, because all microstates are equally likely at thermal equilibrium, we can just use the formula $S = k_B \ln \Gamma$ to find the entropy. From there, we could find the temperature of the system, as well as the general equation of state.
This then led us to the definition of a microcanonical ensemble.
Today, we’re going to go back to how thermodynamic started: using heat engines! We’ll see how to extract work from those engines, and we’ll find a bound on the efficiency on such engines. From there, we’ll start looking beyond having “isolated systems:” we’ll see how to do statistical physics with just parts of systems, which will lead us to the idea of free energy.
By the way, the question of N! in our calculations from last week is an important idea: this will be a problem on our next problem set.
29.2 The third law of thermodynamics

We've talked quite a bit about entropy as information and also in terms of heat transfer, but let's start looking beyond just those ideas!

Theorem 169 (Third law of thermodynamics). The following are equivalent statements:
• As temperature T → 0, the entropy of a perfect crystal goes to 0.
• As temperature T →0, the heat capacity C(T) →0.
• It is impossible to cool a system to T = 0 in any finite number of steps.
Let’s go through each of these concepts one by one. The idea with the first statement is that a perfect crystal should always be at the ground state (in quantum mechanics) when all thermal energy is taken away (T = 0), so there should only be Γ = 1 microstate! Thus, S = kb ln Γ = 0. However, systems do have imperfections: for example, some atoms in a piece of glass may not be in this perfect crystal shape. So it will take a very long time for the system to “relax:” this is a kind of residual entropy, and there are interesting ways to deal with this.
Thinking about the second statement, let’s write entropy in terms of the heat capacity.
Lemma 170 Entropy for a given pressure and temperature is S(T, P) = S0 + Z T 0 dT T CP (T).
where S0 is some “residual entropy” mentioned above.
Proof. We start from the first law and formula for enthalpy: dU = TdS −PdV = ⇒dH = TdS + V dP.
At constant pressure, ∂H ∂T P = T ∂S ∂T P = CP (T), where the third term disappears due to dP = 0, and now integrating both sides of ∂S ∂T P = CP (T ) T yields the result above.
But now if we take T →0, the integral will diverge unless CP (T) goes to zero! So we must have CP (T) →0.
Finally, let’s think about the third statement: how can we think about cooling a system in this way?
29.3 Engines and efficiency Definition 171 A heat engine is a machine that executes a closed path in the (phase) space of thermodynamic states by absorbing heat and doing work.
Since internal energy is a state function, a closed loop will not change the internal energy of our system! We can think of this alternatively as assigning a U to each point in our phase space. This can be written as 0 = I dU = I TdS − I PdV.
96 Note that H PdV > 0 = ⇒our engine does work, and H TdS > 0 = ⇒heat is added to the system. If we draw a clockwise loop in PV -space, the engine will do work on the surroundings, and we can make an analogous statement about TS-space.
So our goal is to construct such a closed loop and find the efficiency: how much work do we get for the amount of heat we are putting in?
Definition 172 A Carnot engine is a special engine where all heat is absorbed at a fixed temperature T + and expelled at a fixed T −.
This is an ideal engine: in other words, we are assuming that this is a reversible process and that all heat exchange only takes place between a source T + and sink T −. We can think of this as having our system between a source and sink, absorbing some heat Q+, expelling some heat Q−, and doing some work W.
By the first law, since ∆U = 0, Q+ = Q−+ W.
If we have a reversible transfer of energy between the system and the source, then the change in entropy S+ = Q+ T + .
(If our system is not ideal, we may have additional entropy gain, since we lose additional knowledge about our system.
So in general, we’d have S+ ≥Q+ T + .) Similarly, we also have a reversible transfer of energy between the source and sink, so the entropy expelled to the environment is S− = Q− T −.
(In an irreversible process, some of the entropy remains as residual entropy inside our engine, so generally we have S−≤Q− T −.
Definition 173. Define the efficiency of an engine to be
$\eta = \frac{W}{Q_+} = \frac{Q_+ - Q_-}{Q_+} = 1 - \frac{Q_-}{Q_+},$
the ratio of the work we get out of the engine to the heat that we put in.
We know that over our cycle, the entropy of the system is also a state function. So ∆S = 0, and $S_+ = S_-$, which gives
$\frac{Q_+}{T_+} \le \frac{Q_-}{T_-} \implies \frac{Q_-}{Q_+} \ge \frac{T_-}{T_+}.$
Plugging this in to our definition, we have the following:

Proposition 174 (Carnot's bound on efficiency of a heat engine). The maximum efficiency of a Carnot engine between a source at temperature $T_+$ and a sink at temperature $T_-$ is $\eta = 1 - \frac{T_-}{T_+}$.

Example 175. The maximum efficiency of an engine between 327° Celsius and 27° Celsius is $\frac{600 - 300}{600} = \frac{1}{2}$ (convert to kelvin first).
This Carnot efficiency is an absolute thermodynamic bound, and it’s saturated when all heat is added at some temperature T + and expelled at some temperature T −.
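The conversion-to-kelvin step is where this example usually goes wrong, so here is a tiny helper (my own sketch) encoding $\eta = 1 - \frac{T_-}{T_+}$:

```python
def carnot_efficiency(T_hot_K, T_cold_K):
    """Maximum (Carnot) efficiency eta = 1 - T-/T+; temperatures in kelvin."""
    if not (0 < T_cold_K <= T_hot_K):
        raise ValueError("need 0 < T_cold <= T_hot, both in kelvin")
    return 1 - T_cold_K / T_hot_K

# The example above: 327 C and 27 C, i.e. 600 K and 300 K.
eta = carnot_efficiency(327 + 273, 27 + 273)
print(eta)  # 0.5
assert eta == 0.5
```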
If we plot this process in the TS plane, we trace out a rectangle by doing the following steps: • Isothermally compress the gas at temperature $T_-$: this requires expelling heat $Q_-$ and doing an equal amount of work.
• Put in work to adiabatically compress our gas from T −to T +.
• Isothermally expand the gas at temperature T +: this requires putting in Q+ heat and expelling that amount of work.
• Let the gas adiabatically expand back to temperature T −(and get work out of it).
In the PV plane, the first and third steps (for an ideal gas) follow the isotherms PV = c, and the second and fourth steps follow PV γ = c. The total work done here is the area enclosed by our curve in the PV plane! For an ideal gas, we have the following: If we say that our steps lead us between states 1, 2, 3, 4, then we can make the following table: Step Process Work by Heat 1 →2 isothermal compression −nRT ln V1 V2 −nRT −ln V1 V2 2 →3 adiabatic compression not needed 0 3 →4 isothermal expansion nRT + ln V4 V3 nRT + ln V4 V3 4 →1 adiabatic expansion not needed 0 where the “not needed” steps will cancel out.
So the total heat added is Q+ = nRT+ ln(V4/V3), while the work done is Q+ − Q− = nRT+ ln(V4/V3) − nRT− ln(V1/V2).
We can show that V4/V3 = V1/V2 by using the fact that PV^γ = c along the adiabatic compression and expansion, and therefore η = W/Q+ = (T+ − T−)/T+ indeed saturates the Carnot efficiency!
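The bookkeeping above can be verified numerically. The sketch below (with arbitrary volumes and temperatures of our choosing) fixes V3 and V4 from the adiabatic condition T·V^(γ−1) = const and checks that the two logarithms agree and that η = 1 − T−/T+:

```python
import math

# Numerical check of the Carnot cycle for a monatomic ideal gas:
# states 1, 2 sit on the T- isotherm, states 3, 4 on the T+ isotherm,
# and the adiabats obey T * V**(gamma - 1) = const.
n, R, gamma = 1.0, 8.314, 5 / 3
T_hot, T_cold = 600.0, 300.0
V1, V2 = 2.0e-3, 1.0e-3                  # isothermal compression at T-

scale = (T_cold / T_hot) ** (1 / (gamma - 1))
V3, V4 = V2 * scale, V1 * scale          # fixed by the two adiabats

Q_in = n * R * T_hot * math.log(V4 / V3)    # heat absorbed at T+
Q_out = n * R * T_cold * math.log(V1 / V2)  # heat expelled at T-
eta = (Q_in - Q_out) / Q_in

print(V4 / V3, V1 / V2)                  # equal, so the logarithms match
print(eta, 1 - T_cold / T_hot)           # both equal 1 - T-/T+ = 0.5
```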
Does anything change when we deal with small systems and larger fluctuations? The question of "whether we can beat the Carnot efficiency" at small scales is still an active research question today!
Next session, we will introduce an interesting engine called the Stirling engine, which extracts more work per cycle than a Carnot engine operating between the same temperatures!
30 April 8, 2019 (Recitation) This recitation was taught by Professor England.
We often consider a situation where a piston pushes on a gas. One assumption that is usually made is that we have a "quasi-equilibrium" state: the system is compressed slowly, so that we're always close to an equilibrium state (for example, on our PV diagram).
In this scenario, we do actually use exact differentials in a way that can be measured. Then if we apply some force F over an infinitesimal distance dx, we have our first law dU = đW + đQ = −P dV + T dS.
But in real life, we usually push our piston down quickly, and the compression is done rapidly. Now our question is: what’s the amount of work we do if our gas isn’t in equilibrium? (Basically, our system is not given enough time to relax.) Example 176 Let’s say our gas is placed in a bath at temperature T.
When we compress an ideal gas very slowly, dU = 0 (since the temperature stays constant, and U depends only on temperature). This means đQ = −đW ⇒ P dV = T dS.
On the other hand, what’s the change in entropy for this process? The entropy change over the universe is ∆Suniverse = ∆Ssystem + ∆Sbath.
In a quasi-static process, ∆S_universe = 0: we have a reversible process, because the entropy change at constant temperature is ∆S = đQ/T, and all heat transferred out of the system is transferred into the bath at the same temperature. So the total change in a quasi-static process is ∆S = đQ/T − đQ/T = 0.
Fact 177 Also, we can use the fact that the entropy of an ideal gas contains a term proportional to −N ln ρ, where ρ is the density of the gas.
Indeed, this process is reversible: if we push the piston back out slowly enough, the heat will flow back into the system, and we’ll again have equal and opposite changes in entropy.
But now let’s say we push the piston the same amount, but we do it fast enough that heat doesn’t transfer as smoothly: for example, we can imagine doing an adiabatic process, and then (once that’s finished) let the heat transfer with the bath happen. This is significantly different: in an isothermal process, PV is constant, while in an adiabatic constant, PV γ is constant: these yield different amounts of work for the same compression ∆V , because (mathematically) the integral of PdV is different or (physically) adiabatic compression means the particles will push back more, so the work we have to do is larger! So W > Z isothermal PdV ; and this argument works even when the process isn’t adiabatic. Ultimately, we are putting in some extra energy, and 99 that energy will eventually exit into the heat bath, since U = 3 2NkBT only depends on the temperature of our gas.
Ultimately, then, the change in entropy of our system (the ideal gas) doesn't depend on how we reach the final state: it will be ∆S_system = −kB ∆(N ln ρ), which is the same as what we had before in the slow case. But the difference comes from the change ∆S in the environment: because the fast work we did is larger than the isothermal work, extra heat is pushed into the heat bath!
So ∆Q/T, the entropy change in the surrounding bath, is greater than the original ∫ đQ/T of the isothermal case, which means our process is now irreversible: ∆S > 0.
Fact 178 So in summary, a fast process means we do work that is larger than the slow work we needed to do, which increases heat flow and therefore generates entropy. That's where the second law of thermodynamics is coming from: ∆S_fast > 0.
But it would also be nice to understand this in a microscopic scale as well. Let’s say we have a bunch of particles in our ideal gas that are flying around in this case: there will be times where (by chance) there’s some vacuum near the piston, so we can get lucky and compress the gas without doing any work (as fast as we want)! So we’ve then decreased the entropy of the gas without doing any work at all, and that seems to violate the second law of thermodynamics: how do we deal with this?
This gets us into a more current topic: let's say we have our system and we follow some path from the initial to final state (for example, we have some fast compression h(t) which dictates the height of our piston). Now we can define a state function F = U − TS, the Helmholtz free energy. We know that dU = −P dV + µ dN + · · · + T dS, so taking differentials of the free energy, the T dS terms cancel by the product rule: dF = −S dT + đW. So at constant temperature, dF = đW: holding the temperature constant gives a total change in free energy equal to the work that we've done! This is the fixed-temperature analogue of the first law for quasi-static processes.
To deal with the second law violation, we have to start thinking about the statistical properties of the microstates.
A slow process always averages over all possibilities: a bunch of states get visited. But a really fast piston catches the system in some particular microstate, which explains why our work is statistically varying. So we get some distribution p(W), and this leads us to a fact: Proposition 179 (Jarzynski, 1997) We have ∫ dW p(W) e^(−W/(kBT)) = e^(−∆F/(kBT)). We'll be able to prove this later on! What does this tell us? The right-hand side is a change in a state function, and the left-hand side measures our statistical fluctuations: this is a constraint on the shape of our distribution. For example, p(W) = δ(W − ∆F) in a quasi-static process, and indeed this checks out with our integral.
This can alternatively be written as a statement about expectations: ⟨e^(−W/(kBT))⟩ = e^(−∆F/(kBT)).
So now we can define a quantity Wd = W − ∆F which tells us how much work is dissipated. A quasi-static process has W = ∆F, but in general we can have some difference based on statistical variance. Dividing our statement above by e^(−∆F/(kBT)), we have ⟨e^(−(W − ∆F)/(kBT))⟩ = ⟨e^(−Wd/(kBT))⟩ = 1.
Now recall that the second law is about doing extra work that we didn't need to! Dividing our ∆F definition through by T, ∆F/T = ∆U/T − ∆S, and we can think of the right side as −∆S_env − ∆S_sys = −∆S_total. By convexity (specifically, Jensen's inequality), ⟨e^x⟩ ≥ e^⟨x⟩, so plugging that into ⟨e^(−Wd/(kBT))⟩ = 1, we get ⟨Wd⟩ ≥ 0: we always expect a nonnegative amount of dissipated work!
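As a numerical sanity check of Proposition 179 (not from the notes): for a Gaussian work distribution, the Jarzynski equality forces ∆F = ⟨W⟩ − σ²/(2kBT), which a quick Monte Carlo can verify, in units where kBT = 1:

```python
import math
import random

random.seed(0)
kT = 1.0                            # work measured in units of k_B T
mean_W, sigma = 2.0, 1.0            # assumed Gaussian work distribution
dF = mean_W - sigma**2 / (2 * kT)   # value Jarzynski predicts for a Gaussian

samples = [random.gauss(mean_W, sigma) for _ in range(200_000)]

# <exp(-W/kT)> should equal exp(-dF/kT); equivalently <exp(-Wd/kT)> = 1.
lhs = sum(math.exp(-w / kT) for w in samples) / len(samples)
mean_Wd = sum(samples) / len(samples) - dF   # dissipated work, sigma^2/(2kT)

print(lhs, math.exp(-dF / kT))   # close to each other
print(mean_Wd)                   # nonnegative, consistent with <Wd> >= 0
```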
31 April 9, 2019 Some announcements about the exam: it will take place next Thursday, April 18. It’ll cover material after quiz 1 -it won’t have any problems on probability theory, but we will need to understand those concepts to do (for example) problems on a microcanonical ensemble!
Again, previous exams will be posted online, and 3 of the 4 problems will be from previous years’ exams. There will be an optional review session as well.
Notice that there won’t be any classes next week: Tuesday has no classes, and Thursday will be the exam.
31.1 A quick overview Last time, we talked about extracting work from heat engines, and we found ways to combine different processes in the PV plane to make a cycle and extract work. We found a theoretical bound, called the Carnot efficiency, and we constructed an example that saturates that efficiency.
Fact 180 Note that it’s not possible to actually achieve this efficiency in real life because of dissipation and other processes!
Now we’re going to try to introduce some other cycles as well, such as the Stirling engine. We’ll also discuss statistical physics for systems that are actually connected to the outside world using free energy!
31.2 A new heat engine Recall that in a Carnot cycle, we trace a clockwise cycle in our PV plane bounded by curves of the form PV = c and PV^γ = c. We derived that the efficiency here is the theoretical bound η = W/Q+ = (T+ − T−)/T+.
But if we replace our adiabatic processes with isochoric (constant-volume) ones, we can do more work in a single cycle!
This is called a Stirling engine. In the TS diagram, we now no longer trace out a rectangle, since we no longer have adiabatic steps that keep entropy constant. Instead, for a monatomic ideal gas, the entropy is S = NkB ((3/2) ln T + · · · ).
So in the TS-plane, our process is now bounded by horizontal lines (at constant temperature) and exponential curves (T ∝ exp(2S/(3NkB))). So our question now is how we construct such an engine. Again, let's say our process takes us between states 1, 2, 3, 4.
Step | Process | Work by engine | Heat added
1→2 | isothermal compression | −nRT− ln(V1/V2) | −nRT− ln(V1/V2)
2→3 | isochoric heating | 0 | CV(T+ − T−)
3→4 | isothermal expansion | nRT+ ln(V1/V2) | nRT+ ln(V1/V2)
4→1 | isochoric cooling | 0 | −CV(T+ − T−)

So the efficiency here is η = W/Q+ = nR(T+ − T−) ln(V1/V2) / (nRT+ ln(V1/V2) + CV(T+ − T−)).
So the idea here is that the CV (T + −T −) is what limits us from reaching the Carnot efficiency - how do we get around this?
Proposition 181 Consider a chamber with two pistons: the left piston is for “expansion” and the right piston is for “compression.” Keep the portion of the system to the left of the chamber at temperature T +, and keep the portion of the system to the right at temperature T −. In between, we have an ideal gas (equivalently a fluid), but put a material with high heat capacity in the middle as well (called the regenerator).
So that amount of heat can be reused: it is stored in the regenerator and returned in the next cycle! This effectively eliminates the CV(T+ − T−) term, and we do indeed reach the Carnot efficiency (T+ − T−)/T+ as desired.
How much work do we get for each cycle? We can find that W_Stirling/W_Carnot = 1/(1 − ln(T+/T−)/((γ − 1) ln r)), where r is the common ratio of volumes. Plugging in T+/T− = 2, r = 10, γ = 1.4, we find that a Stirling engine gives about 4 times as much work as a Carnot engine for the same ratio of volumes and temperatures!
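Plugging the quoted numbers into this work-ratio formula (the function name is ours):

```python
import math

# Work-per-cycle ratio from the notes:
# W_Stirling / W_Carnot = 1 / (1 - ln(T+/T-) / ((gamma - 1) * ln r)),
# where r is the common volume ratio of the two cycles.
def work_ratio(t_ratio, r, gamma):
    return 1 / (1 - math.log(t_ratio) / ((gamma - 1) * math.log(r)))

print(work_ratio(2, 10, 1.4))   # ~4, the factor claimed in the notes
```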
31.3 Brief aside about the third law Remember that one of our statements of the third law was that it is impossible to cool any system to absolute zero in a finite amount of time. One example of this is switching between two different pressures: low to high with an adiabatic process and high to low with an isothermal process. This can be done using nuclear magnetization!
But both pressure curves end at (0, 0): that's the main point of the third law. If we try to do any process, following some curve in the TS-plane, we will never actually be able to reach the origin. However, we've gotten pretty good at getting close to absolute zero: ask Professor Ketterle about this!
31.4 Free energy So as a basic summary of what we’ve been talking about: we introduced S, our entropy, as a function of the number of microstates. We’ve made some statements about general properties of S, but we’re going to start looking at smaller parts of the system (and treat the rest as being a heat bath with constant temperature). So now our system exchanges energy, volume, and particles with an outside world in thermal equilibrium, so we have our temperature fixed (rather than the total internal energy).
There’s many different notions of free energy: they will be used in different constraints. Remember that in an isolated system, we need ∆S ≥0 to have a spontaneous change of an isolated system (though this isn’t necessary sufficient). Let’s now imagine that our system is in contact with a heat bath at a fixed T: now the total entropy change is (at constant temperature) ∆Stotal = ∆Ssystem + ∆Sbath.
If our system does no work (because it is at a fixed volume), ∆S_bath = ∆Q/T = −∆U_system/T, and we can plug this in to find ∆S_total = ∆S_system − ∆U_system/T. This means that we have a new necessary condition: Fact 182 We must have ∆S_sys − ∆U_sys/T ≥ 0 for a spontaneous change of our system in contact with a heat bath.
This motivates us to make a new definition: Definition 183 Define the Helmholtz free energy F of a system to be the state function F = U −TS.
We now require ∆F ≤ 0 for a spontaneous change in a system at thermal equilibrium with fixed V and T. It might seem that because S is a function of U, V, N in our system, F should be a function of the four variables T, V, N, U.
But the free energy is not actually a function of the internal energy! Indeed, F = U_s − T_b S_s(U_s) ⇒ ∂F/∂U_s = 1 − T_b (∂S_s/∂U_s) = 1 − T_b/T_s, and at thermal equilibrium (which is the only situation where we use F to represent the system), T_b = T_s. So the derivative is 0, and U_s is not an independent variable here!
Fact 184 This is an example of a Legendre transformation, which helps us change from one type of energy or free energy to another by using a different set of independent variables. For example, we go from (U, V, N) to (T, V, N).
As another example, we can change our variables in another way: consider a system at constant temperature and pressure (this happens a lot in biology and chemistry). Then recall that dU = đQ − P dV ⇒ dH = đQ + V dP ⇒ ∆Q = ∆H at constant pressure, and now we can play the same trick! The heat emitted into the environment at constant pressure is −∆H, so the total change in entropy is ∆S_total = ∆S_system + ∆S_bath = ∆S_system − ∆H/T, and by the second law this quantity must be at least zero.
Definition 185 This motivates the definition of the Gibbs free energy G = H − TS, and for a spontaneous change in our system, we must have ∆G ≤ 0: free energy needs to decrease!
It’s important to note that this wasn’t all just to introduce a new state function: here’s a preview of what’s coming.
In some systems, we have a fixed energy and can calculate the temperature of that system: in a microcanonical ensemble, we consider all possible microstates that are consistent with it. Then we can take some subset of our system, where the temperature T is still fixed (though the energy is not), and that’s what a canonical ensemble deals with!
31.5 Natural variables and Maxwell relations The idea we've been developing in this class so far is to go from various constraints on our system to some mathematical relationship between thermodynamic variables. For example, the first law tells us that if we define our internal energy U in terms of S and V, then dU = T dS − P dV ⇒ (∂U/∂S)_V = T, (∂U/∂V)_S = −P.
Because we have a state function U, the mixed second partial derivatives must agree: ∂/∂V (∂U/∂S)_V = ∂/∂S (∂U/∂V)_S, and that gives us the relation (∂T/∂V)_S = −(∂P/∂S)_V.
This applies to other state functions we have too - we can apply the same logic with our free energy definitions as well! The idea is that free energies are often easier to measure experimentally under certain conditions. For example, since H = U + PV, dH = T dS + V dP, so writing H as a function of S and P gives (∂H/∂S)_P = T, (∂H/∂P)_S = V, and equating mixed partial derivatives yields (∂T/∂P)_S = (∂V/∂S)_P.
Finally, since F = U − TS, dF = dU − T dS − S dT, which is also dF = −P dV − S dT. So this time, it's natural to write F as a function of V and T. Now (∂F/∂T)_V = −S, (∂F/∂V)_T = −P ⇒ (∂S/∂V)_T = (∂P/∂T)_V.
These partial derivative equations often relate a quantity that is easy to measure with something that is generally less experimental in nature! We’ll talk more about these concepts next time.
32 April 10, 2019 (Recitation) 32.1 Exchanging energy Let’s start with the Helmholtz free energy F = E −TS.
It’s easy to say what energy or momentum “does,” but what about free energy? One way to think about this is the “balance” between energy and entropy. Any system that interacts with a reservoir at temperature T (for example, room temperature in the world around us) cares about “free energy” to consider favorability of a reaction.
The main idea is basically that the second law must hold! If we create an entropy ∆S, then we can gain an energy of E = T∆S from the reservoir. Another way to think about this is connected to the problem set: if we have a system with (non-normalized) occupation numbers na ∝e−βEa, notice that all Boltzmann factors are equal if we have infinite temperature, and that means all states are equally populated! On the other hand, with zero temperature, only the ground state is occupied, because any higher state has exponentially smaller occupation numbers.
32.2 Looking at the Sackur-Tetrode entropy Recall the equation that was derived by these two researchers from Germany back in 1913: S(U, V, N) = kB N ( ln(V/N) + (3/2) ln(U/N) + (3/2) ln(4πm/(3h²)) + 5/2 ). Note that the constant terms are just prefactors. Interestingly, though, Sackur got 3/2 instead of 5/2: this comes from the approximation ln n! ≈ n(ln n − 1), and he didn't include the −1 in his approximation.
But does this really matter? In almost all calculations, we ignore the entropy and only consider ∆S. But what would we actually get wrong?
Notice that in the correct form of the Sackur-Tetrode entropy, the entropy S goes to 0 as the temperature T goes to 0. It turns out we can measure absolute entropy via S(T) = ∫₀^T đQ/T = ∫₀^T cP(T′) dT′/T′.
(You also have to add latent heat to go from solid to liquid state, and so on. This can be written as a delta function, but the details aren’t that important.) Well, some researchers followed mercury across different temperatures: knowing CP , they integrated from 0, and eventually mercury became an ideal gas at high enough temperature! So through measurements, they found agreement with the theoretical answer. (They did assume 5 2.) Fact 186 This allowed them to find an experimental value of h, Planck’s constant, and they did this to 1 percent precision!
It’s pretty amazing that we can determine this by measuring heat - this is really adding “entropy.” But if we replace 5 2 with 3 2 and try to get that experimental result for h, we actually change it by a factor of e−1/3 ≈0.72. So that would give you an incorrect experimental result by 30 percent! This is why absolute entropy does occasionally matter.
32.3 Multiple ground states In quantum mechanics, we have multiple energy levels: it’s possible that (for example at zero magnetic field) we have two low-energy states. Then we have a special symmetry in our system!
For example, if we have N distinguishable particles and each can be in 2 degenerate states, we have an extra contribution S0 to the entropy. This was actually mentioned in Sackur-Tetrode: mercury has multiple isotopes, so we have to be careful. There’s other ways quasi-degeneracy could come up as well.
32.4 Fluctuations in a microcanonical ensemble Let’s try to think about a system with two subsystems, but instead of exchanging energy, let’s think about exchanging particles. If the system is divided into two symmetric parts A and B, and we have NA + NB = N particles, we expect there to generally be N 2 particles in both halves.
The total number of microstates ΓAB is then multiplicative, because we essentially pick a microstate from both A and B. But then taking logarithms, the entropy SAB is now additive!
But let’s think a bit more about the number fluctuation. Is it true that the number of microstates for the whole system, is ΓN,A+B = Γ N 2 ,A · Γ N 2 ,B?
Not quite! Some arrangements of the system A + B don't actually have the number of particles or the energy split exactly: there are fluctuations! So we basically have to sum over all possible numbers of particles in system A: Γ_{N,A+B} = Σ_k Γ_{k,A} Γ_{N−k,B}.
But in principle, the extreme values of k are not likely - they contribute very little to the entropy. Specifically, the distribution of probabilities NA is centered around N 2 , and now we have a sharp distribution that’s basically Gaussian with standard deviation ∝ √ N.
That means that allowing fluctuations, the sum over k has an effective "width" of √N around k = N/2: this gives us the approximate answer Γ_{N,A+B} ≈ √N · Γ_{N/2,A} · Γ_{N/2,B}.
But this kind of fluctuation is very small: if we have, for example, 10^26 particles, the relative width is 10^−13: this is unmeasurably small! Now note that our counting of microstates is not quite multiplicative with this approximation: we have an entropy S ≈ S_{N/2,A} + S_{N/2,B} + c ln N.
So we should always be careful! The first two terms are extensive quantities, while the last term is not. Luckily, that is almost negligible compared to the other terms.
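We can see this √N factor directly for N distinguishable particles that each choose a side, where the sum over splittings reduces to binomial coefficients (this concrete model is our illustration, not from the notes):

```python
import math

# The full count over all splittings, Gamma = sum_k C(N, k) = 2**N, exceeds
# the even-split count C(N, N/2) only by a factor ~ sqrt(pi*N/2), matching
# the "width sqrt(N)" claim; its logarithm adds only ~ (1/2) ln N to S.
for N in (100, 1_000, 10_000):
    ratio = 2**N / math.comb(N, N // 2)
    print(N, ratio, math.sqrt(math.pi * N / 2))   # last two columns agree
```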
33 April 11, 2019 33.1 Overview and review This class is about studying the connection between microscopic and macroscopic descriptions of a system. What we’ve been doing recently is imposing various constraints on our system: for example, setting a fixed energy U gives us a microcanonical ensemble, and we’ll find that setting a fixed temperature T will give us what is called a canonical ensemble.
Last time, we started discussing more relationships between our thermodynamic variables. If we have the first law condition dU = T dS − P dV, then we can define our energy U in terms of S and V: because U is a state function, equating the mixed partial derivatives with respect to S and V gives (∂U/∂S)_V = T, (∂U/∂V)_S = −P ⇒ (∂T/∂V)_S = −(∂P/∂S)_V.
With the same kind of argument, we can also derive relations from the enthalpy H = U + PV: this yields (∂H/∂S)_P = T, (∂H/∂P)_S = V ⇒ (∂T/∂P)_S = (∂V/∂S)_P.
With the Helmholtz free energy F = U − TS, (∂F/∂T)_V = −S, (∂F/∂V)_T = −P ⇒ (∂S/∂V)_T = (∂P/∂T)_V, and finally with the Gibbs free energy G = F + PV, (∂G/∂T)_P = −S, (∂G/∂P)_T = V ⇒ (∂S/∂P)_T = −(∂V/∂T)_P.
These four relations between our state variables are known as the Maxwell relations: we'll soon see why they're important. For example, if we want to know the change in entropy per volume at constant temperature, there's really no way to measure that quantity directly, since it's really hard to measure entropy. On the other hand, we can measure (∂P/∂T)_V a lot more easily: just change our temperature and measure the pressure inside some fixed volume!
Fact 187 Don't memorize these equations: we can always derive these from the first law directly.
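A finite-difference check of the relation (∂S/∂V)_T = (∂P/∂T)_V for a monatomic ideal gas, using the Sackur-Tetrode form of S up to an additive constant (the units with NkB = 1 are our choice):

```python
import math

# Maxwell relation (dS/dV)_T = (dP/dT)_V checked numerically for a
# monatomic ideal gas; only derivatives matter, so constants are dropped.
NkB = 1.0

def S(T, V):                  # Sackur-Tetrode, up to an additive constant
    return NkB * (math.log(V) + 1.5 * math.log(T))

def P(T, V):                  # ideal-gas equation of state
    return NkB * T / V

T0, V0, h = 300.0, 2.0, 1e-6
dS_dV = (S(T0, V0 + h) - S(T0, V0 - h)) / (2 * h)   # (dS/dV)_T
dP_dT = (P(T0 + h, V0) - P(T0 - h, V0)) / (2 * h)   # (dP/dT)_V
print(dS_dV, dP_dT)           # both equal NkB / V = 0.5
```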
More generally, we can make a table of intensive variables X and their extensive conjugate variables Y:

X | Y
−P | V
σ | A
H | M
F | L
E | P
µ | N

In general, we can always write down our internal energy U as a function of S and Y, H as a function of S and X, F as a function of T and Y, and G as a function of T and X! We've mostly only been focusing on a three-dimensional gas, which is why we've been using P and V, but we could replace this with other pairs of variables.
33.2 An application of Maxwell's relations Let's try to compute the heat capacity of an arbitrary material: in general, the formula CV = (∂U/∂T)_V measures how much the internal energy of a system depends on its temperature. For an ideal gas, we know that internal energy only depends on temperature, but we may want to measure other heat capacities as well.
Example 188 What is a good way to find (∂U/∂V)_T?
Let’s do this systematically so that it’s easy to understand how to do related problems! Start with the first law, dU = TdS −PdV.
This means that (∂U/∂V)_S = −P. Let's write U as a function of T and V, so dU = (∂U/∂T)_V dT + (∂U/∂V)_T dV.
This includes the term we want! Going back to the first law, writing S as a function of T and V as well, (∂U/∂T)_V dT + (∂U/∂V)_T dV = dU = T dS − P dV = T ((∂S/∂T)_V dT + (∂S/∂V)_T dV) − P dV. Now all differentials are in terms of dV and dT, so we can gather terms of dT and dV and match coefficients to find (∂U/∂V)_T = T (∂S/∂V)_T − P.
If we want to make this a measurable quantity, though, we should replace the (∂S/∂V)_T term: this can be done with the Maxwell relations! Since (∂S/∂V)_T = (∂P/∂T)_V, we just have (∂U/∂V)_T = T (∂P/∂T)_V − P.
So if we just have the equation of state, we have enough to find the quantity we desired!
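We can check this result against the van der Waals equation of state by finite differences (the constants a and b below are arbitrary illustrative values, not measured data):

```python
# Using the result (dU/dV)_T = T (dP/dT)_V - P with the van der Waals
# equation of state P = nRT/(V - n*b) - a*n**2/V**2: the right-hand side
# reduces to a*n**2/V**2, so a real gas stores potential energy on
# expansion even at constant temperature.
R = 8.314
a, b, n = 0.14, 3.9e-5, 1.0       # illustrative vdW constants

def P(T, V):
    return n * R * T / (V - n * b) - a * n**2 / V**2

T0, V0, h = 300.0, 1e-3, 1e-4
dP_dT = (P(T0 + h, V0) - P(T0 - h, V0)) / (2 * h)   # (dP/dT)_V
dU_dV = T0 * dP_dT - P(T0, V0)
print(dU_dV, a * n**2 / V0**2)    # both ~ a*n^2/V^2 = 1.4e5 Pa
```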
Example 189 What is the V-dependence of CV; that is, what is (∂CV/∂V)_T?
Recall that CV is itself a partial derivative! Specifically, (∂CV/∂V)_T = ∂/∂V ((∂U/∂T)_V)_T = ∂/∂T ((∂U/∂V)_T)_V.
Since we just found the inner quantity, we can plug that in: (∂CV/∂V)_T = ∂/∂T (T (∂P/∂T)_V − P)_V = T (∂²P/∂T²)_V.
Example 190 Can we find a relationship in general between CP and CV?
Remember that we defined a quantity H = U + PV to help with this kind of problem: (∂H/∂T)_P = CP = (∂U/∂T)_P + P (∂V/∂T)_P.
(This came from expanding out the derivatives on both sides.) Expanding out the first term on the right-hand side, (∂U/∂T)_P = (∂U/∂T)_V + (∂U/∂V)_T (∂V/∂T)_P. Note that (∂U/∂T)_V = CV, and we also computed earlier that (∂U/∂V)_T = T (∂P/∂T)_V − P, so plugging everything in, CP = CV + ((∂U/∂V)_T + P)(∂V/∂T)_P ⇒ CP = CV + T (∂P/∂T)_V (∂V/∂T)_P.
We defined response functions earlier in the class: we had variables like the expansivity that are tabulated for many materials! So if we're doing a problem in that realm, we can rewrite our equation in terms of coefficients like thermal expansion: α = (1/V)(∂V/∂T)_P ⇒ CP = CV + TVα (∂P/∂T)_V.
So now can we say anything about (∂P/∂T)_V? Note that we have the partial derivative identity (∂P/∂T)_V (∂T/∂V)_P (∂V/∂P)_T = −1 ⇒ (∂P/∂T)_V = −(∂V/∂T)_P / (∂V/∂P)_T, where β = −(1/V)(∂V/∂P)_T is the coefficient of isothermal compressibility. This lets us just write down (∂P/∂T)_V = α/β: plugging this in, we get the relation CP = CV + TVα²/β.
This is basically a game of thinking like an experimental physicist: we know what’s easy to measure, so we just try to write our derivatives in terms of those quantities!
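For an ideal gas this relation reduces to the familiar CP = CV + nR, which we can confirm numerically (the parameter values below are arbitrary):

```python
# For an ideal gas, alpha = 1/T and beta = 1/P, so T*V*alpha**2/beta
# collapses to P*V/T = nR: the general relation CP = CV + T*V*alpha^2/beta
# reproduces CP = CV + nR.
R = 8.314
n, T, V = 1.0, 300.0, 0.02
P = n * R * T / V
alpha = 1 / T            # (1/V)(dV/dT)_P for an ideal gas
beta = 1 / P             # -(1/V)(dV/dP)_T for an ideal gas
print(T * V * alpha**2 / beta, n * R)   # both equal nR = 8.314
```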
33.3 The Joule-Thomson effect This phenomenon is also known as the throttling process!
Example 191 What is the temperature change of a real gas when it is forced through a valve, if the container is isolated from the environment (so no heat exchange)?
In a real gas, there is actually interaction between the particles, so the temperature will actually change. Specifically, we then have a potential energy associated with our system as well!
To describe our system, imagine two containers A and B connected by a small valve in the middle: let's say there is a pressure P0 in the left container A and a pressure P1 in the right container B. We force some volume V0 through the valve from P0 to P1: then the work done by the piston for container A is WA = P0V0, and the work done by the piston for container B is WB = −P1V1,
where V1 is the volume the gas takes up in the new container. Since there is no heat exchange, by the first law, U1 − U0 = WA + WB = P0V0 − P1V1 ⇒ U0 + P0V0 = U1 + P1V1.
This means that this is a constant enthalpy process! So does the gas that is pushed through get cooler or warmer?
We can define the Joule-Thomson coefficient µJT = (∂T/∂P)_H.
Since we have expansion, dP < 0. This means that µJT < 0 ⇒ dT > 0, so the gas warms up, and µJT > 0 ⇒ dT < 0, so the gas cools. This actually has to do with liquefying gases at room temperature!
So how can we say things about µJT? Starting from the enthalpy equation, if H is written as a function of T and P, since dH = 0, dH = (∂H/∂T)_P dT + (∂H/∂P)_T dP ⇒ µJT = −(∂H/∂P)_T / (∂H/∂T)_P = −(1/CP)(∂H/∂P)_T.
This (∂H/∂P)_T term is analogous to the (∂U/∂V)_T term from earlier! Specifically, we can go through the analogous derivation to find µJT = −(1/CP)(V − T (∂V/∂T)_P).
For an ideal gas, PV = nRT ⇒ (∂V/∂T)_P = V/T, so µJT = 0: we can't see any effect on temperature. But for a van der Waals gas, we have constants a and b so that (V − nb)(P + an²/V²) = nRT; if we expand out the left side, since a and b are generally very small, the doubly small term nb · an²/V² is negligible, and thus PV − Pnb + an²/V = nRT. Taking differentials, P dV − (an²/V²) dV = nR dT, and substituting in for P and doing the relevant calculations, µJT ≈ (n/CP)(2a/(RT) − b).
So the temperature where µJT changes sign is the inversion temperature T_inversion = 2a/(Rb).
If our temperature is larger than the inversion temperature, µJT is negative, so throttling will increase our temperature. Otherwise, if T is smaller, then the temperature will decrease!
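Plugging in rough textbook van der Waals constants for nitrogen (approximate values, our assumption, not from the notes):

```python
# Inversion temperature T_inv = 2a/(R*b) from the van der Waals estimate,
# using approximate textbook vdW constants for N2:
# a ~ 0.137 Pa m^6/mol^2, b ~ 3.87e-5 m^3/mol.
R = 8.314
a, b = 0.137, 3.87e-5
T_inv = 2 * a / (R * b)
print(T_inv)   # roughly 850 K: below this, throttling cools nitrogen
```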
34 April 17, 2019 (Recitation) There is a quiz tomorrow!
34.1 Types of free energy Last time, we mentioned the Helmholtz free energy and Gibbs free energy, F = U − TS and G = U + PV − TS, where both of these are a kind of balance between energy and entropy.
We can think of this by looking at the Boltzmann distribution as T →0 and T →∞.
Question 192. What does free energy mean?
Let’s say we have our system S connected to a reservoir R fixed at temperature T.
If we want to see if a certain reaction can happen, we must have the total ∆Stotal > 0 to have a favorable reaction (by the second law).
Information-theoretically, we can go from more precise knowledge to less precise knowledge.
What is the change in entropy here? The change in internal energy of the reservoir is ∆U = ∆Q (since no work is being done), which is −T∆Ssys in a reversible process. This means that the change in entropy ∆Stotal = ∆Ssys + ∆Sres = ∆Ssys −∆Usys T = −1 T (∆U −T∆Ssys) ≥0.
Defining F = U −TS, any favorable reaction has to go to lower free energy!
111 Fact 193 Think of a reservoir as a swimming pool and our system as a piece of cold chalk being thrown into it. The change in temperature of the swimming pool can be assumed to be almost zero, and if not in a specific case, we can always scale the system. This is the limit we take in a canonical ensemble!
On the other hand, when we have a system at constant pressure, we often use the Gibbs free energy G = F + PV , much like how we use enthalpy for some calculations instead of internal energy.
But why is it called free energy? We can think of this as “energy that can be converted.” Looking again at a system with T and P constant (so we want to look at G), let’s say that we are part of the system, and we want to transform some internal energy U into something useful. For example, can we turn the energy gained from 2H2 + O2 →2H2O into something useful by the laws of thermodynamics? Is the energy actually available?
Well, let’s say 1 mole of a large molecule dissociates into 5 moles of smaller constituents: then there are more molecules, so to conserve the pressure P, we must increase the volume V , which does work ∆U = −P∆V.
This works in the water reaction as well! Since we now have less molecules, we have to reduce our volume, which does work on the system. This is where the PV term comes from in our equation, and we can finally account for the +T∆S from the change in entropy in our system (since that needs to be transferred to our reservoir).
So we change the internal energy by some ∆U, and this now gives us an associated work P∆V and heat transfer −T∆S. This is indeed the G = U + PV −TS that we desire!
Fact 194 In summary, we make "boundary condition" corrections for the variables we are holding constant: in this case P and T.
34.2 Ice packs Some of these work without needing a frozen material: break something in a plastic pack, and it gets cold. How does this happen?
It turns out that it requires energy ∆U > 0 to dissolve ammonium nitrate in water. So if this is in contact with something warmer (like an arm), it will grab energy and cool down the environment. Now think of the system as the plastic pack and our body as the environment! This is allowed because it creates some entropy ∆S > 0, so ∆F = ∆U − T∆S < 0 does hold because the temperature T is sufficiently large (the entropy gain overcompensates for the increase in internal energy).
But what happens if the ice pack were not in contact with anything? Then the water and ammonium nitrate would be completely isolated, and now this cannot dissolve!
112 34.3 Heat engines We have a theoretical bound on the efficiency of a heat engine ηC = T + −T − T + .
Let’s say someone managed to create some ˜ η > ηC: what would this look like?
We know that a reservoir T + provides heat Q+, and the sink T −gains heat Q+ −W. Well, let’s imagine doing this in reverse: pull heat Q+ −W from the T −sink and output heat Q+ to the T + reservoir. (For example, this is a refrigerator!) We can scale this in such a way that the Q+ here is the same as the Q+ in the theoretical other engine! Then putting the two engines together, the work we’ve created is ˜ W −W, and the heat we’ve extracted is −( ˜ W −W): we’ve removed the upper reservoir from the problem when we run the Carnot engine in reverse.
But then we’d create work by extracting energy from a single temperature reservoir: this doesn’t make sense thermodynamically! We can’t get net work out while only drawing heat from a single reservoir, or at least it has never been observed.
Thus, the Carnot efficiency is actually the best possible bound on our heat engine! What’s more, any reversible engine has to work at exactly the Carnot efficiency, or else we could run it in reverse and combine it with the Carnot cycle forward, and we’d get the exact same contradiction.
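To make this composite-engine bookkeeping concrete, here is a short numeric sketch (the temperatures, the heat Q⁺, and the hypothetical efficiency η̃ = 0.8 are all made-up illustrative values):

```python
# Illustrative arithmetic for the composite-engine argument (made-up numbers).
# A hypothetical engine with efficiency eta_tilde > eta_C draws heat Qp from
# the hot reservoir and produces work W_tilde; a Carnot engine run in reverse
# (a refrigerator) consumes work W to push the same Qp back into the hot
# reservoir, so the hot reservoir is untouched overall.

T_hot, T_cold = 400.0, 200.0               # reservoir temperatures (K)
eta_C = (T_hot - T_cold) / T_hot           # Carnot efficiency = 0.5
eta_tilde = 0.8                            # hypothetical "better" engine

Qp = 100.0                                 # heat drawn from the hot reservoir
W_tilde = eta_tilde * Qp                   # work out of the hypothetical engine
W = eta_C * Qp                             # work to return Qp via reversed Carnot

net_work = W_tilde - W                     # net work produced by the pair
net_heat_cold = (Qp - W) - (Qp - W_tilde)  # net heat pulled from the cold sink

print(net_work, net_heat_cold)  # 30.0 30.0 -- work from a single reservoir!
```

The first law checks out (net work equals net heat extracted), which is exactly the forbidden single-reservoir engine.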
35 April 22, 2019 (Recitation)
35.1 Engines
Since we’ve been talking about engines, Professor Ketterle wanted to show us a Stirling engine!
Most engines we have in real life have some combustible material, so we have an open system with fuel and some exhaust material (and this turns out to be more efficient in general). In principle, though, we should be able to operate an engine by just having one heat source (like in our Carnot cycle).
The example at the front of class is just an alcohol burner, which acts as a hot reservoir, together with a cold reservoir at room temperature. Then we can run a motor in reverse, which is an electrical generator!
How does a heat engine work, generally? In principle, we can just say that we have a gas that expands and contracts with a piston. But in this case, when we heat up the gas in the hot reservoir, a piston system actually moves the gas to the cold reservoir, where it cools down! We have pistons 90 degrees out of phase, and that lets our motor run.
Recall that last time, we showed that if we ever had an engine with efficiency η > ηrev, we could couple it together with a Carnot engine (running one in reverse) to create work for free by extracting heat from a single reservoir. This would be very convenient, but it violates the second law of thermodynamics.
Fact 195
This also told us that all reversible engines between reservoirs at T1 and T2 must have the same efficiency η. This is the maximum efficiency! The Carnot engine is just special because it’s an easily described example of an engine with that efficiency η = (T2 − T1)/T2.
35.2 Free energy
Recall that F = U − TS, the free energy, tells us something about whether the total entropy is going up or down: ∆F < 0 ⟹ ∆S_total = ∆S_sys + ∆S_res > 0 at constant temperature. Basically, defining new “energies” like F, G, H includes information about the reservoir as well: for example, G describes our environment on earth (at constant pressure and constant temperature), which is why we use G = U − TS + PV.
Basically, TS and PV tell us that we need to do some work or transfer some heat to satisfy the environment conditions!
So F and G tell us what is “left,” and that’s why their sign tells us about whether processes can happen spontaneously.
Fact 196
We can think of “spontaneous” processes as those that could give us work, and that’s how to reconcile thoughts like “spontaneous combustion.”
35.3 Looking at free expansion of a gas again
Let’s say we have a gas that expands from a volume V into a volume 2V, while keeping the internal energy U and temperature T the same (for example, an ideal gas). Then F = U − TS goes down, and free energy going down means that we could have extracted work out of this! That means we “missed our opportunity”: we’ve increased our entropy without getting the work out of it.
But we can also just keep that whole system at constant temperature T: if we’re being formal about it, with a quasi-static process, we should actually have a piston that is isothermally expanding at temperature T. We’ll then find that the work done does satisfy |∆W| = |∆F|.
If the internal energy and temperature stay the same, where is that work coming from? Well, the gas is losing some kinetic energy when the piston is moving back! It is the heat added by the environment at constant T that keeps the internal energy constant, and that’s basically coming from the TS term in the free energy.
So in general we extract work by reducing our internal energy, but the PV and TS terms are “taxed” by (or credited from) our reservoir and boundary conditions. In our case here, we actually get a bonus from the environment to get work!
Proposition 197
Use F when we have constant temperature, and use G when we additionally require that P is constant.
35.4 Partial derivatives
We have a bunch of thermodynamic variables: P, V, S, T, H, U, F, G.
This is a system with only PV work. Well, that already gives us eight variables: we can now play in eight-dimensional space and write expressions of the form (∂a/∂b)_c.
But how many of these eight variables are independent? For example, in 3-dimensional space, we can use r, θ, φ versus x, y, z and do lots of coordinate transformations, but no matter what, we have three independent variables. How many do we have in this situation?
Well, let’s think about an ideal gas, where N is constant. Defining volume and temperature gives us pressure (by the ideal gas law), entropy (by the Sackur-Tetrode equation), and then we can get all types of energy. This means that we always have two independent variables! Often, we students will write things like (∂S/∂P)_{V,T}, which doesn’t actually make any sense, since fixing V and T tells us exactly what our other variables are! It’s important to not get lost in the jungle of variables: we can only hold one variable constant while differentiating with respect to another (unless one of them is N).
So Maxwell’s relations come about because P, V, S, T can be written as first derivatives of the thermodynamic potentials H, U, F, G. Specifically, for a potential X (one of U, H, F, G), we can use ∂²X/∂a∂b = ∂²X/∂b∂a.
For example, in problem 4 of last week’s exam, we were told to compare ∂V/∂T and ∂S/∂P.
The relevant differentials here are dU = T dS − P dV and dG = −S dT + V dP.
This means U is written most naturally as a function of V and S, but G is written as a function of T and P. We can always convert between our variables, but this is the way we generally want to do it!
We are supposed to say what we keep constant in the two partial derivatives above, but we should ask the question: should we use the Maxwell’s relation for U or for G?
We do have V and S in the numerators, and we have T and P in the denominators, so either would work at this point. But note that we were told to actually deal with (∂V/∂T)_S and (∂S/∂P)_V.
So now looking at the V and S, it’s most natural to work with U! But G gives the answer as well, and this is what Professor Ketterle did initially! Let’s work out some of the steps: dG = −S dT + V dP ⟹ −S = (∂G/∂T)_P, V = (∂G/∂P)_T.
So now equating the mixed partials ∂²G/∂T∂P, we get −(∂S/∂P)_T = (∂V/∂T)_P.
This isn’t actually what we want, though: we have the wrong variables held constant! There is a way to get around this, though: just like we can transform CP and CV into each other, let’s look at how to relate (∂S/∂P)_T to (∂S/∂P)_V. We’re going to have to label our variables carefully: write S as a function of P and V (since this is what we want in the end), where V is in turn a function of P and T. This means we’re representing our entropy as S(P, T) = S(P, V(P, T)), so by the chain rule
(∂S/∂P)_T = (∂S/∂P)_V + (∂S/∂V)_P (∂V/∂P)_T.
So it’s possible to work from there, but it is very messy! Alternatively, we could go back and do the smarter thing: looking at U as the thermodynamic potential instead, we end up with dU = T dS − P dV ⟹ (∂T/∂V)_S = −(∂P/∂S)_V.
This is exactly the reciprocal of what we wanted: taking reciprocals of both sides, we end up with (∂V/∂T)_S = −(∂S/∂P)_V.
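As a numerical sanity check on this relation, here is a small finite-difference computation for a monatomic ideal gas (a sketch in units where N kB = 1, so S(T, V) = (3/2) ln T + ln V up to a constant and P = T/V):

```python
# Finite-difference check of the Maxwell-type relation (dV/dT)_S = -(dS/dP)_V
# for a monatomic ideal gas, in units where N*k_B = 1:
#   S(T, V) = (3/2) ln T + ln V + const,   P = T / V.

import math

def S(T, V):
    return 1.5 * math.log(T) + math.log(V)   # entropy up to an additive constant

T0, V0 = 300.0, 2.0
P0 = T0 / V0
h = 1e-6

# (dV/dT)_S: move along the adiabat S = const, i.e. V(T) = V0 * (T0/T)**1.5
dV_dT_S = (V0 * (T0 / (T0 + h))**1.5 - V0 * (T0 / (T0 - h))**1.5) / (2 * h)

# (dS/dP)_V: at fixed V = V0 we have T = P*V0, so S depends on P through T
dS_dP_V = (S((P0 + h) * V0, V0) - S((P0 - h) * V0, V0)) / (2 * h)

print(dV_dT_S, -dS_dP_V)   # both are approximately -0.01
```

Both derivatives come out to −(3/2)V/T, matching the relation derived above.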
36 April 23, 2019
All exam grades will be posted by the end of today. We can always email the professor to schedule a meeting if we want to discuss! Also, the drop date is very soon.
There is no pset due this week, because we just started some new material.
Instead, we can focus on other homework assignments we might have!
36.1 Overview
It’s time to start a new section in our class - we’re going to talk about a new kind of ensemble. We found that a system that is well-defined in a certain way (mechanically and adiabatically isolated) can be described by a microcanonical ensemble, which gives the probability distribution of the individual microstates. The key idea there was that the energy was held constant, and in that case, all microstates are equally probable. The temperature T is then a natural consequence!
We often deal with systems that aren’t adiabatically isolated, though. Then the microstates may have different probabilities: that’s what we’ll discuss today, and we’ll use the canonical ensemble as a tool! We’ll find that there really isn’t much of a difference between the properties of a microcanonical and canonical ensemble in some aspects, but we don’t have to compute the actual number of microstates anymore.
36.2 The canonical ensemble
Let’s start by making a distinction:
Fact 198
In a microcanonical ensemble, we specify the internal energy U, and at thermal equilibrium we can deduce the temperature T. On the other hand, in a canonical ensemble, we specify the temperature T and use this to deduce the internal energy U.
Then our macrostates M are specified by our temperature T, as well as (potentially) other variables.
In this scenario, we’re allowing heat to be exchanged, but no external work can be done on or by the system.
Proposition 199
We can have our system maintained at a fixed temperature T if it is in contact with a reservoir or heat bath at temperature T.
We make the assumption that the heat bath is sufficiently large that it has basically negligible change in temperature!
For example, a glass of boiling water in a large room will cool down to room temperature, while the room’s temperature is basically fixed.
Question 200. How do we find the probability of any given microstate PT (µ)?
Let’s say our reservoir has some microstates µR and energies associated to them HR(µR). Similarly, our system has some microstates µS and energies HS(µS). We can think of the reservoir and system as one larger system R ⊕ S: this is now mechanically and adiabatically isolated, so we can treat it as a microcanonical ensemble!
Our total energy here is Etotal, and we know that (because our system is much smaller than the reservoir) Esys ≪ Etotal. Then the probability of some microstate µS ⊕ µR is P(µS ⊕ µR) = 1/Γ_{S⊕R}(Etotal) if the total energy HS(µS) + HR(µR) = Etotal, and 0 otherwise. This is a joint probability distribution, so if we want a specific µS for our system, we “integrate” or “sum out” all values of R: P(µS) = Σ_{µR} P(µS ⊕ µR).
Also recall that we can write the conditional probability P(S|R) = P(S, R)/P(R), so (because each microstate probability is 1/Γ, the reciprocal of the multiplicity) this expression can also be written as P(µS) = ΓR(Etotal − HS(µS)) / Γ_{S⊕R}(Etotal), since we sum over the multiplicity of R given a specific energy. Notice that the denominator here is independent of µS: it’s a constant, so we can rewrite this in terms of the entropy of our reservoir: given that S = kB ln Γ, P(µS) ∝ exp[SR(Etotal − HS(µS))/kB].
In the limit where the energy of the system is significantly smaller than the total energy of the system and reservoir, we can do a Taylor expansion of this entropy to simplify further! Treating HS(µS) as our variable (since ER = Etotal − HS(µS)),
SR(Etotal − HS(µS)) ≈ SR(Etotal) − HS(µS) ∂SR/∂ER.
Dropping the S subscripts for simplicity and plugging into the probability distribution, since ∂SR/∂ER = 1/T and β = 1/(kBT), and exp[SR(Etotal)/kB] is some constant that doesn’t depend on our microstate (to first order), we have the following fact:
Theorem 201 (Canonical ensemble)
For a fixed temperature, the probability of our microstate is P(µ) = e^{−βH(µ)} / Z.
This is a probability distribution, which means Z is our normalization factor! Z is taken from the German word “Zustandssumme,” which means “sum over states.” That explains why it’s our normalization factor here: if we add up P(µ) for all states, we’ll get Z/Z = 1.
Definition 202
Z here is called a partition function. In this case, Z = Σ_µ e^{−βH(µ)}.
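As a minimal illustration of this definition (a toy system of three two-level sites with kB = 1; the specific energies are arbitrary), we can enumerate all microstates, build Z, and check that the probabilities normalize and that ⟨H⟩ = −∂(ln Z)/∂β:

```python
# Toy canonical ensemble: three independent two-level sites, each contributing
# energy 0 or eps, in units k_B = 1. Enumerate all 8 microstates explicitly.

import math
from itertools import product

eps, T = 1.0, 2.0
beta = 1.0 / T

microstates = list(product([0.0, eps], repeat=3))
energies = [sum(mu) for mu in microstates]

Z = sum(math.exp(-beta * E) for E in energies)      # partition function
probs = [math.exp(-beta * E) / Z for E in energies]

U_direct = sum(p * E for p, E in zip(probs, energies))  # <H> directly

def lnZ(b):
    return math.log(sum(math.exp(-b * E) for E in energies))

h = 1e-6
U_deriv = -(lnZ(beta + h) - lnZ(beta - h)) / (2 * h)    # <H> = -d(ln Z)/d(beta)

print(abs(sum(probs) - 1.0) < 1e-12, abs(U_direct - U_deriv) < 1e-6)  # True True
```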
36.3 Looking more at the partition function
Why are partition functions useful? Basically, many macroscopic quantities can be described in terms of our function Z. Remember that the multiplicity Γ was important in our microcanonical ensemble for finding S: now Z takes its place, and it’s a lot easier to write this down!
Fact 203
The exponential term e^{−βH(µ)} here is called a Boltzmann factor.
So now notice that the energy is no longer fixed: it’s some random variable H, and we want to find its mean, variance, and other information. (Spoiler: the fluctuations are small, so in the thermodynamic limit, this will look a lot like the microcanonical ensemble!) First of all, we can write the probability of a given energy H as a sum P(H) = Σ_µ P(µ) δ(H(µ) − H), since we want to add up the states with our given energy. Since all probabilities in this sum are equal, this evaluates to (e^{−βH}/Z) Σ_µ δ(H(µ) − H). The sum is the number of microstates with the given energy H, which is just our multiplicity Γ(H). Thus, this can be rewritten as P(H) = Γ(H) e^{−βH} / Z.
We’ll talk more about this later, but we can write Γ in terms of our entropy: this will yield an expression of the form (1/Z) exp[S(H)/kB − H/(kBT)], and since the exponent is −(H − TS)/(kBT), this is related to our Helmholtz free energy!
36.4 Evaluating the statistics
Note that as H increases, Γ rapidly increases, while e^{−βH} rapidly decreases. Therefore, if we plot this, we’ll find that the distribution is sharply peaked around some energy U. What’s the mean and variance of this distribution?
Well, the average here is ⟨H⟩ = Σ_µ H(µ) e^{−βH(µ)} / Z, which can be written as a derivative: −(1/Z) ∂/∂β Σ_µ e^{−βH(µ)}.
But the sum here is just Z, so ⟨H⟩ = −(1/Z) ∂Z/∂β = −∂(ln Z)/∂β.
How sharp is this distribution - that is, how narrow is it? We can compute the variance var(H) by first considering −∂Z/∂β = Σ_µ H e^{−βH}.
Taking another derivative with respect to β, ∂²Z/∂β² = Σ_µ H² e^{−βH}, but this looks a lot like the average value of H²! We then find ⟨H²⟩ = (1/Z) ∂²Z/∂β².
Therefore, var(H) = ⟨H²⟩ − ⟨H⟩² = (1/Z) ∂²Z/∂β² − ((1/Z) ∂Z/∂β)².
By the product rule, this is exactly ∂/∂β [(1/Z) ∂Z/∂β]. If we look back, this tells us that var(H) = −∂⟨H⟩/∂β, where ⟨H⟩ = U is our average energy. Since β = 1/(kBT), this means we can rewrite this as var(H) = kBT² ∂U/∂T.
Since CV, our heat capacity, is defined to be (∂U/∂T)_{V,N}, if we define ĉV = CV/N (the heat capacity per particle), var(H) = N kB T² ĉV.
If we think of fractional fluctuations, we want to look at the ratio of our standard deviation to the mean. The mean is proportional to N, the number of particles, since it is an extensive quantity, but the standard deviation of H is proportional to √N. This means √var(H) / ⟨H⟩ ∝ 1/√N, which is very small on the thermodynamic scale, and thus our variable is highly concentrated! This means that we can basically think of the system almost as being a microcanonical ensemble with fixed energy ⟨H⟩: it was just easier to get to this point, since all we need to do is compute our partition function Z.
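The 1/√N scaling is easy to see numerically; here is a sketch for N independent two-level systems with level energies 0 and ε (kB = 1), where Z = Z1^N gives ⟨H⟩ = N⟨H⟩1 and var(H) = N var(H)1:

```python
# Fractional energy fluctuations shrink like 1/sqrt(N) for N independent
# two-level systems (energies 0 and eps), k_B = 1.

import math

eps, beta = 1.0, 1.0
p = math.exp(-beta * eps) / (1 + math.exp(-beta * eps))  # P(site has energy eps)
mean1 = eps * p                 # per-site mean energy
var1 = eps**2 * p * (1 - p)     # per-site energy variance

for N in (10, 1000, 100000):
    frac = math.sqrt(N * var1) / (N * mean1)   # sigma / mean
    print(N, frac, frac * math.sqrt(N))        # last column is N-independent
```

The last column is constant, confirming that the fractional fluctuation is exactly (√var1/mean1) · 1/√N.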
36.5 Writing macroscopic quantities in terms of the partition function
We found earlier that ⟨H⟩ ≡ U = −∂(ln Z)/∂β.
Since β = 1/(kBT), ∂/∂β = (∂T/∂β) ∂/∂T = −kBT² ∂/∂T, so U = kBT² ∂(ln Z)/∂T.
Let’s next look at the entropy: by definition, S = −kB Σ_j pj ln pj.
Plugging in our probability distribution, S = −kB Σ_j (e^{−βEj}/Z)(−βEj − ln Z).
Rewriting this in terms of the internal energy U, S = kBβU + kB ln Z = U/T + kB ln Z.
From this, we can see that −kBT ln Z = U − TS = F, which is the Helmholtz free energy! Recall that by the differential dF = −S dT − P dV, we have (∂F/∂T)_V = −S and (∂F/∂V)_T = −P, so we can then get our other quantities by taking partial derivatives of F. This can always yield our equation of state!
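These relations are easy to verify numerically; here is a sketch for a two-level system with energies 0 and ε (kB = 1), checking that S = −(∂F/∂T) agrees with S = (U − F)/T:

```python
# Consistency of F = -k_B T ln Z for a two-level system (energies 0, eps), k_B = 1.

import math

eps = 1.0

def Z(T):
    return 1.0 + math.exp(-eps / T)

def F(T):
    return -T * math.log(Z(T))       # Helmholtz free energy

T0, h = 1.5, 1e-6
S_from_F = -(F(T0 + h) - F(T0 - h)) / (2 * h)   # S = -(dF/dT)

p = math.exp(-eps / T0) / Z(T0)      # probability of the excited level
U = eps * p
S_direct = (U - F(T0)) / T0          # from F = U - T*S

print(abs(S_from_F - S_direct) < 1e-6)  # True
```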
37 April 24, 2019 (Recitation)
Let’s start by discussing a topic from the last problem set.
37.1 Mixing entropy
Consider two gases (red and white) that are mixed in a box with volume V. How can we compare this situation to one where the two are in separated boxes (each with volume V)?
Specifically, we have three scenarios: in A, red and white are in the same box of volume V; in B, red and white are in two different boxes, each with volume V; and in C, red and white are in two different boxes, each with volume V/2.
What can we say about the entropy? First of all, any microstate in scenario C can also occur in scenario A, so the entropy SA > SC.
But let’s go ahead and do the mixing derivation again! Recall the Sackur-Tetrode equation S = NkB [· · · + ln(V/N)].
(Everything else - masses, temperatures, and so on - is constant.) Usually, A and C are compared in textbooks: there is a partition in a box that is then removed. If this partition breaks our volume V into two parts with N1 = c1N and N2 = c2N of the total (for example, 80 percent nitrogen on one side and 20 percent on the other), we can apply the Sackur-Tetrode equation; V and N are proportional if our system is at equal pressure on both sides:
SC = kB c1N ln(c1V/(c1N)) + kB c2N ln(c2V/(c2N)) = NkB ln(V/N)
(before we mix). After we mix, though, each component has the full volume instead of only a fraction, so we sum
SA = kB c1N ln(V/(c1N)) + kB c2N ln(V/(c2N)).
Note that we can write this as
SA = kB c1N [ln(V/N) − ln c1] + kB c2N [ln(V/N) − ln c2] = SC − kBN(c1 ln c1 + c2 ln c2),
and this last term is called the mixing entropy.
Fact 204
This generalizes to more than two components as well.
Here’s a second derivation that is even simpler! We can ignore the color of the gas particles at first and go through the derivation of the Sackur-Tetrode equation in both cases. But then we have N = N1 + N2 particles, and we need to label N1 of them red and N2 of them white. In situation C, we must label all of the ones on the left red and all of the ones on the right white, but we’re free to do this arbitrarily in situation A! So we get an extra multiplicity of (N choose N1), and that will contribute an extra
∆S = kB ln Γ = kB ln(N!/(N1!N2!)) = −kBN [(N1/N) ln(N1/N) + (N2/N) ln(N2/N)],
as expected.
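The second expression is the Stirling approximation of the exact counting term kB ln(N!/(N1!N2!)); a quick numerical comparison (kB = 1, with concentration c1 = 0.3 chosen arbitrarily):

```python
# Compare the exact label-counting entropy ln(N!/(N1!N2!)) with its Stirling
# limit -N*(c1 ln c1 + c2 ln c2), in units k_B = 1.

import math

c1 = 0.3
for N in (10, 100, 10000):
    N1 = int(round(c1 * N))
    N2 = N - N1
    exact = math.lgamma(N + 1) - math.lgamma(N1 + 1) - math.lgamma(N2 + 1)
    stirling = -N * ((N1 / N) * math.log(N1 / N) + (N2 / N) * math.log(N2 / N))
    print(N, exact / stirling)   # ratio approaches 1 as N grows
```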
But notice that situations A and B are now essentially the same! When doing the Sackur-Tetrode equation, we made the explicit assumption that we could treat the red and white gases separately. So that actually tells us that SA = SB.
Proposition 205
So is there a way for us to separate the two gases from situation A into situation B without doing any work?
We can just construct a (one-directional) special membrane that only allows one of them to pass through! For example, if one of them is large and one is small, we could have small pores - we don’t violate any real laws of physics this way.
Fact 206
Specifically, think of a red and a white membrane that allow only red and white particles to pass through, respectively.
Then we can enclose our red and white particles in boxes and “translate” the red box until it is side-by-side with the white box!
So to get from situation A to situation B, we can place two boxes next to each other (each with volume V ), separated by a membrane. No work is done this way - the translation doesn’t do any work or allow for any transfer of heat!
To get from situation B to situation C, then, we can compress the containers of situation B: if we do this isothermally, the work that we do creates an equivalent amount of heat (since ∆U = ∆Q + ∆W = 0), and the heat ∆Q is actually just T∆S, the change in entropy of mixing.
37.2 Looking at the canonical ensemble
Recall the setup: we have a system connected to a reservoir at constant temperature T. What’s the probability of our system being at an energy E?
Basically, we think of the reservoir plus the system as a microcanonical ensemble: then the probability is proportional to the number of reservoir microstates compatible with the system being at energy E, which means the reservoir is at energy U − E (where the total energy of reservoir plus system is U). Writing this in terms of entropy, p(E) = c e^{SR(U−E)/kB} ∝ ΓR(U − E).
But now assuming E ≪ U, we can do the Taylor expansion p(E) ≈ c e^{SR(U)/kB} e^{−(∂SR/∂U)E/kB}, and the first two factors can be encapsulated as one constant: remembering that ∂S/∂U = 1/T, this is e^{−E/(kBT)}/Z for some constant Z. As a normalization factor, we now have our “partition function” Z = Σ_j e^{−Ej/(kBT)}.
Fact 207
Probably the most important idea in this derivation is thinking of the reservoir and the system as one big system, since it allows us to think about energy!
As a question: Taylor expansions are first order, so what’s the range of validity for the assumptions that we made?
Specifically, remember that the second derivative, which is related to the “curvature,” tells us how much the first derivative (which is related to temperature) is changing when we vary our parameter. So we neglect the change in temperature as we change our energy E!
So this error term only matters when the reservoir is too small: as long as the system is small compared to the reservoir (the idealized “canonical ensemble” limit), our derivation gives an exact result.
Recall that we did some related derivations earlier on: on an earlier problem set, we had na particles in states of energy εa, with the constraints Σ na = N and Σ naεa = U. We found then that na = N e^{−βεa} / Σ_j e^{−βεj}; this was found by looking at the combinatorics of distributing particles over energy states and incorporating Lagrange parameters for the constraints on N and U. Notice that we can now translate this: we have N systems, each at some energy level, and now na/N tells us the probability that a random system has energy εa!
How do we make this into the language of a “canonical ensemble?” Any given system is coupled to the combined reservoir of N −1 other systems: as long as 1 ≪N, and we are at thermal equilibrium, the temperature of the reservoir will not change very much, and we can use our canonical ensemble formula.
38 April 25, 2019
We’ll discuss the canonical ensemble some more today! We’ll also start looking at a few more examples - the main idea is that the canonical ensemble will result in significantly simpler calculations, since microcanonical and canonical ensembles predict the same results in the thermodynamic limit.
38.1 Review and remarks on the canonical ensemble
Here’s a helpful table: “macrostate” tells us which macroscopic quantities are fixed or given to us as constraints.
Ensemble | Macrostate | p(µ) | Normalization factor
Microcanonical | (U, x) | δ(H(µ) − U)/Γ | S(U, x) = kB ln Γ
Canonical | (T, x) | exp(−βH(µ))/Z | F(T, x) = −kBT ln Z
In other words, the free energy in the canonical ensemble and the entropy in the microcanonical ensemble play similar roles.
Note that the logarithm of the partition function also gives other quantities. For example, CV = NĉV = σ²/(kBT²), where σ² is the variance of the energy of our system at thermal equilibrium. This shows up in more sophisticated discussions of statistical physics! Since our heat capacity is a response function CV = (∂Q/∂T)_V, it describes how much heat is transferred given some perturbation of our temperature. But instead of having to do that perturbation, this equation tells us that observing fluctuations at thermal equilibrium works as well!
Fact 208
This is a special case of something called the fluctuation-dissipation theorem.
As another comment: we’ve mostly been summing over states to calculate the partition function, but we’re often given the energies instead of the states. For example, if we’re given that our energies are {Ei | 1 ≤ i ≤ M}, we can write our partition function as Z(T, N, V) = Σ_i gi e^{−βEi}, where gi is the degeneracy of the energy level (telling us how many microstates are at that given energy Ei).
In the continuous limit, we can approximate this instead as Z(T, N, V) = ∫ dE (dN/dE) e^{−βE}, where dN/dE is the density of states we discussed earlier in class.
One more comment: since F = −kBT ln Z, we know that Z = exp(−βF). This looks a lot like the Boltzmann factor exp(−βEi) for a particular microstate! Thus we can think of exp(−βEi) as being a weight in phase space for a particular state: Z is then the weight contributed by all states for a given N and V. (For a canonical ensemble, since F = U − TS is the “free energy,” it can be thought of as the amount of energy available for the system to do useful work.)
38.2 Looking at the two-state system again
Example 209
Let’s go back and say that we have a system where a particle can be in two states: E↑ = ε/2 and E↓ = −ε/2.
The partition function here is then Z = e^{βε/2} + e^{−βε/2} = 2 cosh(βε/2), and now the internal energy of our system U = −(d/dβ) ln Z = −(1/Z) ∂Z/∂β = −(ε/2) tanh(βε/2) goes from −ε/2 to 0 as our temperature T gets larger (which looks a lot like our microcanonical ensemble result!). Then the probabilities of our two states are P↑ = e^{−ε/(2kBT)} / (2 cosh(ε/(2kBT))) = 1/(1 + e^{ε/(kBT)}) and P↓ = 1 − P↑.
Indeed, P↑ goes from 0 to 1/2, and P↓ goes from 1 to 1/2, as our temperature T gets larger.
Next, let’s calculate the entropy: from our free energy F, we find that S = U/T + kB ln Z = −(ε/(2T)) tanh(ε/(2kBT)) + kB ln(2 cosh(ε/(2kBT))).
Looking at the limits, as T → 0 this goes to −ε/(2T) + kB ln(exp(ε/(2kBT))) = −ε/(2T) + ε/(2T) → 0. This is consistent with what we already know: there’s only one possible ground state! On the other hand, as T → ∞, this goes to kB ln 2, which is consistent with the microcanonical ensemble and the intuition for our system.
We can also calculate our heat capacity: C = ∂U/∂T = −(ε/2) ∂/∂T tanh(ε/(2kBT)) = kB (ε/(2kBT))² / cosh²(ε/(2kBT)).
Plotting this as a function of thermal energy kBT, the distribution is (just like in the microcanonical ensemble) unimodal.
Think of this in terms of fluctuations! We can calculate the standard deviation of our energy at a given temperature: σ(U) = kBT √(CV/kB) = (ε/2) / cosh(ε/(2kBT)), and the behavior matches the microcanonical ensemble both as T → 0 and as T → ∞.
38.3 Systems with a large number of particles
Let’s now think about how to compute the partition function in larger systems! Specifically, how can we do this for distinguishable versus indistinguishable particles - more specifically, how many microstates are there with a given number distribution (n1, · · · , nM)?
Assume our particles are distinguishable. If we have N particles, there are (N choose n1) ways to pick n1 of them to go in the first energy level, then (N − n1 choose n2) ways for the next energy level, and so on: this gives the multinomial coefficient N!/(n1! n2! · · · nM!).
This is the degeneracy factor for a given energy that we want! Consider N distinct harmonic oscillators, where there are M distinct energy levels ε1, · · · , εM, and nk denotes the number of oscillators excited at energy level εk. Now we can write our partition function
Z = Σ_{n1+···+nM=N} [N! / (n1! · · · nM!)] exp[−β Σ_{k=1}^M εk nk],
where the first factor is the degeneracy factor, and Σ_{k=1}^M εk nk is the total energy of our state. Let’s see if we can simplify this: first of all, we can turn the sum inside the exponential into a product:
Z = Σ_{n1+···+nM=N} [N! / (n1! · · · nM!)] Π_{k=1}^M exp[−βεk]^{nk}.
Now remember the binomial theorem, (a1 + a2)^N = Σ_{n1+n2=N} [N!/(n1! n2!)] a1^{n1} a2^{n2}, which generalizes to the multinomial theorem
(a1 + · · · + aM)^N = Σ_{n1+···+nM=N} [N!/(n1! · · · nM!)] a1^{n1} · · · aM^{nM}.
This is exactly what we have here! By the multinomial theorem, we can rewrite our Z as
Z = (exp[−βε1] + exp[−βε2] + · · · + exp[−βεM])^N,
and since the sum inside the parentheses is the partition function Z1 for an individual harmonic oscillator, we can write Z = Z1^N.
Proposition 210
This means that the partition function for identical distinguishable systems is multiplicative!
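We can brute-force this identity for small toy numbers (arbitrary energy levels, kB = 1): summing the multinomial form of Z over all occupation numbers (n1, ..., nM) with Σ nk = N reproduces Z1^N exactly:

```python
# Brute-force check that Z = Z_1^N for N identical distinguishable subsystems.

import math
from itertools import product

energies = [0.0, 1.0, 2.5]      # M = 3 toy energy levels
beta, N = 0.7, 4

Z1 = sum(math.exp(-beta * e) for e in energies)

Z_multinomial = 0.0
for ns in product(range(N + 1), repeat=len(energies)):
    if sum(ns) != N:
        continue                          # keep only n1 + ... + nM = N
    degeneracy = math.factorial(N)
    for n in ns:
        degeneracy //= math.factorial(n)  # multinomial coefficient N!/(n1!...nM!)
    E_total = sum(n * e for n, e in zip(ns, energies))
    Z_multinomial += degeneracy * math.exp(-beta * E_total)

print(abs(Z_multinomial - Z1**N) < 1e-9)  # True
```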
It’s important that the individual oscillators here were distinguishable in order to get the multinomial coefficient. But if we want to do the same exercise with indistinguishable particles, the degeneracy factor becomes just 1.
Then calculations in general are a lot uglier, but we can look at a special case: let’s say we have high temperatures, which is basically looking at our harmonic oscillators in the classical limit! Then our states are basically uniquely defined by just the list of energy levels (and we rarely need to deal with overcounting issues), so we just end up with Z → (1/N!) Z1^N.
Talk to Professor Ketterle when we’re not looking at the classical limit, though!
Example 211
Let’s explicitly compute the partition function for N distinct harmonic oscillators and work with it!
Then we have (using facts from quantum physics) Z1 = Σ_{n=0}^∞ exp[−βℏω(n + 1/2)] = e^{−βℏω/2} Σ_{n=0}^∞ e^{−βℏωn} = e^{−βℏω/2} / (1 − e^{−βℏω}) by the geometric series formula, and Z is just Z1^N.
Now that we have our partition function, we can compute our other macroscopic variables: U = −∂(ln Z)/∂β = −N ∂(ln Z1)/∂β, which evaluates out to U = Nℏω [1/(e^{βℏω} − 1) + 1/2].
As we take our temperature T → 0, we only occupy the ground energy level, so U → (1/2)Nℏω. On the other hand, if our thermal energy is large, kBT ≫ ℏω, i.e. T ≫ ℏω/kB, we are in the high-temperature classical limit: think about what happens that way! It’s also good to think about how to calculate the heat capacity and entropy of this system from here.
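The two limits are easy to check numerically (a sketch in units ℏω = 1, kB = 1, N = 1):

```python
# U(T) for a single quantum harmonic oscillator, in units hbar*omega = 1, k_B = 1:
#   U = 1/(e^{1/T} - 1) + 1/2.

import math

def U(T):
    return 1.0 / (math.exp(1.0 / T) - 1.0) + 0.5

print(U(0.01))    # low T: freezes into the ground-state energy 1/2
print(U(100.0))   # high T: classical equipartition, U is close to k_B*T = 100
```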
One comment: we’ll deal with a lot of series in these kinds of calculations, so we should become comfortable with the relevant techniques!
39 April 29, 2019 (Recitation)
Today’s recitation is being taught by the TA.
Recently we’ve been discussing the canonical ensemble - as a first question, when do we use it?
Basically, whenever we have a system, we have some different states: for example, we can organize them by energy level. In a microcanonical ensemble, we consider a system that is cut off from all of its surroundings: no mass or heat transfer, so we have conservation of energy.
But in the real world, most systems are connected to the environment, even if we have a fixed number of particles.
A canonical ensemble is just the most basic example of this! The main idea is that the probability of finding the system at an energy εi is pi = e^{−βεi}/Z, where Z is the partition function. How do we derive this? In general, our system needs to maximize the entropy S({pi})/kB = −Σ pi ln pi.
We have the constraint Σ pi = 1, and we can also say that the “average” energy of our system is constant: writing that in terms of our probabilities, we have Σ piεi = U.
Fact 212
An idea here is that thermodynamics and statistical mechanics are connected by the idea of “averaging.”
Now that we have our constraints, we use Lagrange multipliers! Basically, given a function f that we want to maximize and a bunch of constraints of the form {gi(x) = 0}, it’s equivalent to extremizing over x and λ (the Lagrange multipliers) F(x) = f(x) − Σ λi gi(x).
To extremize a multivariable function, consider the gradient: we want ∂F/∂xi = ∂F/∂λj = 0 for all i, j. (The latter, by the way, just says that each gi(x) = 0.) So let’s apply this to our problem! We have
G({pi}) = −Σ_i pi ln pi − λ(Σ_i pi − 1) − β(Σ_i piεi − U).
Taking the derivative with respect to pi, ∂G/∂pi = −ln pi − 1 − λ − βεi = 0 ⟹ pi = e^{−1−λ−βεi}.
Meanwhile, taking the derivative with respect to λ just yields Σ pi = 1: this means we have to normalize our probability distribution, yielding pi = e^{−βεi} / Σ_a e^{−βεa}, as before!
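The statement “specify T, deduce U” can also be run backwards numerically: since ⟨E⟩(β) decreases monotonically in β, we can solve ⟨E⟩(β) = U for the multiplier β by bisection. A sketch with arbitrary toy levels (kB = 1):

```python
# Solve <E>(beta) = U_target by bisection; toy energy levels, k_B = 1.

import math

levels = [0.0, 1.0, 2.0, 3.0]

def mean_E(beta):
    weights = [math.exp(-beta * e) for e in levels]
    return sum(w * e for w, e in zip(weights, levels)) / sum(weights)

target_U = 0.8
lo, hi = 1e-6, 50.0      # <E>(lo) ~ 1.5 (near-uniform), <E>(hi) ~ 0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_E(mid) > target_U:
        lo = mid         # energy too high: system is too hot, raise beta
    else:
        hi = mid
beta_star = 0.5 * (lo + hi)

print(abs(mean_E(beta_star) - target_U) < 1e-9)  # True
```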
So we can actually define temperature T directly in terms of our Lagrange multiplier: β = 1/(kBT). Our partition function looks important, but can we do anything with it? First of all, we can write it in two different ways: Z = Σ_{states i} e^{−βεi} = Σ_{energies εn} gn e^{−βεn}.
(The gn degeneracy factor is pretty important here!) Let’s try to write this in terms of some quantities that we already know about: since our energy is U = ⟨E⟩ = Σ_i piεi = (1/Z) Σ_i εi e^{−βεi}, note that −(1/Z) ∂Z/∂β = (1/Z) Σ_i εi e^{−βεi}, so U = −∂(ln Z)/∂β.
Now that we have the energy, we can think about this in certain kinds of systems. A real magnet is a bunch of small atoms that are magnetic dipoles: microscopically, those dipoles are oriented in magnetic domains, and this order breaks down if the magnet heats up! (Basically, there’s no clear direction for the dipoles to point.) On the other hand, magnets at very low temperature have a defined order: there’s a quantity “magnetization,” the average dipole moment per particle, which decreases to 0 at some Curie temperature TC. We might see later on that the decay is actually square-root-like as we approach that point!
Example 213 So if we have a system with a bunch of magnetic spins, we have a total energy (in a two-state system) E = −( X µi)B.
Knowing B and knowing the temperature T, we want to think about the average magnetization m = ⟨P µi⟩ N .
We can think about this in another way: consider $\left\langle\sum_j\mu_j\right\rangle = \frac{1}{Z}\sum_{\{\mu_i\}}\left(\sum_j\mu_j\right)e^{\beta(\sum_i\mu_i)B} = \frac{1}{\beta}\frac{\partial\ln Z}{\partial B} = Nm$, and now we have what we want: we've found $m$ in terms of the partition function!
So looking at probability distribution, we want the fluctuation in U (∆U)2 = ⟨E2⟩−⟨E⟩2.
The second term here is already known: it is $\left(\frac{\partial\ln Z}{\partial\beta}\right)^2$, and then how can we find the first term? Take more derivatives! Note that $\frac{\partial^2\ln Z}{\partial\beta^2} = \frac{\partial}{\partial\beta}\left(-\frac{1}{Z}\sum\varepsilon_i e^{-\beta\varepsilon_i}\right)$, and now by the product rule, this is $\frac{1}{Z}\sum\varepsilon_i^2 e^{-\beta\varepsilon_i} - \frac{1}{Z^2}\left(\frac{\partial Z}{\partial\beta}\right)^2$.
The first term is now $\langle E^2\rangle$ and the second is $\langle E\rangle^2$, and therefore the fluctuations $(\Delta U)^2$ are exactly the second derivative $\frac{\partial^2\ln Z}{\partial\beta^2}$!
We can think of this in terms of other situations as well: remember that when we have our heat capacity $C = \frac{\partial U}{\partial T}$, we can relate $\frac{\partial}{\partial T}$ and $\frac{\partial}{\partial\beta}$ by the chain rule $\frac{\partial}{\partial T} = \frac{\partial\beta}{\partial T}\frac{\partial}{\partial\beta}$, and now $\frac{\partial\beta}{\partial T} = -\frac{1}{k_B T^2}$ by direct substitution.
Fact 214 It will be important to switch between β and T in the future!
So then $C = \frac{\partial U}{\partial T} = -\frac{1}{k_B T^2}\frac{\partial U}{\partial\beta} = \frac{1}{k_B T^2}\frac{\partial^2\ln Z}{\partial\beta^2} \implies (\Delta U)^2 = k_B T^2 C$.
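We can sanity-check $(\Delta U)^2 = k_B T^2 C$ numerically for a single two-level spin with energies $\pm\varepsilon$ (natural units with $k_B = \varepsilon = 1$ are an assumption of this sketch):

```python
import math

kB, eps = 1.0, 1.0   # natural units; a single spin with energies +/- eps

def U(T):
    # mean energy of a two-level spin: U = -eps * tanh(beta * eps)
    return -eps * math.tanh(eps / (kB * T))

T = 0.7
# exact fluctuation: <E^2> - <E>^2 = eps^2 * (1 - tanh^2(beta * eps))
fluct = eps**2 * (1.0 - math.tanh(eps / (kB * T))**2)
# heat capacity from a central finite difference of U(T)
h = 1e-5
C = (U(T + h) - U(T - h)) / (2 * h)
```

The two sides agree to the accuracy of the finite difference, as the identity requires.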
C is a response function: we can generally do something to our system and see how this changes the properties. The equation, then, connects theoretical quantities to measurable experimental quantities!
As an exercise, we can prove that if χ = ∂⟨M⟩ ∂B , we have a similar relation: then (∆M)2 = kBT 2χ.
Finally, is there a way for us to compute the entropy from our partition function? We have S = −kB X pi ln pi = −kB X pi(−ln Z −βεi).
This can be rewritten as $S = k_B\beta\sum p_i\varepsilon_i + k_B\ln Z\sum p_i$.
The sum of the pis is 1, and the sum of the piεis is our energy ⟨E⟩, so we have S = 1 T ⟨E⟩+ kB ln Z.
This should look familiar: since the Helmholtz free energy is $F = U - TS$, we can rewrite $F = -k_B T\ln Z = -\frac{1}{\beta}\ln Z$.
But now knowing F and U, we are able to find S, and then we can get whatever else we want! In fact, we can also use this to prove the first and second law.
Example 215 Let’s go back to our two-state system.
We can say that our energy is $E = -B\sum\mu_i = -\left(\sum\sigma_i\right)\mu B$, where each $\sigma_i$ is $\pm 1$. Intuitively, we should expect independence, so $Z$ should be multiplicative. Indeed, our collection of spins $\{\sigma_i\} = (\pm 1,\cdots,\pm 1)$ gives $2^N$ configurations, and we can write $Z = \sum_{\{\sigma_i\}} e^{\beta\mu B(\sum\sigma_i)}$; we can expand this out (since $\sum_{i,j} a_i b_j = \sum a_i\sum b_j$) as $\sum_{\{\sigma_i\}} e^{\beta\mu B\sigma_1}e^{\beta\mu B\sigma_2}\cdots = \prod_i\left(\sum_{\sigma_i=\pm 1} e^{\beta\mu B\sigma_i}\right)$.
But we always have the same $B$, $\mu$, $\beta$, so each factor is equal: this means we have $Z = \left(e^{-\beta\mu B} + e^{\beta\mu B}\right)^N = 2^N\cosh^N(\beta\mu B)$, and we've found $Z$ explicitly.
Finally, looking at the magnetization one more time, ⟨M⟩= 1 β ∂ln Z ∂B = Nµ tanh(βµB).
So $m$, the average magnetization per spin, runs between $-\mu$ and $\mu$: a strong enough magnetic field will align all the spins in the same direction! We can also expand the curve around $B = 0$ to figure out the linear relationship between $m$ and $B$ at small $B$.
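For small $N$ we can verify both $Z = 2^N\cosh^N(\beta\mu B)$ and $\langle M\rangle = N\mu\tanh(\beta\mu B)$ by brute force over all $2^N$ spin configurations (the parameter values below are arbitrary):

```python
import math
from itertools import product

N, mu, B, beta = 6, 1.0, 0.3, 2.0

Z, M_acc = 0.0, 0.0
for spins in product((-1, 1), repeat=N):
    M = mu * sum(spins)           # total moment of this configuration
    w = math.exp(beta * M * B)    # Boltzmann weight, since E = -M*B
    Z += w
    M_acc += M * w
M_avg = M_acc / Z
```

The enumeration is exponential in $N$, which is exactly why the factorized closed form is so useful.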
40 April 30, 2019 There is a problem set due this week on Friday - it covers a lot of concepts about the canonical ensemble, and each one is a good way to put what we have learned into practice! Today, we’ll look at more examples of canonical ensembles and see the applications to black-body radiation and other topics. (This is because we can think of black-body radiation as a bunch of oscillators, and we can learn about their equation of state to deduce further information.) 40.1 Back to the two-level system Let’s again imagine that we have N particles, each with two energy levels. This can be thought of as assigning an up or down arrow to each of N spots on a lattice - the particles are then distinguishable, because we can identify them by their position.
Let’s say the up and down spin have energy levels of ε 2 and −ε 2, respectively. We found last time that the partition function for one particle was Z1 = e−εβ/2 + eεβ/2 = 2 cosh βε 2 , and then the partition function for all N particles, by independence, is just Z = ZN 1 = 2 cosh βε 2 N .
We can now calculate the thermodynamic properties of the system: first of all, ln Z = N ln 2 cosh βε 2 This actually gives us the other quantities that we want: we can calculate U = −Nε 2 tanh ε 2kBT , S = N − ε 2kBT tanh ε 2kBT + kB ln 2 cosh ε 2kBT .
We’ll also find that the heat capacity CV is just N times the heat capacity of a single particle, and the fluctuation standard deviation of U is σ(U) = kBT r CV kB = √ Nε/2 cosh ε 2kBT .
Since $U \propto N$ and $\sigma \propto \sqrt{N}$, we have concentration of the energy!
40.2 Another look at the harmonic oscillators Remember that for a similar system we examined (the quantum harmonic oscillator), we have $Z_1 = \sum_{n=0}^\infty \exp\left(-\beta\hbar\omega\left(n + \frac{1}{2}\right)\right)$, and then again by independence, $Z = Z_1^N \implies U = -\frac{\partial(\ln Z)}{\partial\beta} = N\hbar\omega\left(\frac{1}{e^{\beta\hbar\omega}-1} + \frac{1}{2}\right)$.
A good question here: what do we know about temperature in the limits? As T →0, β →∞, and that means U = Nℏω 2 . This is the ground state: all harmonic oscillators are in their ground state of ℏω 2 .
Meanwhile, when $T \gg \frac{\hbar\omega}{k_B}$, so the temperature is large enough for $\beta\hbar\omega$ to go to 0, $U$ becomes large. We can define $x = \beta\hbar\omega$: now expanding because our quantity blows up at 0, $\frac{1}{e^x-1} = \frac{1}{x + \frac{x^2}{2} + \frac{x^3}{6} + \cdots} = \frac{1}{x}\cdot\frac{1}{1 + \frac{x}{2} + \frac{x^2}{6} + \cdots}$.
Now we can neglect higher-order terms and this simplifies to $\frac{1}{x}\left(1 - \frac{x}{2} + O(x^2)\right) = \frac{1}{x} - \frac{1}{2} + O(x)$.
Plugging this back in, as $T \to \infty$, we have $U \to N\hbar\omega\left(\frac{1}{\beta\hbar\omega} - \frac{1}{2} + \frac{1}{2}\right) = Nk_B T$, and this is the famous result from the equipartition theorem! (Harmonic oscillators contribute a potential and a kinetic quadratic term to the Hamiltonian.) So now, how can we determine the probability distribution for a single oscillator? We have $p(E) = \frac{1}{Z_1}e^{-E/(k_B T)}$ (as a property of the canonical ensemble in general), and plugging in our specific value of $Z_1$, $p(E) = (1 - e^{-\beta\hbar\omega})e^{\beta\hbar\omega/2}e^{-\beta(n+\frac{1}{2})\hbar\omega}$, and this is actually a probability distribution over our $n$s: $p(E) = p(n) = (1 - e^{-\beta\hbar\omega})e^{-n\beta\hbar\omega}$.
This is actually a geometric distribution: if we take a = e−βℏω, the probability p(n) = (1 −a)an.
This is indeed normalized: $\sum_n p(n) = 1$, and the average value of $n$, called the average occupation number of a harmonic oscillator, is $\langle n\rangle = \frac{1}{e^{\hbar\omega/(k_B T)}-1}$.
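A quick numeric check of the geometric form $p(n) = (1-a)a^n$ with $a = e^{-\beta\hbar\omega}$ - the value $\beta\hbar\omega = 0.5$ below is an arbitrary choice for the sketch:

```python
import math

x = 0.5                                    # x = beta * hbar * omega (arbitrary)
a = math.exp(-x)
p = [(1 - a) * a**n for n in range(200)]   # truncated; the tail is ~e^-100

# average occupation number from the distribution directly
n_avg = sum(n * pn for n, pn in enumerate(p))
```

The mean of a geometric distribution is $\frac{a}{1-a}$, which is exactly $\frac{1}{e^{\beta\hbar\omega}-1}$.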
131 Finally, let’s compute the heat capacity: C −∂U ∂T = Nℏω ∂ ∂T 1 eℏω/(kBT ) −1 + 1 2 which simplifies to = NkB ℏω kBT 2 eℏω/(kBT ) (eℏω/(kBT ) −1)2 .
Again, let’s look at the limits: when T →∞, all ℏω kBT terms go to 0 (define that fraction to be y). Specifically, (ey −1)2 ∼(y + O(y 2))2 ∼y 2, ey ∼1 + · · · ∼1, and thus as T →∞, we have C →NkB.
This is a familiar result as well - at high temperatures, we’re expecting U = NkBT, so it makes sense for C = NkB.
Remember also that because we have a gap between the ground state and next lowest energy level, we have gapped behavior where C becomes exponentially small at low temperature T.
The distinguishing factor here from a two-level system is that there are always higher and higher energy levels!
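Both limits of the oscillator heat capacity can be checked from the closed form above; this sketch works per oscillator, in units where $k_B = \hbar\omega = 1$ (an assumption for the check):

```python
import math

def C_per_osc(T, kB=1.0, hw=1.0):
    # C/N = kB * y^2 * e^y / (e^y - 1)^2 with y = hbar*omega/(kB*T)
    y = hw / (kB * T)
    return kB * y * y * math.exp(y) / (math.exp(y) - 1.0)**2

high = C_per_osc(100.0)   # classical limit: should approach kB = 1
low = C_per_osc(0.04)     # gapped limit: exponentially small
```

At high $T$ we recover equipartition ($C \to Nk_B$), and at low $T$ the gap suppresses $C$ exponentially.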
40.3 Deriving the ideal gas law again Let’s try to do this derivation without needing to count states!
We have our N indistinguishable particles, so we found last time that the partition function (by overcounting arguments) is Z = 1 N!(Z1)N.
To calculate $Z_1$, remember that we need to do our density of states argument: by the semi-classical density of states argument, $Z_1 = \sum_j e^{-E_j/(k_B T)} \to \int\frac{d^3x\,d^3p}{(2\pi\hbar)^3}e^{-p^2/(2mk_B T)}$.
Evaluating this integral, integrating out the d3x gives us a volume, and writing d3p = 4πp2dp by spherical symmetry, Z1 = V 4π (2πℏ)3 Z ∞ 0 dpp2e−p2/(2mkBT ).
Using the change of variables $y^2 = \frac{p^2}{2mk_B T}$, the integral now becomes $Z_1 = V\,4\pi\left(\frac{2mk_B T}{4\pi^2\hbar^2}\right)^{3/2}\int_0^\infty dy\,y^2 e^{-y^2}$, and the known integral has value $\frac{\sqrt{\pi}}{4}$. This gives us a final partition function for one particle of $Z_1 = \left(\frac{mk_B T}{2\pi\hbar^2}\right)^{3/2}V$.
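The Gaussian moment quoted here, $\int_0^\infty y^2 e^{-y^2}\,dy = \frac{\sqrt{\pi}}{4}$, can be confirmed with simple trapezoidal quadrature (truncating at $y = 10$, where the integrand is negligible):

```python
import math

def f(y):
    return y * y * math.exp(-y * y)

n, b = 100000, 10.0
h = b / n
# trapezoidal rule on [0, b]
gauss = (0.5 * (f(0.0) + f(b)) + sum(f(i * h) for i in range(1, n))) * h
```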
Notice that the partition function is dimensionless! That means that the $\frac{mk_B T}{2\pi\hbar^2}$ term should have units of $\frac{1}{\text{length}^2}$, and this helps us define a new length scale $\lambda_D = \sqrt{\frac{2\pi\hbar^2}{mk_B T}} = \frac{2\pi\hbar}{\sqrt{2\pi mk_B T}} = \frac{h}{\sqrt{2mE_{\text{thermal}}}}$.
This is called the thermal de Broglie wavelength: how do we compare it to other length scales? Thinking of our ideal gas as noninteracting point particles, we have an average inter-particle distance $\left(\frac{V}{N}\right)^{1/3}$.
The main point is this: if we have V N 1/3 ≫λD, we have a classical system, and we can use Maxwell-Boltzmann statistics to evaluate our system.
But if V N 1/3 ≪λD (for example, when we start lowering our temperature), quantum mechanical effects are dominant, and we describe the system either with Bose-Einstein statistics or Fermi-Dirac statistics, based on whether we have distinguishability.
Example 216 Consider an electron at room temperature. Then we have $\lambda_D = \frac{h}{\sqrt{2\pi m_e k_B T}} \approx 4.3\text{ nm}$, which is pretty small.
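Plugging in the physical constants (rounded CODATA values) reproduces this few-nanometer scale:

```python
import math

h  = 6.626e-34   # Planck constant, J*s
kB = 1.381e-23   # Boltzmann constant, J/K
me = 9.109e-31   # electron mass, kg
T  = 300.0       # room temperature, K

# thermal de Broglie wavelength of an electron at room temperature
lam = h / math.sqrt(2 * math.pi * me * kB * T)
```

For actual gas molecules the mass is thousands of times larger, so $\lambda_D$ is correspondingly smaller.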
This means that in our ideal gas situation, we should be using Maxwell-Boltzmann statistics! So now we have $Z_1 = \frac{V}{\lambda_D^3} \implies Z = \frac{Z_1^N}{N!}$, and now let's try to calculate our thermodynamic quantities. By Stirling's approximation, $U = -\frac{\partial}{\partial\beta}\ln Z = -\frac{\partial}{\partial\beta}\left(N\ln Z_1 - N\ln N + N\right)$, and if we work this out, we'll derive the well-known $U = \frac{3}{2}\frac{N}{\beta} = \frac{3}{2}Nk_B T$.
Calculating the entropy for this system, $S = \frac{U}{T} + k_B\ln Z = \frac{3}{2}Nk_B + k_B\ln\frac{Z_1^N}{N!}$. If we use Stirling's approximation again, we'll find that $S = Nk_B\left(\frac{3}{2} + \ln Z_1 - \ln N + 1\right)$, and we can substitute in to find $S = Nk_B\left(\frac{3}{2}\ln\frac{mk_B T}{2\pi\hbar^2} + \ln\frac{V}{N} + \frac{5}{2}\right)$, which is the familiar Sackur-Tetrode equation!
40.4 The Maxwell-Boltzmann equation Finally, we want to look at the probability distribution for the kinetic energy of a specific molecule in an ideal gas.
Using the semi-classical limit argument again, now that we know the partition function $Z_1$, $p(E) = \frac{1}{Z_1}\int\frac{d^3x\,d^3p}{(2\pi\hbar)^3}e^{-p^2/(2mk_B T)}\delta\left(E - \frac{p^2}{2m}\right)$, since we want to pick out those energies equal to $\frac{p^2}{2m}$ specifically. Making the same simplifications and plugging in $Z_1$, this becomes $\left(\frac{2\pi\hbar^2}{mk_B T}\right)^{3/2}\frac{1}{V}\frac{V}{(2\pi\hbar)^3}\cdot 4\pi\int_0^\infty dp\,p^2 e^{-p^2/(2mk_B T)}\delta\left(E - \frac{p^2}{2m}\right)$, and evaluating the delta function contributes a factor $\frac{m}{|p|}$.
Now since we can write $|p| = \sqrt{2mE}$ in terms of kinetic energy, this all simplifies to $p(E) = 2\pi\left(\frac{1}{\pi k_B T}\right)^{3/2}\sqrt{E}\,e^{-E/(k_B T)}$.
The √ E factor essentially tells us about the momentum: the density of states is proportional to √ E. This is known as the Maxwell-Boltzmann distribution for the kinetic energy of a molecule in an ideal gas situation, and it works whenever we have V N 1/3 ≫λD.
We can find the average energy here: $\langle E\rangle = \int E\,p(E)\,dE = \frac{3}{2}k_B T = \left\langle\frac{1}{2}mv^2\right\rangle$, which gives us the root-mean-square velocity: $v_{\text{rms}} = \sqrt{\langle v^2\rangle} = \sqrt{\frac{3k_B T}{m}}$.
This also gives us the velocity distribution $p(v) = \int_0^\infty dE\,p(E)\,\delta\left(v - \sqrt{\frac{2E}{m}}\right)$, and through some routine algebra, we get the Maxwell-Boltzmann velocity distribution $p(v) = 4\pi\left(\frac{m}{2\pi k_B T}\right)^{3/2}v^2 e^{-mv^2/(2k_B T)}$.
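We can check numerically that this speed distribution is normalized and reproduces $v_{\text{rms}} = \sqrt{3k_B T/m}$ (natural units $m = k_B = T = 1$ are an assumption of the sketch):

```python
import math

m = kB = T = 1.0

def p_v(v):
    # Maxwell-Boltzmann speed distribution
    pref = 4 * math.pi * (m / (2 * math.pi * kB * T))**1.5
    return pref * v * v * math.exp(-m * v * v / (2 * kB * T))

# Riemann sums on [0, 30]; the integrand is negligible beyond that
n, b = 200000, 30.0
h = b / n
norm = sum(p_v(i * h) for i in range(1, n)) * h          # should be 1
v2   = sum((i * h)**2 * p_v(i * h) for i in range(1, n)) * h  # <v^2>
```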
Next time, we’ll start talking about black-body radiation. Look at the problem set to examine some of these systems more carefully!
41 May 1, 2019 (Recitation) 41.1 General comments Let’s start with some general remarks about the partition function. We’re in a situation where we know about states, entropy, and the general behavior of nature: now, we’ll try to tie that in to the canonical ensemble using our function Z = X i e−βEi.
This was derived in class: in particular, the probability of a state $i$ is $\frac{e^{-\beta E_i}}{Z}$, and then the average value of $E$ is $\langle E\rangle = \sum_i E_i\frac{e^{-\beta E_i}}{Z}$, which is connected to a $\beta$-derivative. We also know that $S = -k_B\sum p_i\ln p_i$ can be written in terms of the derivatives, as can free energy and all our other variables! So it basically has all the knowledge we need about our system.
So in relation to the problem set, we always “sum over states,” even if some of them may look different from others!
What makes the partition function easy? The idea is that having N indistinguishable particles gives a partition function Z = 1 N!ZN 1 = ⇒ln Z = N ln Z1 −ln N!, and it’s often much easier to calculate Z1 for one particle. This wasn’t true in the microcanonical ensemble: since we had a fixed total U, we didn’t have independence between the energy states of our particles there, so our calculations were more complicated! We don’t have to make the ugly approximation for the 3N-dimensional sphere’s surface area, as we did when deriving Sackur-Tetrode with the microcanonical ensemble.
41.2 Particles in a potential There’s a lot of different ways we can work with a particle with a potential and kinetic energy. The potential energy can be constant (a box), linear (for example, due to gravity on Earth), or quadratic (a harmonic oscillator) in terms of x, and the kinetic energy can be quadratic or linear in terms of p. Finally, we can pick how many degrees of freedom d we have for our particle.
The idea is that many problems are just some combinations of these parameters! We’re going to generalize so that we can see the big picture.
First of all, if our energy H(⃗ x, ⃗ p) can be written in terms of these phase variables, our equation for the partition function Z1 (for one particle) is just (semi-classically) Z = Z d3xd3p h3 e−βH(⃗ p,⃗ x).
The first term’s 3 can be replaced with the number of degrees of freedom we have.
So now we just insert our expressions for kinetic and potential energy into H, but let’s not integrate yet. What are we trying to work with? For example, if we’re trying to find the internal energy, we don’t need Z: instead, we want U = −∂ ∂β (ln Z).
So all the normalization factors in Z can be ignored if all we want is U: all of the Gaussian integrals give multiplicative factors, and ln Z turns those into constants, which become zero under β. Given this, let’s rederive the relevant part of this calculation for the ideal gas. The d3x integration gives a volume, and we don’t care about the h3 either, so we’re left with Z1 ∝ Z ∞ −∞ d3pe−β(p2 x +p2 y +p2 z )/(2m).
This is essentially just three independent integrals! To work with this, let's define a new variable $\xi^2 = \frac{\beta p_x^2}{2m}$: then $d\xi = \sqrt{\beta}\,dp_x\cdot C$ - here we only care about the $\beta$-dependence, since we take the derivative with respect to $\beta$ later on.
This gives us a scale factor of $\beta^{-1/2}$ in each direction! So we're left with $Z_1 = \left(\beta^{-1/2}\int d\xi\,e^{-\xi^2}\right)^3 = \beta^{-3/2}\cdot C$, where $C$ has the volume, the $h^3$ term, and so on - those aren't relevant right now! Now $\ln Z$ is some constant plus $-\frac{3}{2}\ln\beta$, and now $U = -\frac{\partial}{\partial\beta}\left(-\frac{3}{2}\ln\beta\right) = \frac{3}{2\beta} = \frac{3}{2}k_B T$, as we expect! (This also gives us things like $C_V = \frac{3}{2}k_B$ for free.) Fact 217 If we need something like the entropy, then we do need the constants we've tossed out along the way, but the internal energy doesn't depend on them!
So we’ve now looked at one example of a potential: let’s now think about the relativistic gas, where the kinetic energy is c|p|. Most of our derivation stays the same, except that now Z1 = Z ∞ −∞ d3pe−βc|p|/2m.
Defining $\vec{\xi} = \beta c\vec{p}$, this converts us to an integral proportional to $\beta^{-3}\int d^3\xi\,e^{-|\xi|}$ (with the point being that we want to convert to generic, normalized integrals which just give us fixed constants!).
Then almost everything stays the same: we just have U = 3kBT and CV = 3kB now.
In both of these cases, we’ve found that our partition function Z1 = (c)β−x, where x = d 2 is half the number of degrees of freedom for the quadratic kinetic energy and x = d is the number of degrees of freedom for the linear kinetic energy.
But we’ve only been dealing with cases where our potential is constant: what if we have a linear or quadratic potential? Well, the situation is exactly symmetric! We can make exactly the same arguments and substitutions, so in general, we’ll actually have Z1 = (c)β−x+y, where y = 1 2d if we have a harmonic oscillator (quadratic) potential and d if we have a linear potential!
Fact 218 The whole point is that this is why statistics is useful: we can just look at how everything scales with β. This gives us (at least in the semi-classical limit) the internal energy and many other quantities.
So now if we want the specific heat for a relativistic gas in two dimensions which lives in a potential $V(\vec{x}) = \alpha|x|^{10}$, we know that the scaling argument gives us (since $d = 2$) $x + y = 1\cdot 2 + \frac{1}{10}\cdot 2 = \frac{11}{5}$.
So the partition function is just going to be proportional to $\beta^{-11/5}$, giving $U = \frac{11}{5}k_B T$ per particle and hence a specific heat of $\frac{11}{5}k_B$, with very few calculations!
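This scaling rule is easy to encode - a sketch (the function name and interface are made up for illustration) where each coordinate entering $H$ as $|q|^p$ contributes $\frac{1}{p}k_B T$ to the energy:

```python
kB = 1.0  # natural units (an assumption of the sketch)

def u_per_particle(d, kin_power, pot_power, T):
    """Generalized equipartition: U/N = d*(1/kin_power + 1/pot_power)*kB*T
    for kinetic energy ~ |p|^kin_power and potential ~ |x|^pot_power in d dims."""
    return d * (1.0 / kin_power + 1.0 / pot_power) * kB * T

# relativistic gas (E ~ c|p|, power 1) in d = 2 with V(x) = a*|x|^10
u = u_per_particle(2, 1, 10, 1.0)
```

Dividing by $T$ gives the specific heat, $\frac{11}{5}k_B$ per particle in this example.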
Fact 219 This is a generalization of the equipartition theorem, which tells us that each quadratic term in the Hamiltonian contributes a factor of $\beta^{-1/2}$ to the partition function, which gives a specific heat of $\frac{1}{2}k_B$.
42 May 2, 2019 In terms of exam regrades, everything will be updated later today. The problem set is due tomorrow!
We’re going to discuss black-body radiation today: the main idea is that being able to absorb light at all frequencies leads to interesting behavior in terms of the spectrum of colors. In particular, we’ll start understanding ideas like why we humans can see in the visible light range! (In particular, things would be very different if the sun was at a different temperature.) Looking ahead, we’ll look at photons as particles of a gas, and we’ll consider why the classical model of this gas isn’t good enough to understand that spectrum, particularly in the UV range! This was part of the birth of quantum mechanics, and it’ll let us actually plot the energy density with respect to our frequency ω correctly.
42.1 Black-body radiation As a reminder, quantum mechanical particles can be described by a wavefunction, and the energy eigenstates (for a particle in a box) are given by ψk = 1 √ V ei⃗ k⃗ x.
In three dimensions, we have ⃗ k = (k1, k2, k3), where each ki = 2πni L is in terms of the mode numbers ni ∈Z. It turns out this can also describe the quanta of the electromagnetic field! Each mode can be thought of associated with a quantum harmonic oscillator, and we need all plane waves to travel at the speed of light, so we have ωk = c|k|.
(Looking at the polarization, there are two modes per wavevector ⃗ k.) So we’ll treat our black-body system as a gas of photons, and let’s see what happens as our temperature T increases! We’ll start by essentially counting states: how many states are there available to a single photon whose energy is between E and E + dE? We’ll calculate this in terms of frequency: dN dω dω = (4πk2) dk dω dω 2V (2π)3 , where k2 = ω2 c2 and dk dω = 1 c . What do each of these terms mean? The first term 4πk2 is the surface area of a sphere of radius k, the second term is the thickness of a spherical shell for dω, and the last term is the density of a single particle plane-wave eigenstate wavevector (where the factor of 2 comes from the fact that there are 2 directions of polarization). So now simplifying, dN dω dω = V ω2 π2c2 dω, 137 We want to calculate the partition function, so let’s look at a particular frequency ω for our photons and then integrate later. This is essentially the partition function for a harmonic oscillator, but we’ll disregard the zero point energy: it’s a constant, and in practice, we only care about energy differences (unless we’re talking about dark energy and so on).
This means that our partition function simplifies to Zω = 1 + e−βℏω + e−2βℏω + · · · = 1 1 −e−βℏω .
We want to sum this over all possible frequencies - theoretically, we don’t have any upper limit (though in solids, we do have some physical limit due to the properties). So now we can integrate out ln Z = Z ∞ 0 dω dN dω ln Zω, which simplifies to ln Z = −V π2c3 Z ∞ 0 dωω2 ln(1 −e−βℏω).
We’ll leave the partition function like this, but we can already calculate some quantities. First of all, the average internal energy can be found via ⟨E⟩= U = −∂ ∂β ln Z = V ℏ2 π2c3 Z ∞ 0 ω3 eβℏω −1dω, (since we can swap the integral and derivative). In differential form, this says that E(ω)dω = V ℏ2 π2c3 ω3 eβℏω −1dω, which is the amount of energy carried by photons with frequency between ω and ω+dω. This is actually Planck’s distribution! If we plot the integrand with respect to ω, this has a single peak: at low frequency, the distribution is consistent with the equipartition theorem, but then the energy gaps at higher frequencies bring I back to 0 instead of going offto infinity.
Fact 220 Calculating dE(ω) dω gives Wein’s radiation law, which is a good exercise.
By the way, when the temperature is around 6000 Kelvin, which is the temperature of the Sun, the peak is mostly in the visible light range, which makes sense!
Fact 221 This is known as a “radiation gas,” which has different properties from the normal gasses that we’ve been dealing with.
42.2 Doing some derivations Looking again at our internal energy integral, we have U = Z ∞ 0 dω ω2V π2c3 ℏω 1 eβℏω −1 .
Here, the first term ω2V π2c3 is the density of modes per frequency. The ℏω is a kind of energy unit excitation, and combined with the last term, this gives the mean thermal excitation energy per mode. Substituting in x = βℏω, we 138 actually want to calculate U = V π2c3 kBT ℏ3 4 Z ∞ 0 x3dx ex −1.
That last integral is actually known because there is no upper limit on the frequency - otherwise, it’s only possible to numerically approximate it! It turns out that this is = Γ(4)ζ(4) = 6 · π4 90 = π4 15, and therefore we can think about the energy density of our gas of photons U V = ξ = π2k4 B 15ℏ3c3 T 4.
In other words, we can just write down this law as U V ∝T 4 .
Example 222 So we can imagine having a box of photons with a hole: how can we find the energy flux?
Flux is the rate of energy transfer, and it will just be ξc 4 , where c is the speed of light: the 1 4 factor comes from us not having a point source, and where the size of the hole is larger than the wavelength of the photons. This can then be written as σT 4 , and this proportionality is called the Stefan-Boltzmann law. Here, σ = π2k4 B 60ℏ3c2 .
Example 223 Let’s see if we can find the pressure of this gas. (It will be interesting, because the pressure only depends on the temperature!) First of all, let’s calculate the Helmholtz free energy F = −kBT ln Z = V kBT π2c3 Z ∞ 0 dωω2 ln(1 −e−βℏω).
Again letting x = βℏω and doing similar simplifications, F = V (kBT)4 πc3ℏ3 Z ∞ 0 dxx2 ln(1 −e−x), and now by integration by parts, the integral simplifies to Z ∞ 0 dx 1 3 d dx x3 ln(1 −e−x) = 1 3x3 ln(1 −e−x ∞ 0 −1 3 Z ∞ 0 dx x3 ex −1.
We’ve seen the second term before: it evaluates to π4 45, and therefore F = −π2(kBT)4V 45ℏ3c3 = −1 3U.
139 Now we can find our pressure: P = −∂F ∂V T = −U 3V = 4σ 3c T 4.
This equation of state tells us that interestingly, this gas’s pressure only depends on the temperature! In particular, the pressure P is 1 3 of the energy density, and this is an interesting fact in cosmology.
42.3 Continuing on Let’s try calculating some other thermodynamic quantities: the entropy of our gas is S = −∂F ∂T V = 16V σ 3c T 3, the specific heat CV = ∂U ∂T V = 16V σ c T 3, and to find the number of photons, dN dωdV = ω2 π2c3 1 eβℏω −1, and integrating out V and ω, we find that N = V π2c3 Z ∞ 0 dω ω2 eβℏω −1 ≈1.48σT 3V kBc .
We can also now calculate the fluctuations in energy: normalizing the standard deviation with respect to U, σ(E) U = kBT 4 c σT 4V s 16σT 3V kBc = r kBc 6T 3V = r 1.48 N .
These fluctuations are very small as N grows large! So we can basically treat this from the microcanonical point of view, and the thermodynamic behavior would look basically the same.
Question 224. So what does all of this mean?
First of all, this shows that classical mechanics isn't enough to describe our systems. If we take $\hbar \to 0$ - that is, we don't assume that we have quantization - then $\frac{d^2U}{d\omega\,dV} = \frac{\hbar\omega^3}{\pi^2c^3}\frac{1}{e^{\hbar\omega\beta}-1} \to \frac{\omega^2}{\pi^2c^3}k_B T$ as $\hbar \to 0$. This quadratic growth is consistent with the equipartition theorem for $\hbar\omega \ll k_B T$, but then quantum mechanical effects give an exponential decay eventually, which classical mechanics can't predict. Also, this gas is very interesting, because local properties only depend on temperature: $\frac{U}{V}$, $\frac{N}{V}$, $\frac{S}{V}$, and $P$ all depend on $T$ and not $V$!
Finally, this was a gas of photons, but we can also think about vibrations of solids in terms of phonons instead.
The same derivations can be made, but we just have some different quantities: c is now the speed of sound (since we care about sound modes), and also, we have a bounded integral instead of integrating from 0 to ∞. Basically, the wavelength can’t be smaller than the inter-atom distance! That makes it interesting to think about how CV of a solid changes as we decrease the temperature - this will lead us to Einstein’s law.
An exercise: what is the chemical potential µ for this radiation gas? It will turn out to be zero, and we’ll see a connection when we discuss bosons later on!
43 May 6, 2019 (Recitation) 43.1 Homework questions In one of the problems of the problem set, we have particles oriented with a dipole moment $\vec{\mu}$, which gives us an energy $-\vec{E}\cdot\vec{\mu} = -|E||\mu|\cos\theta$.
We want to find the partition function, which is an integral over states, schematically $Z_1 = \int\frac{dx\,dp}{h}e^{-\beta H}$, where $H$, the Hamiltonian, is the kinetic plus potential energy that we have if we use the semi-classical approximation.
The main differences here are that we have two-dimensional space (and therefore two-dimensional momentum space as well), so we really want Z1 = Z d2xd2p h2 e−βH.
What’s special is that we are integrating over angles θ and π, and we also have the canonical momenta pθ and pφ as a result.
But now we can integrate our cos θ out through our polar coordinates, and everything works out! (We will get a cos θ in the exponent.) In a different problem, when we give a gravitational potential energy −mgh to each particle, that just adds a e−βmgh to each term. This is generally solved more easily with the grand canonical ensemble (where we are actually allowed to exchange particles), but if we are to solve the problem in the way that we know, the e−βmgh factor comes out of the z-integral. Now we just get an additive constant in F (because we take ln Z) and the rest of the problem looks like an ideal gas!
Finally, considering the polymer chain, the main concept is to treat each individual monomer separately! Since we have a classical system, we can just compute Z1 for one monomer and take the Nth power. We basically have a bunch of two-level systems, for which the partition function is easy to calculate!
Now all the degeneracies of the form N n are accounted for in the binomial expansion of the two-level system 1 + e−β∆, raised to the Nth power. We don’t even need to do the combinatorics from the microcanonical ensemble!
43.2 Clicker questions Question 225. Canonical versus microcanonical ensembles: do both ensembles have the same properties?
The answer is yes! We’ll unfold more details about this soon.
Question 226. Do both ensembles have the same number of microstates?
No! In the microcanonical ensemble, we're given a total energy $E$ in which all our states must reside, but in the canonical ensemble, we're given more freedom. Here's a way to think about it: let's say we divide a system into two halves with $\frac{N}{2}$ particles each. Then the total number of microstates is $\Gamma_{\text{total}} = \Gamma_{\text{Left}}\Gamma_{\text{Right}}$, but this isn't quite correct in general because we might not have exactly $\frac{N}{2}$ on each side! (Think of "fixing the number of particles on the left side" as "fixing the energy of our system" as in the microcanonical ensemble.) It turns out in general that the right order of magnitude comes from the Gaussian fluctuation: $\Gamma_{2V} = (\Gamma_V)^2\sqrt{N}$.
In this case, it is easy to count microstates, but it may be more difficult in other versions! Well, the reason we don’t worry about it too much is because taking log often makes the √ N negligible. We really only care about those microstates with non-negligible probability, so the Binomial distribution with mean N 2 and standard deviation on the order of √ N can be viewed as basically Gaussian, and we only care about those values within a few standard deviations.
So what is the number of microstates in a microcanonical distribution? We often have a density of states ∂N(E) ∂E , but that’s a classical description and therefore can’t be used directly for counting. So we need to have some energy width δE, and we often didn’t specify what that was!
Fact 227 Usually we don’t need to because the δE cancels out later, or because it only adds a negligible logarithmic term.
There was one example where it didn’t need to be specified: given a harmonic oscillator, there are only specific states at width ℏω apart, so perfect, identical harmonic oscillators do actually have some “exact number of states.” So in our equation Γ(E) = ∂N(E) ∂E δE, we can think of “δE = ℏω′′ as an effective width between our energies.
Question 228. So what’s the energy gap that we’ve assumed in the canonical ensemble?
We have a Boltzmann factor e−βE that exponentially decreases with E, and we have a density of states that increases very rapidly. Putting these together, we get a sharp peak at the average energy ⟨E⟩, which is why we say that the microcanonical and canonical ensembles are equivalent for lots of purposes. So how broad is the distribution?
Well, we calculated this in class: we can find ⟨E⟩and ⟨E2⟩, which allow us to find σE = kBT r C kB where C is the specific heat. Thinking about this for an ideal gas: C ∝NkB, so this is proportional to √ N. This is the same factor that’s been popping up in all the other situations, and that’s because that’s the right factor for fluctuations! Now it’s the same idea: √ N is small compared to N, so it can be neglected.
44 May 7, 2019 Some preliminary announcements: homework will be graded soon, and we should check the website to make sure everything is accurate. The last graded problem set is due on Friday! The next one (on photons, phonons, gases) is not graded but is good practice for the final. Finally, some past finals will be posted soon - there will be more problems on the final because it is longer (3 hours), but some fraction will be taken from past exams.
44.1 Overview Last time, we went over black-body radiation by thinking of photons as quanta of the electromagnetic field. We find that raising the temperature of a black-body gives a spectrum of radiation, and we can figure out the thermodynamic quantities of this body using the canonical ensemble!
142 We’re essentially going to go over the physics of solids today, examining quantities like the heat capacity with different models. We’ll also consider what happens when we look at the extremes of temperature: for example, what if the de Broglie wavelength is comparable in size to the inter-atom distance? That’s where the grand canonical ensemble comes in!
44.2 Phonons Our goal, ultimately, is to consider the heat capacity C of a solid. A phonon is essentially a quanta of vibrational mechanical energy!
Fact 229 We can discretize sound waves in solids using these quasi-particles. The energy of a phonon is E = ℏω = ℏkcs, where cs is the speed of sound.
We’ll be working with a perfect crystal of atoms as our system for convenience. The first thing we want to do is to consider the density of states of our phonons dN dω dω = 3V 2π2c2 s ω2dω.
where the 3 in the numerator comes from having a multiplicity of polarization.
Fact 230 The factor of 3 is really from having one longitudinal mode (compression) and two transverse modes (shear).
Since we have N atoms in our crystal, this is 3N different normal modes, meaning that there are 3N different types of phonons with frequency ω1, · · · , ω3N.
As a result of this, we have another difference between phonons and photons: the frequency spectrum for light waves is unbounded (it can go arbitrarily high), but the sound waves have a minimum wavelength λ = 2πcs ω , where λ is the spacing between atoms (since something needs to be able to propagate the wave). This means we have a maximum frequency corresponding to our λD ∼ 3 q V N , meaning that our maximum frequency ωD ∼ N V 1/3 cs.
To find the proportionality constant in front, note that we can count single-phonon states as Z ωD 0 dω dN dω = V ω3 D 2π2c2 s (by direct integration). One way to deal with this (with solid-state physics) is to deal with a “primitive cell,” but instead we’ll argue that this number is just 3N, the number of degrees of freedom. Thus we can find our maximum allowed 143 frequency: 3N = V ω3 D 2π2c3 s = ⇒ ωd = 6π2N V 1/3 cs .
We’ll associate $\omega_D$ with a new temperature scale now: define the Debye temperature $T_D = \frac{\hbar\omega_D}{k_B}$.
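To get a feel for these scales, we can plug in rough numbers. This is a sketch with assumed, roughly copper-like values for the number density and sound speed (illustrative inputs, not values from the lecture):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
kB = 1.380649e-23       # Boltzmann constant, J/K

# Assumed, roughly copper-like inputs (illustrative only)
n = 8.5e28    # atom number density N/V in m^-3
c_s = 4.6e3   # speed of sound in m/s

# Debye frequency omega_D = (6 pi^2 N/V)^(1/3) * c_s
omega_D = (6 * math.pi**2 * n) ** (1 / 3) * c_s
# Debye temperature T_D = hbar * omega_D / kB
T_D = hbar * omega_D / kB

print(f"omega_D ~ {omega_D:.2e} rad/s, T_D ~ {T_D:.0f} K")
```

This lands at a few hundred kelvin, the right order of magnitude: tabulated Debye temperatures of real solids are typically in the range of roughly 100 to 2000 K.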
44.3 Using the canonical ensemble
We’ll now calculate the partition function for our system: for a fixed frequency,
$$Z_\omega = 1 + e^{-\beta\hbar\omega} + e^{-2\beta\hbar\omega} + \cdots = \frac{1}{1 - e^{-\beta\hbar\omega}}.$$
Assuming the frequencies are independent of each other, the partition function is just the product over all $\omega$, $Z = \prod_\omega Z_\omega$, and now taking the log, we can approximate the sum as an integral:
$$\ln Z = \int_0^{\omega_D} d\omega\, \frac{dN}{d\omega} \ln Z_\omega.$$
(Note that we now have an upper frequency $\omega_D$ instead of $\infty$.) Now the energy of our system is $E = \int_0^{\omega_D} d\omega\, \frac{dN}{d\omega}\, \frac{\hbar\omega}{e^{\beta\hbar\omega}-1}$, which simplifies to
$$E = \frac{3V\hbar}{2\pi^2 c_s^3} \int_0^{\omega_D} d\omega\, \frac{\omega^3}{e^{\beta\hbar\omega}-1}.$$
There isn’t an analytic expression for this anymore, so we’ll instead just look at the low- and high-temperature limits.
Letting $x = \beta\hbar\omega$, we can rewrite our energy as
$$E = \frac{3V}{2\pi^2(\hbar c_s)^3}(k_B T)^4 \int_0^{T_D/T} dx\, \frac{x^3}{e^x-1}.$$
Example 231
One extreme is where $T \ll T_D$, so our upper limit goes to $\infty$. Then we’ve seen our integral before: it evaluates to $\frac{\pi^4}{15}$.
So in this case, we can calculate
$$C_V = \frac{\partial E}{\partial T} = \frac{2\pi^2 V k_B^4}{5\hbar^3 c_s^3}\, T^3 = N k_B \frac{12\pi^4}{5}\left(\frac{T}{T_D}\right)^3.$$
In other words, at low temperatures, our heat capacity is proportional to $T^3$. This does line up with experimental observations! In particular, $C_V \to 0$ as $T \to 0$, which is consistent with our third law of thermodynamics.
Fact 232
Einstein gave a different description of this behavior with a different kind of calculation - Debye just extended that model by allowing many different frequencies.
Example 233 On the other hand, can we recover the results we expect at high (classical) temperatures T ≫TD?
Then we integrate over very small values of x, so we can do a Taylor expansion. In these cases,
$$\int_0^{T_D/T} dx\, \frac{x^3}{e^x-1} = \int_0^{T_D/T} dx\,(x^2 + O(x^3)) = \frac{1}{3}\left(\frac{T_D}{T}\right)^3 + \cdots,$$
and now we find that our heat capacity reaches a constant
$$C_V = \frac{V k_B^4 T_D^3}{2\pi^2\hbar^3 c_s^3} = 3N k_B,$$
which is consistent with the Dulong-Petit law! The data doesn’t perfectly fit with this, and that’s because we have a flaw in the calculation: our dispersion relation $\omega = k c_s$ isn’t quite accurate, but that can be left for a physics-of-solids class.
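Both limits of the Debye integral can be checked numerically; here is a small sketch using a midpoint rule (the function name is mine, not from the notes):

```python
import math

def debye_integral(upper, n_steps=100_000):
    """Midpoint-rule approximation of the integral of x^3/(e^x - 1) from 0 to upper."""
    h = upper / n_steps
    total = 0.0
    for i in range(n_steps):
        x = (i + 0.5) * h
        total += x**3 / math.expm1(x)
    return total * h

# Low-T limit (upper -> infinity): the integral tends to pi^4/15
print(debye_integral(50.0), math.pi**4 / 15)

# High-T limit (small upper): the integral behaves like upper^3/3
u = 0.01
print(debye_integral(u), u**3 / 3)
```

The first pair of numbers agrees to several digits, and the second confirms the leading Taylor-expansion term used above.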
44.4 Back to the monatomic ideal gas
Remember that when we talked about the ideal gas for a canonical ensemble, we treated the particles quantum mechanically in a box: we put the system at fixed temperature, and we used the partition function to pull out the thermodynamics of our system. There, we introduced the de Broglie wavelength (or length scale), and we mentioned that we should relate that to the inter-particle distance to see whether or not quantum effects are important.
Well, when we’re close to the de Broglie wavelength, we can’t use the $Z = \frac{1}{N!}Z_1^N$ formula for our partition function anymore: we need a better way to describe our system. That’s where the grand canonical ensemble comes in: we relax our constraint on having a fixed number of particles, but we fix the chemical potential.
Remember that this length scale $\lambda = \sqrt{\frac{2\pi\hbar^2}{m k_B T}}$ increases as our temperature decreases. Eventually, this becomes comparable to our interparticle spacing $\propto \left(\frac{V}{N}\right)^{1/3}$, which is where quantum effects start to have an effect.
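As a sketch of these scales, here is the thermal de Broglie wavelength for helium-4 at a few temperatures (the mass value is an assumed illustrative input):

```python
import math

hbar = 1.054571817e-34  # J*s
kB = 1.380649e-23       # J/K

def thermal_wavelength(m, T):
    """Thermal de Broglie wavelength: lambda = sqrt(2*pi*hbar^2 / (m*kB*T))."""
    return math.sqrt(2 * math.pi * hbar**2 / (m * kB * T))

m_He = 6.646e-27  # helium-4 mass in kg (assumed example)
for T in (300.0, 4.0, 1e-6):
    print(f"T = {T:g} K: lambda = {thermal_wavelength(m_He, T):.3e} m")
```

Since $\lambda \propto T^{-1/2}$, the wavelength is a fraction of an angstrom at room temperature but grows to micron scales at the microkelvin temperatures relevant for quantum degeneracy.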
Remember that in a microcanonical ensemble, we fix V, U, N, and in a canonical ensemble, we fix V, T, N. In both of these cases, we can calculate the energy, and the macroscopic properties are basically the same here because the fluctuations are so small.
So in a grand canonical ensemble, we fix V, T, µ instead: the chemical potential is a pressure-like term that adds particles. Specifically, our internal energy in differential form can be written as
$$dU = T\,dS - P\,dV + \mu\,dN \implies \left(\frac{\partial U}{\partial N}\right)_{S,V} = \mu.$$
It’s usually difficult to keep S and V constant, so we instead work with the Gibbs free energy: there, we can instead calculate $\left(\frac{\partial G}{\partial N}\right)_{T,P} = \mu$, which is generally easier to work with experimentally! We can show that in a system at thermal equilibrium, $\left(\frac{\partial S}{\partial N}\right)_{U,V} = -\frac{\mu}{T}$, so two systems brought together at thermal equilibrium will also have the same chemical potential.
Proposition 234 This means we can think of our system as being connected to a large reservoir, so that both are at some fixed T, µ. Now the total system and reservoir are a microcanonical ensemble!
So we want the probability distribution of a given state $s_j$ for our system: then $p(s_j) = \frac{\Gamma_R}{\Gamma_{R \oplus S}}$, and similar to our canonical ensemble derivation, we can write the $\Gamma$s in terms of our entropy:
$$\Gamma_R(s_j) = \exp(S_R(s_j)/k_B) \implies p_j(s_j) = \frac{1}{\mathcal{Z}} \exp\left[S_R(U - E_j, M - N_j)/k_B\right],$$
where $\mathcal{Z}$ is our normalization factor, analogous to the partition function. Now
$$\frac{\partial p_j}{\partial E_j} = -\frac{1}{k_B}\left(\frac{\partial S_R}{\partial U}\right)_{U - E_j} p_j = -\frac{1}{k_B T}\, p_j, \qquad \frac{\partial p_j}{\partial N_j} = -\frac{1}{k_B}\left(\frac{\partial S_R}{\partial M}\right)_{M - N_j} p_j = \frac{\mu}{k_B T}\, p_j.$$
Thus, we can write our probability of any state:
$$p_j(s_j) = \frac{1}{\mathcal{Z}} \exp\left[\frac{-E_j}{k_B T} + \frac{\mu N_j}{k_B T}\right] = \frac{1}{\mathcal{Z}} \exp\left[(\mu N_j - E_j)/(k_B T)\right].$$
We can now write our expression for our grand partition function:
$$\mathcal{Z} = \sum_j \exp\left[(\mu N_j - E_j)/(k_B T)\right].$$
44.5 Thermodynamics of the grand canonical ensemble
How can we do physics with this new quantity? Let’s look at a similar quantity as in the canonical ensemble:
$$\frac{\partial}{\partial\beta} \ln \mathcal{Z} = \frac{1}{\mathcal{Z}} \sum_j (\mu N_j - E_j)\, e^{(\mu N_j - E_j)\beta},$$
which can be rewritten as $\sum_j p_j(s_j)(\mu N_j - E_j)$.
Thus, we actually have the average value of $\mu N_j - E_j$, which is $\frac{\partial}{\partial\beta} \ln \mathcal{Z} = \mu\langle N\rangle - \langle E\rangle = \mu N - U$.
We can also consider $\frac{\partial}{\partial\mu} \ln \mathcal{Z} = \frac{1}{\mathcal{Z}} \sum_j \beta N_j\, e^{(\mu N_j - E_j)\beta}$; again, we can extract out our probability term to get $\beta\langle N\rangle = \beta N$.
This means we can find the expected number of particles N, and in the thermodynamic limit, fluctuations are much smaller (in order of magnitude) than N, so this is pretty accurate almost all the time!
Is it possible for us to find the entropy of our system? We know that
$$S = -k_B \sum_j p_j \ln p_j = -k_B \sum_j \frac{e^{\beta(\mu N_j - E_j)}}{\mathcal{Z}} \left(\beta(\mu N_j - E_j) - \ln \mathcal{Z}\right),$$
which can be rearranged as
$$S = -\frac{1}{T}(\mu N - U) + k_B \ln \mathcal{Z} \implies U - TS - \mu N = -k_B T \ln \mathcal{Z}.$$
So the central theme is that $\ln \mathcal{Z}$ basically gives us an energy term! This lets us create a new free energy $\Omega = U - TS - \mu N = -k_B T \ln \mathcal{Z}$, which is the grand potential: this is essentially a sum over all the different states!
In differential form, the grand potential can be written out as $d\Omega = dU - T\,dS - S\,dT - \mu\,dN - N\,d\mu$, and writing out $dU = T\,dS - P\,dV + \mu\,dN$, this simplifies to
$$d\Omega = -S\,dT - P\,dV - N\,d\mu.$$
This means we can take derivatives again: we can find our entropy, pressure, and number of particles via
$$\left(\frac{\partial\Omega}{\partial T}\right)_{V,\mu} = -S, \qquad \left(\frac{\partial\Omega}{\partial V}\right)_{T,\mu} = -P, \qquad \left(\frac{\partial\Omega}{\partial\mu}\right)_{T,V} = -N.$$
Next time, we’ll look at bosons and fermions, and we’ll try to recover the Boltzmann statistics at high temperatures!
45 May 8, 2019 (Recitation)
45.1 More on density of states
Let’s start by summarizing some concepts from last recitation. We have a density of states $\frac{dN}{dE}$ (which usually goes as $E^{N_A}$ if we have a mole of atoms), which is very steep. On the other hand, we have the Boltzmann factor $e^{-E/(k_B T)}$, which usually goes as $e^{-N}$. Multiplying together these rapidly growing and decaying distributions, we get a very sharp peak!
Almost all of the action occurs around that sharp peak, because the probability of being found far away from $\langle E\rangle$ is very small. More quantitatively, note that the distribution is a delta function at $\langle E\rangle$ for a microcanonical ensemble, since we fix the energy. Really, the only way to do this is to pick a bunch of harmonic oscillators with quantized energy states, but there are still some perturbations there - in other words, our discrete energy states are broadened a bit.
Fact 235 To account for this, we can think of the microcanonical ensemble as having an energy width of ℏω if our energy states of the oscillators differ by ℏω.
So when we derived the Sackur-Tetrode equation, remember that we used quantized box numbers $n_i$ such that the sum of the squares of those quantities is fixed: this ended up being the surface area of a sphere, which helped us find the multiplicity of a given energy. Specifically, if we say that $\sum_i \left(n_{x_i}^2 + n_{y_i}^2 + n_{z_i}^2\right)$ is constant, we can integrate to find the number of states: since we have a microcanonical ensemble, we pick out a specific energy U to find that
$$\Gamma \propto \int d^{3N}n_i \cdot \delta\left(\sum n_i^2 - U\right).$$
But there’s one thing we did not do: we never specified that we use a spherical shell with some energy width δE.
Surface areas are not volumes, so we have a density of states instead of a number of states - how can we introduce an energy uncertainty?
Well, remember that we replaced $r^{3N-1}$ with $r^{3N}$ due to $N \gg 1$: this means that we’re essentially allowing for the whole volume instead of the surface area, so our energy width is actually from 0 to our energy E. This also tells us that in an N-dimensional sphere, almost all of the volume is concentrated towards the surface area: in fact, half the volume is concentrated within an $O\left(\frac{r}{N}\right)$ distance from the surface area! That’s why it’s okay for us to also count the volume of states inside: it’s negligible compared to that part near the surface.
Fact 236
So going back to the canonical ensemble, remember that we can find the uncertainty $\sigma_E = k_B T \sqrt{\frac{C_V}{k_B}} \propto \sqrt{N} \propto \frac{E}{\sqrt{N}}$.
Thus, the width of the energy peak is proportional to $\sqrt{N}$, which is precise (proportionally) up to experimental accuracy! But when we made the approximation from $r^{3N-1}$ to $r^{3N}$, we actually get a fluctuation on the order of $\frac{E}{N}$.
So it’s often okay to make what looks like a crude approximation!
45.2 Quick note about the different distributions
Fact 237
As a result, we almost never use the microcanonical ensemble. An awkward situation is that if we divide a system in two, a microcanonical ensemble doesn’t stay microcanonical (because we can exchange energy between the two halves)! With the canonical ensemble, on the other hand, the partition function is simple: we can just multiply $Z = Z_1 \cdot Z_2$, and that makes many calculations much easier.
On the other hand, allowing particle exchange means we have to use the grand canonical distribution. This helps us with quantum mechanical descriptions of particles - in particular, because of quantum indistinguishability, we no longer have independent particles, but we do have independent states, and we can take our partition function and factor it $\mathcal{Z} = \prod_i \mathcal{Z}_i$ over states! We’ll probably go over this a bit more in class soon, but it’s interesting that even non-interacting particles fail to be statistically independent of each other. (For example, we have the Pauli exclusion principle for fermions!) We’re going to find that some derivations are easier for the grand canonical ensemble than in the microcanonical or canonical ensemble as well.
45.3 One last conceptual question
We know that we have a Boltzmann factor $e^{-E/(k_B T)}$ in our probability distribution, so any energy state, no matter how huge, has some positive probability of happening.
Is that an artifact of the approximations we have made, or is it physical?
One assumption we make is that our reservoir (that our system is connected with) is large enough to sustain the fluctuations of energy! Since we treat the reservoir and system as a microcanonical ensemble with some total energy U, we can’t actually have energies E > U.
But other than that, this is indeed how nature works! Let’s assume that there is indeed some point where the energy can’t get any larger, and then we have some small probability p of getting to an energy above that. Then the entropy and energy changes are approximately
$$\Delta S = -k_B\, p \ln p, \quad \Delta E = pE \implies \Delta F = \Delta E - T\Delta S = p(E + k_B T \ln p).$$
So transferring some probability to a higher state changes the free energy F: $\Delta F < 0$ if $p < e^{-E/(k_B T)}$, and that means that it is always favorable to allow that energy state! This means the canonical ensemble probability distribution is indeed a stable equilibrium, and any collision between atoms (or other perturbation) of the system moves us towards that exact distribution.
46 May 9, 2019
46.1 Overview
Today, we’re going to continue talking about the grand canonical ensemble, which is a useful framework for trying to understand quantum effects in different kinds of gases! As a quick reminder, this is a system where instead of fixing energy or temperature (as in the microcanonical and canonical ensembles) along with the number of particles, we have to make a more careful argument. At the quantum level, we can no longer assume that all of our particles are likely to be in different states, so we can’t have a simple overcounting term like $\frac{1}{N!}$. In addition, we now allow our system to be open.
The number of particles can now fluctuate, but we’ll find that in the thermodynamic limit again, the fluctuations are small. So this will also give similar results to the other formulations!
We’ll see some systems where we can use this model today.
46.2 Types of quantum particles
The main idea is that in relativistic quantum field theory, there are two types of particles.
Definition 238
Bosons are particles that have an intrinsic spin of $0, 1, 2, \cdots$ (in units of $\hbar$).
The main concept here is that we have indistinguishable particles: the quantum states are symmetric under particle exchange, which can be written as $\psi(\vec{r}_1, \vec{r}_2) = \psi(\vec{r}_2, \vec{r}_1)$.
Examples of bosons include photons, gluons, Helium-4, as well as quasi-particles like phonons and plasmons.
Definition 239
On the other hand, fermions are particles that have an intrinsic spin of $\frac{1}{2}, \frac{3}{2}, \cdots$ (in units of $\hbar$).
Examples here include electrons in metals at low temperature, liquid Helium-3, white dwarfs, and neutron stars.
This time, quantum states are antisymmetric: we have $\psi(\vec{r}_1, \vec{r}_2) = -\psi(\vec{r}_2, \vec{r}_1)$.
Fact 240 Bosons can have infinitely many particles per state, but by the Pauli exclusion principle, there can only be 0 or 1 fermion in each quantum state.
We’re going to label our states by occupation numbers {nr}, often as |n1, n2, · · · , nr⟩. At high temperatures, they will be the same as in our Boltzmann statistics.
Fact 241 By the way, these are three-dimensional particles: 2-D systems are completely different!
Remember that we have our canonical ensemble partition function $Z = \sum_{\{n_r\}} e^{-\beta \sum_r n_r \varepsilon_r}$, where we sum over all ways of partitioning N particles into sets $\{n_r\}$, with the constraint $\sum n_r = N$. But we have indistinguishable particles here, and it’s hard to account for the fact that we can only have 1 particle in each energy state for fermions.
46.3 Using the grand canonical ensemble
So our first step is to look at an open system: let’s fix our chemical potential µ and let N fluctuate! (We’ll also put our system in contact with a heat bath at temperature T.) Then for a given state, we have the variables $N_j = \sum_r n_r$ and $E_j = \sum_r n_r \varepsilon_r$, which are allowed to vary, and now our grand partition function is
$$\mathcal{Z} = \sum_j \exp\left[(\mu N_j - E_j)\beta\right] = \sum_{\{n_r\}} \exp\left[\left(\mu \sum_r n_r - \sum_r n_r \varepsilon_r\right)\beta\right].$$
We can rewrite the sum of the exponents as a product:
$$= \sum_{\{n_r\}} \prod_r \exp\left[(\mu - \varepsilon_r)n_r\beta\right] = \prod_r \sum_{n_r} \exp\left[(\mu - \varepsilon_r)n_r\beta\right]$$
by swapping the sum and product. Now our path diverges for bosons and fermions: in one case, the $n_r$s can be anything, and in the other, the $n_r$s must be 0 or 1.
Proposition 242
For fermions, since the $n_r$s are all 0 or 1, we find that $\mathcal{Z}_{FD} = \prod_r \left(1 + e^{(\mu-\varepsilon_r)\beta}\right)$.
This is referred to as Fermi-Dirac.
Proposition 243
Meanwhile, for bosons, since the $n_r$s can be anything, we have an infinite sum $\mathcal{Z}_{BE} = \prod_r \sum_{n=0}^\infty \left(e^{(\mu-\varepsilon_r)\beta}\right)^n = \prod_r \frac{1}{1 - e^{(\mu-\varepsilon_r)\beta}}$.
This is referred to as Bose-Einstein.
In both cases, we often want to deal with the logarithm of the partition function: then the product becomes a sum, and we have the following expressions:
$$\ln \mathcal{Z}_{FD} = \sum_r \ln\left(1 + e^{(\mu-\varepsilon_r)\beta}\right), \qquad \ln \mathcal{Z}_{BE} = -\sum_r \ln\left(1 - e^{(\mu-\varepsilon_r)\beta}\right).$$
It’s important to note here that $\sum_r$ is a sum over single-particle states! So if we want the ensemble average occupation number, we take (as was derived previously) $N = k_B T \frac{\partial}{\partial\mu} \ln \mathcal{Z}$.
For fermions,
$$N = \sum_r \frac{e^{(\mu-\varepsilon_r)\beta}}{1 + e^{(\mu-\varepsilon_r)\beta}} = \sum_r \frac{1}{1 + e^{(\varepsilon_r-\mu)\beta}}.$$
Each term here can be thought of as $\langle n_r\rangle$, the average occupation number of state r! That means that for a given state, $\langle n_r\rangle = \frac{1}{e^{(\varepsilon_r-\mu)\beta} + 1}$, and this is known as the Fermi-Dirac occupation number. In particular, we have the Fermi function $f(\varepsilon_r) = \frac{1}{1 + e^{\beta(\varepsilon_r-\mu)}}$.
Note that µ is allowed to be both positive and negative here. If we take low temperature T, $\beta \to \infty$, and then the chemical potential µ essentially separates our filled and empty states! The probability of having a state $\varepsilon_r > \mu$ occupied is almost zero, and the probability for a state $\varepsilon_r < \mu$ is almost 1. As we increase temperature T, the jump from probability 0 to 1 becomes less steep, and we’ll see how that works in a minute.
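A minimal sketch of how the step sharpens as T falls (the chemical potential and energies here are arbitrary assumed units, not values from the lecture):

```python
import math

def fermi(eps, mu, kT):
    """Fermi function f = 1 / (exp((eps - mu)/kT) + 1)."""
    return 1.0 / (math.exp((eps - mu) / kT) + 1.0)

mu = 1.0  # chemical potential in arbitrary energy units (assumed)
for kT in (0.01, 0.1, 0.5):
    # occupation just below and just above the chemical potential
    print(kT, fermi(0.8, mu, kT), fermi(1.2, mu, kT))
```

At $k_BT = 0.01$ the two occupations are essentially 1 and 0 (a sharp step at µ); at $k_BT = 0.5$ the step is strongly smeared. At any temperature, $f(\mu) = \frac{1}{2}$ exactly.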
Meanwhile, for bosons,
$$N = \sum_r \frac{e^{(\mu-\varepsilon_r)\beta}}{1 - e^{(\mu-\varepsilon_r)\beta}} = \sum_r \frac{1}{e^{(\varepsilon_r-\mu)\beta} - 1}.$$
Analogously, this can be thought of as a sum $\sum_r \langle n_r\rangle$, so we know that the average occupation is $\langle n_r\rangle = \frac{1}{e^{\beta(\varepsilon_r-\mu)} - 1}$.
Notice that the main difference here from the Fermi-Dirac occupation number is the −1 instead of the +1! (By the way, remember that this is exactly the expression we saw with photons if we set µ = 0.) One important idea with this system, though, is that $\langle n_r\rangle \geq 0$ for non-interacting particles.
If µ →εr, the denominator goes to 0, and thus ⟨nr⟩goes to infinity.
In fact, µ > εr means our occupation number becomes negative! This isn’t allowed, so that actually sets a bound on our allowed µ in relation to our energy states εr.
Proposition 244 The allowed values of µ for a boson system are µ ≤εrmin, where εrmin is the lowest energy state for a single particle.
By the way, Professor Ketterle may talk about Bose-Einstein condensates next Thursday!
46.4 Looking more carefully at the occupation numbers
Let’s take $\langle n_r\rangle$ to high temperature in both cases, so $\varepsilon_r - \mu \gg k_B T$. Then we should expect that our results agree with the canonical ensemble answer of $\langle n_r\rangle = e^{-\beta(\varepsilon_r-\mu)}$ for classical, identical particles. Indeed, this is what happens in the limiting case for both FD and BE statistics, since the exponential term dominates the 1 in the denominator! So our new model is really mostly useful at lower temperatures.
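This limiting claim is easy to verify numerically; a minimal sketch comparing the three occupation numbers when $\varepsilon_r - \mu \gg k_BT$ (all quantities in arbitrary assumed units):

```python
import math

def n_FD(eps, mu, kT):  # Fermi-Dirac occupation
    return 1.0 / (math.exp((eps - mu) / kT) + 1.0)

def n_BE(eps, mu, kT):  # Bose-Einstein occupation
    return 1.0 / (math.exp((eps - mu) / kT) - 1.0)

def n_MB(eps, mu, kT):  # classical (Maxwell-Boltzmann) limit
    return math.exp(-(eps - mu) / kT)

# With (eps - mu)/kT = 10, the +1 or -1 in the denominator barely matters
eps, mu, kT = 10.0, 0.0, 1.0
print(n_FD(eps, mu, kT), n_BE(eps, mu, kT), n_MB(eps, mu, kT))
```

All three agree to better than 0.01% here; the disagreement only becomes significant once $e^{-\beta(\varepsilon_r-\mu)}$ approaches 1, i.e. at low temperature or high density.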
This means we really care about having our inter-particle distance on the order of our thermal length scale, meaning that $\frac{V}{N} \sim \left(\frac{h}{\sqrt{2\pi m_e k_B T}}\right)^3$.
Example 245
Electrons at room temperature have a length scale of around 4.5 nanometers, and copper has a density of $9 \times 10^3$ kg/m³, which means there are $8.5 \times 10^{28}$ atoms per cubic meter. So if we have one “conduction electron” per atom, the volume we have to work with is about $\frac{1}{8.5 \times 10^{28}}$ m³ per atom.
This is about $(0.23\text{ nm})^3$, and that means the inter-atom distance is small enough for us to want to use Fermi-Dirac statistics! But looking at copper atoms instead of the electrons themselves, the relevant length scale is around 0.012 nm. In other words, copper is a bunch of non-degenerate atoms immersed in a degenerate electron gas.
46.5 Fermions at low temperature
We should think of “degenerate” as having density large enough for quantum effects to be important. This is because two particles being too close gives overlapping wavefunctions.
So for the remainder of this class, we’ll be looking at degenerate Fermi systems (where we use Fermi-Dirac statistics). Examples include electrons in a conduction band of metals, white dwarf stars, neutron stars, and heavy metals.
Here our occupation numbers follow $n_j = \frac{1}{1 + e^{(\varepsilon_j-\mu)\beta}}$.
Note that when µ ≫kBT, levels with ε < µ have nj →1, and levels with ε > µ have nj →0.
We get complete degeneracy at T = 0, where we have essentially a sharp change from filled to empty states.
Let’s think a little more about such a degenerate Fermi gas: now we essentially have
$$n_j = \frac{1}{1 + e^{(\varepsilon_j-\mu)\cdot\infty}} = \begin{cases} 1 & \varepsilon_j < \mu \\ 0 & \varepsilon_j > \mu. \end{cases}$$
This is essentially a step function! All states with low energy are filled, and all others are empty. If we increase our temperature a little, the ∞in the exponent becomes a large number, and we get a little bit of wiggle room. Then in some interval ∆ε ∼kBT, we can have partially filled states.
Fact 246
The filled levels are known as the “Fermi sea,” and the set of states with $\varepsilon = \mu$ is known as the “Fermi surface.” We can then define a “Fermi momentum” $p_F = \sqrt{2m\varepsilon_F}$, where $\varepsilon_F = \mu$.
In three dimensions, we can calculate the density of states of our Fermi gas: $dN = g\,\frac{d^3x\,d^3p}{(2\pi\hbar)^3}$, where g is our degeneracy factor (2 for an electron). This can then be written as
$$N = g \int \frac{d^3x\,d^3p}{(2\pi\hbar)^3}\, \frac{1}{e^{(p^2/2m-\mu)\beta} + 1},$$
and we can do similar tricks with the spherical integration as before: this evaluates to
$$= \frac{4\pi V g}{(2\pi\hbar)^3} \int_0^\infty \frac{p^2\,dp}{e^{(p^2/2m-\mu)\beta} + 1}.$$
We’ll finish this and calculate some of the thermodynamic properties here next time!
47 May 13, 2019 (Recitation)
Professor Ketterle showed us another Stirling engine today! Basically, we have a displacer which displaces gas between a hot reservoir (hot water) and cold reservoir (room temperature air). Whenever the piston moves down towards the hot reservoir, the air above gets colder, so there is a temperature differential. This causes a pressure modulation, which allows another piston to drive the wheel forward.
47.1 Entropic force
Often, we think of energy as the cause of force. But in statistics, we can think of entropy causing some kind of force as well! Recall the differential formula $dU = T\,dS - P\,dV$; we can use this to find the pressure at constant S or U:
$$P = T\left(\frac{\partial S}{\partial V}\right)_U = -\left(\frac{\partial U}{\partial V}\right)_S.$$
On the other hand, we also know that we have the free energy $F = U - TS \implies dF = -S\,dT - P\,dV$, which leads us to P in terms of isothermal derivatives:
$$P = -\left(\frac{\partial F}{\partial V}\right)_T = -\left(\frac{\partial (U - TS)}{\partial V}\right)_T = -\left(\frac{\partial U}{\partial V}\right)_T + T\left(\frac{\partial S}{\partial V}\right)_T.$$
But the pressure is always the same, regardless of what process we’re using to get to this point (because we have a state function)!
So does pressure come from internal energy or from entropy? There are actually contributions from both an internal energy change and an entropy change! Applying this to the ideal gas, because $\left(\frac{\partial U}{\partial V}\right)_T = 0$, we actually have $P = T\left(\frac{\partial S}{\partial V}\right)_T$, and pressure is determined by entropy alone!
Fact 247 It’s important to think about isothermal versus adiabatic compression here: in the former case, compression leads to no change in internal energy (which is only dependent on T), so we get an output of heat. On the other hand, adiabatic compression leads to an increase in the kinetic energy of the particles, which means the temperature does increase.
47.2 Applying this to the chain model
Let’s go back to the question where we have a chain of N links, each of length ℓ, each of which is flipped either to the left or to the right in a one-dimensional system. Here, assume that the system doesn’t have any internal energy, so all states are equally probable.
There are $2^N$ possible configurations of our chain (since each link can be flipped to the left or right independently), and this is essentially a random walk!
We can describe such a system by a binomial distribution, which can be approximated as a Gaussian for large N: since the variance of each individual link is $\ell^2$,
$$p(x) = p_0 \exp\left(-\frac{1}{2}\frac{x^2}{\ell^2 N}\right).$$
From this, we know that our multiplicity is $\Gamma(x) = \Gamma_0 \exp\left(-\frac{1}{2}\frac{x^2}{\ell^2 N}\right)$, so the entropy is
$$S = k_B \ln \Gamma = k_B\left(c - \frac{1}{2}\frac{x^2}{\ell^2 N}\right)$$
(since the Gaussian exponential nicely cancels out with the logarithm!). But now, we can calculate the force exerted by the chain: since P and V were conjugate variables in our boxed equation above, we can also replace them with F and x. Notably, if $\left(\frac{\partial U}{\partial x}\right)_T = 0$ here (notice that $-P\,dV$ is work, and so is $+F\,dx$, so we gain a negative sign),
$$F = -T\left(\frac{\partial S}{\partial x}\right)_T \implies F = \frac{k_B T\, x}{\ell^2 N}.$$
So force is proportional to temperature, and it’s also a Hooke’s law relation! This is a “pure entropy” situation, where we just needed to count the number of microstates to find dependence of S on the length x of the chain.
In other words, we put energy into the chain when we stretch it, but there’s no way to store internal energy in this chain! So the energy of pulling will be transferred as heat, as that’s the only way we can move the chain to a lower entropy state by the second law.
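We can sanity-check the Hooke’s-law result against exact counting: the entropy is the log of a binomial coefficient, and its numerical derivative should match $F = k_BT\,x/(\ell^2 N)$. A sketch in units $k_B = 1$, $\ell = 1$ (the link count N is an assumed example):

```python
import math

N = 1000  # number of links (assumed example); units: kB = 1, l = 1

def entropy(x):
    """Exact entropy ln C(N, n_right) for end-to-end distance x = 2*n_right - N."""
    n_right = (N + x) // 2
    return (math.lgamma(N + 1) - math.lgamma(n_right + 1)
            - math.lgamma(N - n_right + 1))

T = 1.0
for x in (50, 100, 200):  # x must have the same parity as N
    # F = -T dS/dx via a central difference (step 2 keeps the parity fixed)
    F = -T * (entropy(x + 2) - entropy(x - 2)) / 4
    print(x, F, T * x / N)  # exact counting vs. Gaussian prediction T*x/(l^2 N)
```

The exact-counting force tracks the linear prediction closely for $x \ll N\ell$, with visible deviations only as the chain approaches full extension.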
Question 248. What causes the restoration here?
Let’s imagine that our chain is now vertical, and there is a mass hanging from the end. At a given temperature, this gives us the length x of the chain where we have equilibrium.
But if we increase the temperature of our reservoir, what happens? Because our weight is constant, the force in the above equation is constant, so Tx is constant. In other words, when temperature goes up, the chain gets shorter!
This is a lot like if we put a weight on a piston sitting on top of an ideal gas: increasing the temperature of the gas increases the pressure, which makes the piston move higher up.
How exactly can such a massless chain with no internal energy even hold up a mass? Remember that the chain is always connected to a reservoir at temperature T! For entropic reasons, this nudges some of the chains upward.
There’s always a process to transfer force and energy between any reservoirs that exist and the system at hand.
47.3 Blackbody radiation and the Debye model
The former deals with a “gas” of photons, and the latter (which deals with specific heat of a solid) deals with a “gas” of phonons, the quantized excitations of sound.
In both cases, we can say that we have a bunch of energy states: then what’s the expectation value for the average occupation number of a harmonic oscillator? It’s given by the expression $\frac{1}{e^{\hbar\omega/(k_B T)} - 1}$.
If we want to know the total energy of a bunch of harmonic oscillator quanta like this, we multiply by the energy $\hbar\omega$ (for each harmonic oscillator), and then we need a density of states to know how many harmonic oscillators we have per frequency interval. So
$$U = \int d\omega\, \hbar\omega\, \frac{1}{e^{\hbar\omega/(k_B T)} - 1}\, \frac{dN_\omega}{d\omega},$$
and this is essentially just a dimensional analysis argument! Integrating this out from 0 frequency to some maximum $\omega_{\max}$, we have found our internal energy.
But how are the two different in their descriptions? The main difference is that we have dispersion relations $\omega = c_s k$ and $\omega = ck$, and in both cases, we find that $\frac{dN}{d\omega}$ is quadratic in $\omega$. The difference is just the factor of the speed of photons versus phonons! In addition, there is also a factor of $\frac{3}{2}$ between the polarization factors (3 phonon modes versus 2 photon polarizations). But most importantly, the maximum frequency is upper bounded for the Debye model because of the limitations of the solid, while the maximum frequency is unbounded for blackbody radiation. Each of these concepts is important to understand!
48 May 14, 2019
Today’s lecture is being given by Professor Ketterle.
40 minutes ago, Professor Ketterle sent a press release about the kilogram changing next Monday! It will no longer be defined by the Paris artifact but by Planck’s constant. We should celebrate this, because Planck’s constant is much more beautiful than a chunk of metal. But he wants to mention that explaining physics should be simple: it should not just be for “the physicists and the nerds.” Today’s class will go in three parts: an introduction with the essence of quantum statistics, using the Bose-Einstein distribution to describe phase transition, and how to create the first Bose-Einstein condensate in a gas.
48.1 Bose-Einstein condensates: an introduction
A Bose-Einstein condensate allows us to create a matter wave by having many particles in one quantum state. This is based on quantum statistics, and we can use situations from class to understand it!
Example 249 Let’s say we have 3 particles with some total energy 3.
This is a microcanonical ensemble: what’s the classical description of this? There’s 1 way to have all three particles in energy level 1, 3! = 6 ways to have 3 particles in energy levels 2, 1, 0, and 3 ways to have one in energy 3 and the others in energy 0.
But if we have indistinguishable particles, our distribution is different. Now we don’t have the multiplicity of 6 and 3; in particular, the distribution of particles in energy levels for these bosons is different from the classical model. This means bosons actually have a tendency to clump together!
Finally, there’s one more problem: if we have fermions, we’re forced into a specific case, because you can’t have two different particles in the same state!
Fact 250
Notice that the difference in the occupancy of a level is just a −1 or +1 in the denominator of $\frac{1}{e^{(\varepsilon-\mu)/(k_B T)} \mp 1}$.
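The three-particle counting example above can be checked by brute force; a minimal sketch:

```python
from itertools import product

# Three particles in energy levels 0, 1, 2, 3 with total energy 3.
# Classical (distinguishable) particles: count ordered assignments.
classical = [c for c in product(range(4), repeat=3) if sum(c) == 3]
print(len(classical))  # 1 + 6 + 3 = 10 microstates

# Bosons (indistinguishable): a state is just the multiset of occupied levels.
bosons = sorted({tuple(sorted(c)) for c in classical})
print(bosons)  # [(0, 0, 3), (0, 1, 2), (1, 1, 1)] -> 3 states

# Fermions: Pauli exclusion forces all occupied levels to be distinct.
fermions = [b for b in bosons if len(set(b)) == 3]
print(fermions)  # [(0, 1, 2)] -> the single allowed state
```

Note how the boson count collapses the classical multiplicities of 6 and 3 down to 1 each, which is exactly the “clumping” bias discussed above: configurations with particles sharing a level are relatively more likely than classical counting suggests.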
In particular, photons are bosons, but they can be freely created! This means there is no “chemical potential,” and we can set µ = 0 here. This gives us the Planck blackbody spectrum $n(\varepsilon) = \frac{1}{e^{\varepsilon/(k_B T)} - 1}$.
Notably, Bose rederived Planck’s formula with a new method, and Einstein used that method to add in the µ chemical potential term! That became the “Bose-Einstein distribution.”
Question 251. How do we describe Bose-Einstein condensation, then?
The idea is that the classical distribution just shifts to be skinnier and taller when we adjust our temperature T.
But then at a certain critical temperature T < TC, corresponding to a critical wavelength, the population per energy state goes to infinity. This specific singularity is described in Einstein’s paper.
But back in 1924, Einstein said (after describing the theory) “The theory is pretty, but is there also some truth to it?” The idea is that “mathematical singularities” may not need to be part of actual physics, so there was lots of skepticism. It was only in 1938 that Fritz London realized that Bose-Einstein condensation is indeed physical and observable!
48.2 Why use the formalism that we do?
We’re going to look more at the equations and understand the singularity mathematically. First of all, note that we can formulate things in many ways, but smarter choices (for example, in terms of coordinates) make our job easier.
We know that for atoms and molecules, energy is conserved, and so is the number of particles. But there is a problem here: distributing energies is not independent, and this means one of the particles needs to “pay the price.” We like to assume in classical physics that each particle is an independent entity, and fluctuations shouldn’t affect each other! That’s why we use the canonical ensemble: we can then say that N particles have a partition function $Z_N = Z_1^N$ or $\frac{1}{N!}Z_1^N$.
But then Einstein’s 1924 paper seemed to cause some problems: the particles turned out to not be independent anymore under the canonical distribution. But indeed, descriptions under quantum physics no longer have particle independence (for example, the Pauli exclusion principle)!
Instead, our independence shifts to the quantum states themselves.
By allowing each quantum state to run through each of its possible occupation numbers, we also allow for our total number of particles N to fluctuate, and now we use the grand canonical ensemble Z = Y i Zi.
That’s the beauty of the grand canonical ensemble - we get our independence back again!
48.3 Mathematical derivation We’ve derived in class before that the occupation number under the Bose-Einstein distribution nj = 1 e(εj−µ)/(kBT ) −1.
157 Question 252. What is the population n0 (corresponding to zero momentum and lowest energy)?
We use the fact that as T →0, the chemical potential is negative but approaches 0−. Then the occupation of the zero energy state is n0 →kBT −µ →∞.
Notably, this means the chemical potential can’t be positive, or we’d have a singularity at some non-ground state!
Question 253. What is the number of particles in an excited state (all j ̸= 0)?
This is just the sum over all nonzero states N = X εj>δ 1 eβ(εj−µ) −1 where we’ll make δ →0. If we only care about having an upper bound, we can ignore the small negative value of chemical potential: N ≤ X εj>δ 1 eβεj −1.
Remember that when we sum over all states, we can do this in a semi-classical manner instead: = Z d3xd3p h3 1 eβεj −1.
The position integral just becomes V , the volume of our system, and then we can use the spherical integration trick again: = V 4π h3 Z dpp2 1 eβεj −1.
We can now replace p2dp with √εdε with a constant factor; C√ε here is our density of states! This ends up giving us (replacing h with ℏfor simplicity) an upper bound Nmax = Z ∞ δ N(ε) eβε−1 , N(ε) = V 2m ℏ 3/2 √ε 4π2 .
We’ll remove most of the ugly constants by introducing a thermal de Broglie wavelength and dimensionless variable λ = s 2πℏ2 mkBT , x = βε.
Remember that our integral is still counting the number of particles in non-ground states: this gives us = V λ3 2 √π Z ∞ δβ √xdx ex −1.
But now we can actually replace δβ with 0, since the integral converges, and we get an estimate of Nmax ≈2.612 V λ3 .
Notice that this is a fixed number dependent on T! So if we put more particles into our system at a fixed temperature, they don’t go into excited states! So we have some absolute limit on the number of excited particles in terms of V and T: all others go into the ground state.
Proposition 254 In other words, eventually we have a saturation of the quantum gas: when we have too many particles, the whole system must condense! This is similar to the way in which water vapor at 100 degrees Celsius must eventually start forming water droplets when the pressure is too large.
So if we lower the temperature enough, and Nmax reaches a point comparable to N, every subsequent increase in the number of particles or decrease in temperature will create a Bose-Einstein condensate.
Fact 255 Notably, if we set the boxed $N_{\max} = N$, we can find our critical temperature $k_B T_C = \left(\frac{N}{2.612}\right)^{2/3} \frac{2\pi\hbar^2}{L^2 m}$, where $V = L^3$ if we assume we have a box.
Notably, $\frac{N^{2/3}}{L^2}$ is proportional to $n^{2/3}$, where $n$ is the density of our gas!
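Plugging numbers into this formula gives the temperature scale quoted below. This is a sketch: the choice of rubidium-87 (a typical BEC atom) and the density $n = 10^{14}\ \mathrm{cm}^{-3}$ are illustrative assumptions, not values fixed by the notes.

```python
import math

hbar = 1.054571817e-34               # J s
kB = 1.380649e-23                    # J/K
m = 86.909 * 1.66053906660e-27       # kg, rubidium-87 (assumed atom)

n = 1e20                             # m^-3, i.e. 10^14 cm^-3 (assumed density)

# k_B T_C = (n / 2.612)^(2/3) * 2 pi hbar^2 / m  (the boxed formula, with n = N/V)
Tc = (n / 2.612) ** (2.0 / 3.0) * 2.0 * math.pi * hbar ** 2 / (m * kB)
print(Tc)  # a few hundred nanokelvin
```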
There are two things we should discuss here. First of all, the density we can realistically reach (when we work with individual atoms) is $n \approx 10^{14}\ \mathrm{cm}^{-3}$, which is a factor of $10^5$ smaller than room-temperature gases. Then we find that the $T_C$ here is around 100 nanokelvin: verifying this experimentally requires us to get to very cold temperatures!
Question 256. If we want almost all atoms in the ground state, what’s the temperature scale that we are allowed to have?
Remember that there is a first excited state: we have gapped behavior here. In a classical (distinguishable-particle) model, we'd need a temperature below that gap to put all the particles in the ground state! The Bose-Einstein model, on the other hand, has an extra factor of $N^{2/3}$ in it: because bosons are indistinguishable, we get this extra factor that actually helps us, and everything goes to the ground state much faster!
48.4 How do we do this experimentally?
Fact 257 In a paper that Schrödinger wrote in 1952, he expressed the opinion that van der Waals corrections (attractions and repulsions between molecules) and effects of Bose-Einstein condensation are almost impossible to separate.
He thought that most systems would become liquids before any quantum effects could be seen.
But it turns out that there exist cooling methods that can get us to very cold atomic clouds! Atoms cannot “keep photons,” so if we shine a laser beam at an atom, the photons that hit the atom must be emitted out by fluorescence.
But if the emitted radiation is blue-shifted relative to the absorbed light, then every time an atom absorbs and emits a photon, it radiates away some of its energy! This is hard to implement in the laboratory, but this is the way laser cooling works.
Fact 258 Atoms are a special system: they are almost completely immune to black-body radiation, so we don’t need to do the same shielding from cameras and beams and other materials as in more complicated systems.
Well, this gets us to the microkelvin level: what can we do to get closer to the 100 nanokelvin level that we want?
It turns out evaporative cooling is very easy to understand. In a thermos, steam molecules (which have the highest energy) escape, and the lower-energy molecules stay behind. In our atomic system, we have a container made by magnetic or electric fields, and we then use "radio frequency spin flips" to remove the particles with the highest energy (this is the same as blowing on a cup of coffee).
In other words, if we remove our electric/magnetic field container, the gas will expand at its thermal velocity. Since the kinetic energy per degree of freedom satisfies $\frac{mv^2}{2} = \frac{k_B T}{2}$ by equipartition, this is a way for us to figure out the temperature without needing to explicitly put something like a thermometer in contact with the gas!
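The same equipartition relation gives the expansion speed directly (a sketch; the 100 nK temperature and the rubidium-87 mass are illustrative assumptions):

```python
import math

kB = 1.380649e-23                    # J/K
m = 87 * 1.66053906660e-27           # kg, roughly a rubidium-87 atom (assumed)

T = 100e-9                           # 100 nK, the temperature scale discussed above
# m v^2 / 2 = kB T / 2 per degree of freedom  =>  v = sqrt(kB T / m)
v = math.sqrt(kB * T / m)
print(v)  # a few millimeters per second
```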
Fact 259 By the way, right now 1 kelvin is defined as $\frac{1}{273.16}$ of the temperature of the triple point of water. Soon it'll be defined in terms of the Boltzmann constant instead!
Basically, when we “blow” with our radio waves, we should expect smaller clouds, and indeed, this is what happens!
There’s an elliptical object in the middle that stays put - that’s the Bose-Einstein condensate that we can observe.
Remember that a thermal gas is isotropic: the shadow should always be perfectly circular. So why is the condensate elliptical? It’s because the ground state of an elliptical container is elliptical!
Fact 260 Finally, how do we prove that the atoms act as one single wave?
The key idea is interference! Two waves that collide form a standing wave, and it turns out that we can do the same kind of interference by taking a Bose-Einstein condensate and cutting it in half. The interesting thing here is that we have now accomplished interference of matter waves, since the positive and negative part of the wavefunction add up to zero!
49 May 15, 2019 (Recitation)
49.1 Drawing more connections
Let's start by trying to understand the last concept we introduced in this class. Exchanging energy in the canonical ensemble is very similar to exchanging particles in a grand canonical ensemble! In the former, we're connected to a temperature reservoir, and in the latter, we're connected to some chemical potential reservoir.
What are the transitions here? We fix the internal energy U in our microcanonical ensemble, but in a canonical ensemble, we essentially tweak our energy as a function of T, our temperature. In particular, we don't clamp down the energy: one thing we can do is to maximize entropy as a function of β (using Lagrange multipliers). The mathematics here is that if we want to maximize $F(x, y, z)$ subject to some constraint $E(x, y, z) = \varepsilon_0$, we can introduce a new parameter β which also varies independently: we extremize $F(x, y, z) - \beta E(x, y, z)$, and then all derivatives with respect to $x, y, z, \beta$ must be zero. So the microcanonical condition is enforced with the Lagrange parameter, and ultimately that parameter is adjusted (that is, the temperature is our independent variable) to reach our energy E.
Fact 261 So temperature plays a dual role: temperature affects internal energy, but we can also use it as a Lagrange parameter to ask questions like "how do we maximize entropy." E and β depend on each other.
Well, this same dependence happens with N and µ! If we have a reservoir with some chemical potential, µ controls the number of particles N, just like the temperature T controls our energy E. We now want to maximize the free energy F, and we use a similar Lagrange parameter: this time, it is µ. So now µ can also be our “knob” that controls N.
Example 262 Consider a piece of semiconductor or metal. A battery is then a source of chemical potential: increasing the voltage charges up the system and introduces more electrons!
So the battery becomes the "energetic cost" of delivering another particle. Our grand partition function $\mathcal{Z} = \sum_j e^{-\beta(\varepsilon_j - n_j\mu)}$ now takes into account that each particle being present changes our energy somehow. Remember where this all comes from: we treat our system as being connected to a reservoir, and we think of this whole system-plus-reservoir as being a microcanonical ensemble. Then the probability of any specific microstate $\mu_j$ is $\Pr(\mu_j) \propto \Gamma_{\mathrm{Res}}(E - \varepsilon_j, N - n_j)$, since E and N are fixed across the whole system-reservoir entity. This is then proportional to $e^{S_{\mathrm{Res}}/k_B}$, and expanding out $S_{\mathrm{Res}}$ to first order in a Taylor expansion yields the result that we want: $\frac{\Delta S_{\mathrm{Res}}}{k_B} = \frac{1}{k_B}\left[\frac{\partial S}{\partial N}(-n_j) + \frac{\partial S}{\partial E}(-\varepsilon_j)\right] = \beta\mu n_j - \beta\varepsilon_j$, using $\frac{\partial S}{\partial E} = \frac{1}{T}$ and $\frac{\partial S}{\partial N} = -\frac{\mu}{T}$. That yields the $-\beta(\varepsilon_j - \mu n_j)$ that we want! This Taylor expansion basically tells us "how much entropy it costs" the reservoir to give away a particle or give away some energy.
49.2 Going to the lab
Most particles in nature are fermions (quarks, electrons, and so on), rather than bosons.
Fact 263 By the way, the main difference is the spin (inherent angular momentum) of the particle, and whether it's an integer or a half-integer. Two indistinguishable particles must satisfy $\psi(x_1, x_2) = \pm\psi(x_2, x_1)$, where the $\pm$ comes from us only observing the square of the wave function. It turns out $-$ corresponds to fermions, and $+$ corresponds to bosons: now the Pauli exclusion principle comes from the spin-statistics theorem!
50 May 16, 2019
We're going to finish talking about Fermi gas thermodynamics, and we'll finish by talking about what lies beyond this class!
Fact 264 By the way, the grade cutoffs for A/B, B/C, and C/D last year were 85, 65, and 55. This fluctuates year to year, though.
We should check this weekend that everything is graded! The deadline for all grading-related things is tentatively Sunday, but everything will definitely be done by Wednesday morning. Exam 1 and 2 solutions will be posted soon, and past exams have been posted for reviewing for the final as well.
50.1 Back to Fermi systems: a gas of electrons
Remember that electrons are spin-1/2 fermions. We found that because of the Pauli exclusion principle, we have an interesting result for the occupation number at zero temperature for a degenerate Fermi gas: $n_j(\varepsilon) = \frac{1}{1 + e^{(\varepsilon - \mu)\beta}}$. This is basically a step function at $\varepsilon = \mu$. (If we increase the temperature to some T, we get partially filled states over a width of around $\Delta\varepsilon \approx k_B T$.) At temperature T = 0, define $\mu = E_f$ to be the Fermi energy: we can then also define a Fermi momentum $p_F = \sqrt{2mE_f}$, which has applications in condensed matter. The terms Fermi sea and Fermi surface then refer to the filled states where $E < E_f$ and the set of states where $E = E_f$, respectively.
We're going to try to derive the thermodynamics of a 3D Fermi gas now! We start with a density of states calculation: we have $dN = g\,\frac{d^3x\,d^3p}{(2\pi\hbar)^3}$, where g, the degeneracy factor, is $2s + 1$ (where s is the spin of our particle). Since electrons have spin 1/2, g = 2 in this case! Integrating out, $N = g \int \frac{d^3x\,d^3p}{(2\pi\hbar)^3}\,\frac{1}{e^{(p^2/2m - \mu)\beta} + 1}$.
We'll do the same tricks we keep doing: integrating out $d^3x$ and using spherical coordinates, this simplifies to a single integral $N = \frac{4\pi V g}{(2\pi\hbar)^3} \int_0^\infty \frac{p^2\,dp}{e^{(p^2/2m - \mu)\beta} + 1}$.
As we take the temperature to 0, we have some upper limit equal to our Fermi momentum (since the denominator is 1 for $p < p_F$ and $\infty$ for $p > p_F$), so this simplifies to (combining some constants) $N = \frac{gV}{2\pi^2\hbar^3} \int_0^{p_F} p^2\,dp = \frac{gV p_F^3}{6\pi^2\hbar^3}$; substituting back for our Fermi energy yields $E_f = \frac{\hbar^2}{2m}\left(\frac{6\pi^2}{g}\frac{N}{V}\right)^{2/3}$.
The idea is that this is the energy of the last filled state, because fermions that enter our system successively fill energy levels from lowest to highest! This gives us a Fermi wavenumber $k_F = p_F/\hbar$, so $N = \frac{g}{6\pi^2} V k_F^3$.
We can also define a Fermi temperature $T_f = E_f/k_B$; surprisingly, for electrons in metal, this temperature is around $10^4$ kelvin, and for electrons in a white dwarf, this is around $10^7$ kelvin! Since these numbers are so large, the description of Fermi gases is very good for deriving material properties of solids, as well as other fermion systems.
So now if we want to calculate our energy, we integrate $U = g \int \frac{d^3x\,d^3p}{(2\pi\hbar)^3}\,\frac{p^2/2m}{e^{(p^2/2m - \mu)\beta} + 1}$; if we take our temperature $T \to 0$, the same behavior with the denominator happens, and we're left with $U = \frac{gV}{2\pi^2\hbar^3} \int_0^{p_F} dp\,p^2\,\frac{p^2}{2m} = \frac{gV p_F^5}{20 m \pi^2 \hbar^3}$.
Again, we can rearrange to write U in terms of N, our number of particles, and $E_f$, our Fermi energy: this yields $U = \frac{3}{5} N E_f$.
Finally, how do we derive the equation of state? If U is written as a function of N, V at T = 0 (in our limiting case), then $P = -\left(\frac{\partial U}{\partial V}\right)_{N, T=0}$.
We can rewrite our expression above for U: it turns out that we have $U = \text{constant} \cdot \frac{N^{5/3}}{V^{2/3}}$, so taking the derivative, $P = \frac{2}{3}\frac{U}{V} = \frac{g p_F^5}{30\pi^2 m \hbar^3}$, which can be rewritten in terms of the Fermi energy as $PV = \frac{2}{5} N E_f$.
So even at temperature 0, there is some residual pressure! This is known as "degeneracy pressure," and it occurs because of the Pauli exclusion principle - this has stabilizing effects in certain systems (white dwarfs, for example).
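As a sanity check on these formulas, here is a sketch computing $E_f$, $T_f$, and the degeneracy pressure for conduction electrons in a metal. The electron density $n \approx 8.5 \times 10^{28}\ \mathrm{m}^{-3}$ (roughly copper) is an assumed textbook value, not from the lecture.

```python
import math

hbar = 1.054571817e-34   # J s
me = 9.1093837015e-31    # kg, electron mass
kB = 1.380649e-23        # J/K
g = 2                    # spin degeneracy for electrons

n = 8.5e28               # m^-3, conduction electrons in copper (assumed)

# E_f = hbar^2/(2m) * (6 pi^2 n / g)^(2/3)
Ef = hbar ** 2 / (2 * me) * (6 * math.pi ** 2 * n / g) ** (2.0 / 3.0)
Tf = Ef / kB             # Fermi temperature, ~10^4-10^5 K as claimed
P = 0.4 * n * Ef         # PV = (2/5) N E_f  =>  P = (2/5) n E_f
print(Ef / 1.602176634e-19, Tf, P)  # roughly 7 eV, 8e4 K, 4e10 Pa
```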
The only problem is that we’ve assumed our electrons are free particles, but we know that this isn’t true - they’re bound to nuclei! So we need to start adding correction terms to account for interactions like this.
50.2 What’s next?
There are a lot of cool and exciting areas that we can study after this class! Here are some of them: • Phase transitions. These can be observed in real life, and they also have applications in biophysics and other areas like particle physics!
• Non-equilibrium physics: how does a system out of equilibrium relax into equilibrium? There’s something called “linear response theory” here, as well as a notion of “non-equilibrium steady states.” • Dynamical processes.
• Thermodynamics of small systems - we've been using large N to simplify a lot of our calculations, but there's an exciting area of theoretical development where we start caring more about fluctuations! There are notions of "work-fluctuation" theorems and other strange phenomena.
We should all continue with statistical physics after this point! This class is a basis for doing other, more exciting things.
https://see.stanford.edu/Course/EE364B/108
Stanford Engineering Everywhere | EE364B - Convex Optimization II | Lecture 7 - Example: Piecewise Linear Minimization
EE364B - Convex Optimization II
Lecture 7 - Example: Piecewise Linear Minimization
Bookmarks
00:00:34 Recap: ACCPM Algorithm
00:02:57 Example: Piecewise Linear Minimization
00:17:07 ACCPM With Constraint Dropping
00:26:33 Epigraph ACCPM
00:28:00 Ellipsoid Method
00:33:42 Motivation (For Ellipsoid Method)
00:36:52 Ellipsoid Algorithm For Minimizing Convex Function
00:47:21 Properties Of Ellipsoid Method
00:49:58 Example (Using Ellipsoid Method)
00:51:16 Updating The Ellipsoid
01:07:57 Simple Stopping Criterion
01:11:14 Basic Ellipsoid Algorithm
01:12:39 Interpretation (Of Basic Ellipsoid Algorithm)
01:12:47 Example (Of Ellipsoid Method)
Lectures
1 - Course Logistics
2 - Recap: Subgradients
3 - Convergence Proof
4 - Project Subgradient For Dual Problem
5 - Stochastic Programming
6 - Addendum: Hit-And-Run CG Algorithm
7 - Example: Piecewise Linear Minimization
8 - Recap: Ellipsoid Method
9 - Comments: Latex Typesetting Style
10 - Decomposition Applications
11 - Sequential Convex Programming
12 - Recap: 'Difference Of Convex' Programming
13 - Recap: Conjugate Gradient Method
14 - Methods (Truncated Newton Method)
15 - Recap: Example: Minimum Cardinality Problem
16 - Model Predictive Control
17 - Stochastic Model Predictive Control
18 - Announcements
About the Lecture
TITLE: Lecture 7 - Example: Piecewise Linear Minimization
DURATION: 1 hr 14 min
TOPICS: Example: Piecewise Linear Minimization, ACCPM With Constraint Dropping, Epigraph ACCPM, Motivation (For Ellipsoid Method), Ellipsoid Algorithm For Minimizing Convex Function, Properties Of Ellipsoid Method, Example (Using Ellipsoid Method), Updating The Ellipsoid, Simple Stopping Criterion, Basic Ellipsoid Algorithm, Interpretation (Of Basic Ellipsoid Algorithm), Example (Of Ellipsoid Method)
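The basic ellipsoid algorithm covered in this lecture can be applied directly to piecewise-linear minimization, minimize over x of max_i (a_i'x + b_i). Below is a minimal Python/NumPy sketch using the standard center/shape-matrix update; the problem data, starting point, and iteration count are illustrative assumptions, not the course's own Matlab files.

```python
import numpy as np

def pwl_ellipsoid(A, b, x0, R, iters=200):
    """Basic ellipsoid method for min over x of max_i (A[i] @ x + b[i]).

    Start with the ball of radius R about x0; at each step, cut on a
    subgradient of the max term and update the ellipsoid (center x, shape P).
    """
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.asarray(x0, float)
    n = x.size
    P = R ** 2 * np.eye(n)
    best = np.inf
    for _ in range(iters):
        vals = A @ x + b
        best = min(best, vals.max())
        g = A[np.argmax(vals)]           # subgradient of the active piece
        gt = g / np.sqrt(g @ P @ g)      # normalize in the P-metric
        x = x - P @ gt / (n + 1)         # shift center away from the cut
        Pg = P @ gt
        P = n ** 2 / (n ** 2 - 1) * (P - 2.0 / (n + 1) * np.outer(Pg, Pg))
        P = 0.5 * (P + P.T)              # keep P symmetric numerically
    return x, best

# Example: f(x) = max(x1, -x1, x2, -x2) has minimum value 0 at the origin.
A = [[1, 0], [-1, 0], [0, 1], [0, -1]]
x_star, f_best = pwl_ellipsoid(A, [0, 0, 0, 0], x0=[1.0, 0.7], R=2.0)
print(f_best)  # close to 0
```

Note the update requires n ≥ 2 (the n²/(n²−1) factor blows up in one dimension), matching the method as presented in the slides.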
Course Details
Course Description
Continuation of Convex Optimization I. Subgradient, cutting-plane, and ellipsoid methods. Decentralized convex optimization via primal and dual decomposition. Alternating projections. Exploiting problem structure in implementation. Convex relaxations of hard problems, and global optimization via branch & bound. Robust optimization. Selected applications in areas such as control, circuit design, signal processing, and communications. Course requirements include a substantial project.
Prerequisites: Convex Optimization I
Syllabus
DOWNLOAD All Course Materials
Instructor
Boyd, Stephen
Stephen P. Boyd is the Samsung Professor of Engineering, and Professor of Electrical Engineering in the Information Systems Laboratory at Stanford University. His current research focus is on convex optimization applications in control, signal processing, and circuit design.
Professor Boyd received an AB degree in Mathematics, summa cum laude, from Harvard University in 1980, and a PhD in EECS from U. C. Berkeley in 1985. In 1985 he joined the faculty of Stanford’s Electrical Engineering Department. He has held visiting Professor positions at Katholieke University (Leuven), McGill University (Montreal), Ecole Polytechnique Federale (Lausanne), Qinghua University (Beijing), Universite Paul Sabatier (Toulouse), Royal Institute of Technology (Stockholm), Kyoto University, and Harbin Institute of Technology. He holds an honorary doctorate from Royal Institute of Technology (KTH), Stockholm.
Professor Boyd is the author of many research articles and three books: Linear Controller Design: Limits of Performance (with Craig Barratt, 1991), Linear Matrix Inequalities in System and Control Theory (with L. El Ghaoui, E. Feron, and V. Balakrishnan, 1994), and Convex Optimization (with Lieven Vandenberghe, 2004).
Professor Boyd has received many awards and honors for his research in control systems engineering and optimization, including an ONR Young Investigator Award, a Presidential Young Investigator Award, and an IBM faculty development award. In 1992 he received the AACC Donald P. Eckman Award, which is given annually for the greatest contribution to the field of control engineering by someone under the age of 35. In 1993 he was elected Distinguished Lecturer of the IEEE Control Systems Society, and in 1999, he was elected Fellow of the IEEE, with citation: “For contributions to the design and analysis of control systems using convex optimization based CAD tools.” He has been invited to deliver more than 30 plenary and keynote lectures at major conferences in both control and optimization.
In addition to teaching large graduate courses on Linear Dynamical Systems, Nonlinear Feedback Systems, and Convex Optimization, Professor Boyd has regularly taught introductory undergraduate Electrical Engineering courses on Circuits, Signals and Systems, Digital Signal Processing, and Automatic Control. In 1994 he received the Perrin Award for Outstanding Undergraduate Teaching in the School of Engineering, and in 1991, an ASSU Graduate Teaching Award. In 2003, he received the AACC Ragazzini Education award, for contributions to control education, with citation: “For excellence in classroom teaching, textbook and monograph preparation, and undergraduate and graduate mentoring of students in the area of systems, control, and optimization.”
Handouts
Lecture Materials
SubgradientsLecture SlidesLecture Notes
Subgradient MethodsLecture SlidesLecture NotesMatlab Files
Subgradient Methods for Constrained ProblemsLecture Slides
Stochastic Subgradient MethodLecture SlidesLecture NotesMatlab Files
Localization and Cutting-plane MethodsLecture SlidesLecture Notes
Analytic Center Cutting-plane MethodLecture SlidesLecture NotesMatlab Files
Ellipsoid MethodLecture SlidesMatlab Files
Ellipsoid Method Part IILecture SlidesMatlab Files
Primal and Dual DecompositionLecture SlidesLecture NotesMatlab Files
Decomposition ApplicationsLecture Slides
Sequential Convex ProgrammingLecture SlidesMatlab Files
Conjugate-gradient MethodLecture SlidesMatlab Files
Truncated Newton MethodsLecture SlidesMatlab Files
Methods for Convex-cardinality ProblemsLecture SlidesMatlab Files
Methods for Convex-cardinality Problems, Part IILecture SlidesMatlab Files
Model Predictive ControlLecture SlidesMatlab Files
Stochastic Model Predictive ControlLecture Slides
Branch-and-Bound MethodsLecture SlidesLecture NotesPython Files
Additional Lecture Notes
Notes on relaxation and randomized methods for nonconvex QCQP
Notes on convex-concave games and minimax
Numerical linear algebra software
Resources
This page contains links to various interesting and useful sites that relate in some way to convex optimization. It goes without saying that you’ll be periodically checking things using google and wikipedia. The wikipedia entry on convex optimization (and related topics) could be improved or extended.
Stephen Boyd’s research page. There’s a lot of material there, and you don’t have to know every detail in every paper, but you should certainly take an hour or more to browse through these papers.
EE364a web page. We expect you to know what’s in these pages.
The Convex Optimization book. You’re expected to know pretty well the material in this book. Unless you have a really good memory, you should be browsing through this.
Lieven Vandenberghe’s ee236a and ee236b course pages.
Athena Scientific books on optimization. You can also check the MIT courses that use some of these books.
CVX. Be sure to check out the very extensive library of examples. (Indeed, feel free to add to it.)
CVXOPT, which also includes an extensive library of examples, and CVXMOD
YALMIP, a Matlab toolbox for optimization modeling.
SOSTOOLS, a toolbox for formulating and solving sums of squares (SOS) optimization problems.
Assignments
Homework Assignments
Assignments may require Matlab files, see Software below.
| Assignment | Solutions | Due Date |
| --- | --- | --- |
| Assignment 1 | Solutions | Lecture 4 |
| Assignment 2 | Solutions | Lecture 5 |
| Assignment 3 | Solutions | Lecture 7 |
| Assignment 4 | Solutions | Lecture 11 |
| Assignment 5 | Solutions | Lecture 14 |
| Assignment 6 | Solutions | Lecture 17 |
| Assignment 7 | Solutions | Lecture 18 |
Final Project
Convex Optimization II requires an extensive project. Here are the project guidelines.
Here are the project deadlines:
Initial proposal, due Lecture 7
Revised proposal, due Lecture 12
Midterm progress report, due Lecture 14
Final report, due Lecture 18
Here is some example Latex code you can use for a template. Project proposals, reports, and posters must use Latex, with either our template or a good alternative.
| Matlab files for homework problems: |
| camera_data.m |
| flowgray.png |
| illum_data.m |
| log_normcdf.m |
| log_opt_invest.m |
| nonlin_meas_data.m |
| ps_data.m |
| pwl_fit_data.m |
| sep3way_data.m |
| sp_ln_sp_data.m |
| team_data.m |
| thrusters_data.m |
| tv_img_interp.m |
Software
Matlab Files
| Matlab files for homework problems: |
| bicommodity_data.m |
| ex_blockprecond.m |
| l1_heuristic_portfolio_data.m |
| quad2_min.m |
| sp_bayesnet_data.m |
Course Sessions (18):
Lecture 1
Duration: 1 hr 2 min
Topics: Course Logistics, Course Organization, Course Topics, Subgradients, Basic Inequality, Subgradient Of A Function, Subdifferential, Subgradient Calculus, Some Basic Rules (For Subgradient Calculus), Pointwise Supremum, Weak Rule For Pointwise Supremum, Expectation, Minimization, Composition, Subgradients And Sublevel Sets, Quasigradients
Transcripts: HTML, PDF
Lecture 2
Duration: 1 hr 7 min
Topics: Recap: Subgradients, Subgradients And Sublevel Sets, Quasigradients, Optimality Conditions – Unconstrained, Example: Piecewise Linear Minimization, Optimality Conditions – Constrained, Directional Derivative And Subdifferential, Descent Directions, Subgradients And Distance To Sublevel Sets, Descent Directions And Optimality, Subgradient Method, Step Size Rules, Assumptions, Convergence Results, Aside: Example: Applying Subgradient Method To Abs(X)
Transcripts: HTML, PDF
Lecture 3
Duration: 1 hr 15 min
Topics: Convergence Proof, Stopping Criterion, Example: Piecewise Linear Minimization, Optimal Step Size When F Is Known, Finding A Point In The Intersection Of Convex Sets, Alternating Projections, Example: Positive Semidefinite Matrix Completion, Speeding Up Subgradient Methods, A Couple Of Speedup Algorithms, Subgradient Methods For Constrained Problems, Projected Subgradient Method, Linear Equality Constraints, Example: Least L_1-Norm
Transcripts: HTML, PDF
Lecture 4
Duration: 1 hr 19 min
Topics: Project Subgradient For Dual Problem, Subgradient Of Negative Dual Function, Example (Strictly Convex Quadratic Function Over Unit Box), Subgradient Method For Constrained Optimization, Convergence, Example: Inequality Form LP, Stochastic Subgradient Method, Noisy Unbiased Subgradient, Stochastic Subgradient Method, Assumptions, Convergence Results, Convergence Proof, Stochastic Programming
Transcripts: HTML, PDF
Lecture 5
Duration: 1 hr 16 min
Topics: Stochastic Programming, Variations (Of Stochastic Programming), Expected Value Of A Convex Function, Example: Expected Value Of Piecewise Linear Function, On-Line Learning And Adaptive Signal Processing, Example: Mean-Absolute Error Minimization, Localization And Cutting-Plane Methods, Cutting-Plane Oracle, Neutral And Deep Cuts, Unconstrained Minimization, Deep Cut For Unconstrained Minimization, Feasibility Problem, Inequality Constrained Problem, Localization Algorithm, Example: Bisection On R, Specific Cutting-Plane Methods, Center Of Gravity Algorithm, Convergence Of CG Cutting-Plane Method
Transcripts: HTML, PDF
Lecture 6
Duration: 1 hr 12 min
Topics: Addendum: Hit-And-Run CG Algorithm, Maximum Volume Ellipsoid Method, Chebyshev Center Method, Analytic Center Cutting-Plane Method, Extensions (Of Cutting-Plane Methods), Dropping Constraints, Epigraph Cutting-Plane Method, PWL Lower Bound On Convex Function, Lower Bound, Analytic Center Cutting-Plane Method, ACCPM Algorithm, Constructing Cutting-Planes, Computing The Analytic Center, Infeasible Start Newton Method Algorithm, Properties (Of Infeasible Start Newton Method Algorithm), Pruning Constraints, PWL Lower Bound On Convex Function, Lower Bound In ACCPM, Stopping Criterion, Example: Piecewise Linear Minimization
Transcripts: HTML, PDF
Lecture 7
Duration: 1 hr 14 min
Topics: Example: Piecewise Linear Minimization, ACCPM With Constraint Dropping, Epigraph ACCPM, Motivation (For Ellipsoid Method), Ellipsoid Algorithm For Minimizing Convex Function, Properties Of Ellipsoid Method, Example (Using Ellipsoid Method), Updating The Ellipsoid, Simple Stopping Criterion, Basic Ellipsoid Algorithm, Interpretation (Of Basic Ellipsoid Algorithm), Example (Of Ellipsoid Method)
Transcripts: HTML, PDF
Lecture 8
Watch Online:Watch Now
Download:
Right Click, and Save AsDownload
Duration:1 hr 11 min
Watch Online:Topics: Recap: Ellipsoid Method, Improvements (To Ellipsoid Method), Proof Of Convergence, Interpretation Of Complexity, Deep Cut Ellipsoid Method, Ellipsoid Method With Deep Objective Cuts, Inequality Constrained Problems, Stopping Criterion, Epigraph Ellipsoid Method, Epigraph Ellipsoid Example, Summary: Methods For Handling, Nondifferentiable Convex Optimization Problems Directly, Decomposition Methods, Separable Problem, Complicating Variable, Primal Decomposition, Primal Decomposition Algorithm, Example (Using Primal Decomposition), Aside: Newton's Method With A Complicating Variable, Dual Decomposition, Dual Decomposition Algorithm
Watch Online:Download:
Right Click, and Save As Duration:
Watch NowDownload1 hr 11 min
Topics: Recap: Ellipsoid Method, Improvements (To Ellipsoid Method), Proof Of Convergence, Interpretation Of Complexity, Deep Cut Ellipsoid Method, Ellipsoid Method With Deep Objective Cuts, Inequality Constrained Problems, Stopping Criterion, Epigraph Ellipsoid Method, Epigraph Ellipsoid Example, Summary: Methods For Handling, Nondifferentiable Convex Optimization Problems Directly, Decomposition Methods, Separable Problem, Complicating Variable, Primal Decomposition, Primal Decomposition Algorithm, Example (Using Primal Decomposition), Aside: Newton's Method With A Complicating Variable, Dual Decomposition, Dual Decomposition Algorithm
Transcripts: HTML, PDF
Lecture 9
Duration: 1 hr 10 min
Topics: Comments: Latex Typesetting Style, Recap: Primal Decomposition, Dual Decomposition, Dual Decomposition Algorithm, Finding Feasible Iterates, Interpretation, Decomposition With Constraints, Primal Decomposition (With Constraints) Algorithm, Example (Primal Decomposition With Constraints), Dual Decomposition (With Constraints), Dual Decomposition (With Constraints) Algorithm, General Decomposition Structures, General Form, Primal Decomposition (General Structures), Dual Decomposition (General Structures), A More Complex Example, Aside: Pictorial Representation Of Primal And Dual Decomposition
Transcripts: HTML, PDF
Lecture 10
Duration: 1 hr 17 min
Topics: Decomposition Applications, Rate Control Setup, Rate Control Problem, Rate Control Lagrangian, Aside: Utility Functions, Rate Control Dual, Dual Decomposition Rate Control Algorithm, Generating Feasible Flows, Convergence Of Primal And Dual Objectives, Maximum Capacity Violation, Single Commodity Network Flow Setup, Network Flow Problem, Network Flow Lagrangian, Network Flow Dual, Recovering Primal From Dual, Dual Decomposition Network Flow Algorithm, Electrical Network Analogy, Example: Minimum Queueing Delay, Optimal Flow, Convergence Of Dual Function, Convergence Of Primal Residual, Convergence Of Dual Variables, Aside: More Complicated Problems
Transcripts: HTML, PDF
Lecture 11
Duration: 1 hr 16 min
Topics: Sequential Convex Programming, Methods For Nonconvex Optimization Problems, Sequential Convex Programming (SCP), Basic Idea Of SCP, Trust Region, Affine And Convex Approximations Via Taylor Expansions, Particle Method, Fitting Affine Or Quadratic Functions To Data, Quasi-Linearization, Example (Nonconvex QP), Lower Bound Via Lagrange Dual, Exact Penalty Formulation, Trust Region Update, Nonlinear Optimal Control, Discretization, SCP Progress, Convergence Of J And Torque Residuals, Predicted And Actual Decreases In Phi, Trajectory Plan, 'Difference Of Convex' Programming, Convex-Concave Procedure
Transcripts: HTML, PDF
Lecture 12
Duration: 1 hr 13 min
Topics: Recap: 'Difference Of Convex' Programming, Alternating Convex Optimization, Nonnegative Matrix Factorization, Comment: Nonconvex Methods, Conjugate Gradient Method, Three Classes Of Methods For Linear Equations, Symmetric Positive Definite Linear Systems, CG Overview, Solution And Error, Residual, Krylov Subspace, Properties Of Krylov Sequence, Cayley-Hamilton Theorem, Spectral Analysis Of Krylov Sequence
Transcripts: HTML, PDF
Lecture 13
Duration: 1 hr 15 min
Topics: Recap: Conjugate Gradient Method, Recap: Krylov Subspace, Spectral Analysis Of Krylov Sequence, A Bound On Convergence Rate, Convergence, Residual Convergence, CG Algorithm, Efficient Matrix-Vector Multiply, Shifting, Preconditioned Conjugate Gradient Algorithm, Choice Of Preconditioner, CG Summary, Truncated Newton Method, Approximate Or Inexact Newton Methods, CG Initialization, Hessian And Gradient, Methods, Convergence Versus Iterations, Convergence Versus Cumulative CG Steps, Truncated PCG Newton Method, Extensions
Transcripts: HTML, PDF
Lecture 14
Duration: 1 hr 13 min
Topics: Methods (Truncated Newton Method), Convergence Versus Iterations, Convergence Versus Cumulative CG Steps, Truncated PCG Newton Method, Truncated Newton Interior-Point Methods, Network Rate Control, Dual Rate Control Problem, Primal-Dual Search Direction (BV Section 11.7), Truncated Newton Primal-Dual Algorithm, Primal And Dual Objective Evolution, Relative Duality Gap Evolution, Relative Duality Gap Evolution (N = 10^6), L_1-Norm Methods For Convex-Cardinality Problems, L_1-Norm Heuristics For Cardinality Problems, Cardinality, General Convex-Cardinality Problems, Solving Convex-Cardinality Problems, Boolean LP As Convex-Cardinality Problem, Sparse Design, Sparse Modeling / Regressor Selection, Estimation With Outliers, Minimum Number Of Violations, Linear Classifier With Fewest Errors, Smallest Set Of Mutually Infeasible Inequalities, Portfolio Investment With Linear And Fixed Costs, Piecewise Constant Fitting, Piecewise Linear Fitting, L_1-Norm Heuristic, Example: Minimum Cardinality Problem, Polishing, Regressor Selection
Transcripts: HTML, PDF
Lecture 15
Duration: 1 hr 3 min
Topics: Recap: Example: Minimum Cardinality Problem, Interpretation As Convex Relaxation, Interpretation Via Convex Envelope, Weighted And Asymmetric L_1 Heuristics, Regressor Selection, Sparse Signal Reconstruction, L_1-Norm Methods For Convex-Cardinality Problems Part II, Total Variation Reconstruction, Total Variation Reconstruction, TV Reconstruction, L_2 Reconstruction, Iterated Weighted L_1 Heuristic, Sparse Solution Of Linear Inequalities, Detecting Changes In Time Series Model, Time Series And True Coefficients, TV Heuristic And Iterated TV Heuristic, Extension To Matrices, Factor Modeling, Trace Approximation Results, Summary: L_1-Norm Methods
Transcripts: HTML, PDF
Lecture 16
Duration: 1 hr 19 min
Topics: Model Predictive Control, Linear Time-Invariant Convex Optimal Control, Greedy Control, 'Solution' Via Dynamic Programming, Linear Quadratic Regulator, Finite Horizon Approximation, Cost Versus Horizon, Trajectories, Model Predictive Control (MPC), MPC Performance Versus Horizon, MPC Trajectories, Variations On MPC, Explicit MPC, MPC Problem Structure, Fast MPC, Supply Chain Management, Constraints And Objective, MPC And Optimal Trajectories, Variations On Optimal Control Problem
Transcripts: HTML, PDF
Lecture 17
Duration: 1 hr 17 min
Topics: Stochastic Model Predictive Control, Causal State-Feedback Control, Stochastic Finite Horizon Control, 'Solution' Via Dynamic Programming, Independent Process Noise, Linear Quadratic Stochastic Control, Certainty Equivalent Model Predictive Control, Stochastic MPC: Sample Trajectory, Cost Histogram, Simple Lower Bound For Quadratic Stochastic Control, Branch And Bound Methods, Methods For Nonconvex Optimization Problems, Branch And Bound Algorithms, Comment: Example Problem
Transcripts: HTML, PDF
Lecture 18
Duration: 1 hr 19 min
Topics: Announcements, Recap: Branch And Bound Methods, Basic Idea, Unconstrained, Nonconvex Minimization, Lower And Upper Bound Functions, Branch And Bound Algorithm, Comment: Picture Of Branch And Bound Algorithm In R^2, Comment: Binary Tree, Example, Pruning, Convergence Analysis, Bounding Condition Number, Small Volume Implies Small Size, Mixed Boolean-Convex Problem, Solution Methods, Lower Bound Via Convex Relaxation, Upper Bounds, Branching, New Bounds From Subproblems, Branch And Bound Algorithm (Mixed Boolean-Convex Problem), Minimum Cardinality Example, Bounding X, Relaxation Problem, Algorithm Progress, Global Lower And Upper Bounds, Portion Of Non-Pruned Sparsity Patterns, Number Of Active Leaves In Tree, Global Lower And Upper Bounds,
Transcripts: HTML, PDF
Stanford Center for Professional Development
© Stanford University, Stanford, California 94305 |
187838 | https://en.wikipedia.org/wiki/Reaction_rate_constant
Reaction rate constant
From Wikipedia, the free encyclopedia
Coefficient of rate of a chemical reaction
In chemical kinetics, a reaction rate constant or reaction rate coefficient (k) is a proportionality constant which quantifies the rate and direction of a chemical reaction by relating it to the concentration of the reactants.
For a reaction between reactants A and B to form a product C,
a A + b B → c C
where
: A and B are reactants
: C is a product
: a, b, and c are stoichiometric coefficients,
the reaction rate is often found to have the form:
r = k(T) [A]^m [B]^n
Here k(T) is the reaction rate constant, which depends on temperature, and [A] and [B] are the molar concentrations of substances A and B in moles per unit volume of solution, assuming the reaction is taking place throughout the volume of the solution. (For a reaction taking place at a boundary, one would instead use moles of A or B per unit area.)
The exponents m and n are called partial orders of reaction and are not generally equal to the stoichiometric coefficients a and b. Instead they depend on the reaction mechanism and can be determined experimentally.
The sum of m and n, that is (m + n), is called the overall order of reaction.
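As a minimal numeric illustration of this rate law (all values below are made up for illustration; the function name is not from any library):

```python
def reaction_rate(k, conc_A, conc_B, m, n):
    """Rate r = k * [A]^m * [B]^n for experimentally determined orders m, n."""
    return k * conc_A**m * conc_B**n

# Second-order overall (m = 1, n = 1): doubling [A] doubles the rate.
r1 = reaction_rate(k=0.5, conc_A=0.1, conc_B=0.2, m=1, n=1)  # 0.5 * 0.1 * 0.2 = 0.01
r2 = reaction_rate(k=0.5, conc_A=0.2, conc_B=0.2, m=1, n=1)  # twice r1
```

Because m and n come from experiment rather than stoichiometry, the same helper covers fractional or zero orders as well.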
Elementary steps
For an elementary step, there is a relationship between stoichiometry and rate law, as determined by the law of mass action. Almost all elementary steps are either unimolecular or bimolecular. For a unimolecular step
A → P
the reaction rate is described by r = k1[A], where k1 is a unimolecular rate constant. Since a reaction requires a change in molecular geometry, unimolecular rate constants cannot be larger than the frequency of a molecular vibration. Thus, in general, a unimolecular rate constant has an upper limit of k1 ≤ ~10^13 s^−1.
For a bimolecular step
A + B → P
the reaction rate is described by r = k2[A][B], where k2 is a bimolecular rate constant. Bimolecular rate constants have an upper limit that is determined by how frequently molecules can collide, and the fastest such processes are limited by diffusion. Thus, in general, a bimolecular rate constant has an upper limit of k2 ≤ ~10^10 M^−1·s^−1.
For a termolecular step
A + B + C → P
the reaction rate is described by r = k3[A][B][C], where k3 is a termolecular rate constant.
There are few examples of elementary steps that are termolecular or higher order, due to the low probability of three or more molecules colliding in their reactive conformations and in the right orientation relative to each other to reach a particular transition state. There are, however, some termolecular examples in the gas phase. Most involve the recombination of two atoms or small radicals or molecules in the presence of an inert third body which carries off excess energy, such as O + O2 + N2 → O3 + N2. One well-established example is the termolecular step 2 I + H2 → 2 HI in the hydrogen-iodine reaction. In cases where a termolecular step might plausibly be proposed, one of the reactants is generally present in high concentration (e.g., as a solvent or diluent gas).
Relationship to other parameters
For a first-order reaction (including a unimolecular one-step process), there is a direct relationship between the unimolecular rate constant and the half-life of the reaction: t1/2 = ln 2 / k. Transition state theory gives a relationship between the rate constant k and the Gibbs free energy of activation ΔG‡, a quantity that can be regarded as the free energy change needed to reach the transition state. In particular, this energy barrier incorporates both enthalpic (ΔH‡) and entropic (ΔS‡) changes that need to be achieved for the reaction to take place. The result from transition state theory is k = (kB T / h) exp(−ΔG‡ / RT), where h is the Planck constant, kB the Boltzmann constant, and R the molar gas constant. As a useful rule of thumb, a first-order reaction with a rate constant of 10^−4 s^−1 will have a half-life (t1/2) of approximately 2 hours. For a one-step process taking place at room temperature, the corresponding Gibbs free energy of activation (ΔG‡) is approximately 23 kcal/mol.
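The rules of thumb above can be checked numerically from the two relationships just given; a minimal Python sketch (function names are illustrative, not from any kinetics library):

```python
import math

def half_life(k):
    """Half-life (s) of a first-order reaction: t_1/2 = ln 2 / k."""
    return math.log(2) / k

def activation_free_energy(k, T=298.15):
    """Gibbs free energy of activation (J/mol) from the transition-state
    result k = (kB*T/h) * exp(-dG/(R*T)), solved for dG."""
    kB = 1.380649e-23   # Boltzmann constant, J/K
    h = 6.62607015e-34  # Planck constant, J*s
    R = 8.314462618     # molar gas constant, J/(mol*K)
    return R * T * math.log(kB * T / (h * k))

t = half_life(1e-4)                # ~6.9e3 s, i.e. roughly 2 hours
dG = activation_free_energy(1e-4)  # ~9.6e4 J/mol, i.e. roughly 23 kcal/mol
```

Both numbers reproduce the stated rules of thumb for k = 10^−4 s^−1 at room temperature.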
Dependence on temperature
The Arrhenius equation is an elementary treatment that gives the quantitative basis of the relationship between the activation energy and the rate at which a reaction proceeds. The rate constant as a function of thermodynamic temperature is then given by:
k(T) = A e^(−Ea / RT)
and the reaction rate by:
r = A e^(−Ea / RT) [A]^m [B]^n
where Ea is the activation energy, R is the gas constant, and m and n are the experimentally determined partial orders in [A] and [B], respectively. Since at temperature T the molecules have energies distributed according to a Boltzmann distribution, one can expect the proportion of collisions with energy greater than Ea to vary with e^(−Ea / RT). The constant of proportionality A is the pre-exponential factor, or frequency factor (not to be confused here with the reactant A); it takes into consideration the frequency at which reactant molecules are colliding and the likelihood that a collision leads to a successful reaction. Here, A has the same dimensions as an (m + n)-order rate constant (see Units below).
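As a concrete illustration of the Arrhenius form k(T) = A·e^(−Ea/RT), the sketch below uses made-up parameters (A = 10^13, Ea = 50 kJ/mol) to show the familiar behaviour that, for moderate activation energies near room temperature, the rate constant roughly doubles per 10 K rise:

```python
import math

R = 8.314462618  # molar gas constant, J/(mol*K)

def arrhenius_k(A, Ea, T):
    """Rate constant from the Arrhenius equation.
    A  : pre-exponential factor (carries the units of k)
    Ea : activation energy, J/mol
    T  : thermodynamic temperature, K
    """
    return A * math.exp(-Ea / (R * T))

# Illustrative parameters only:
k298 = arrhenius_k(A=1e13, Ea=50e3, T=298.15)
k308 = arrhenius_k(A=1e13, Ea=50e3, T=308.15)
ratio = k308 / k298  # close to 2
```

The same two evaluations, plotted over a range of 1/T, give the straight-line Arrhenius plot used to extract Ea experimentally.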
Another popular model, derived using more sophisticated statistical mechanical considerations, is the Eyring equation from transition state theory:
k = κ (kB T / h) (c⊖)^(1−M) e^(−ΔG‡ / RT)
where ΔG‡ is the free energy of activation, a parameter that incorporates both the enthalpy and entropy change needed to reach the transition state. The temperature dependence of ΔG‡ is used to compute these parameters, the enthalpy of activation ΔH‡ and the entropy of activation ΔS‡, based on the defining formula ΔG‡ = ΔH‡ − TΔS‡. In effect, the free energy of activation takes into account both the activation energy and the likelihood of successful collision, while the factor kBT/h gives the frequency of molecular collision.
The factor (c⊖)^(1−M) ensures the dimensional correctness of the rate constant when the transition state in question is bimolecular or higher. Here, c⊖ is the standard concentration, generally chosen based on the unit of concentration used (usually c⊖ = 1 mol·L^−1 = 1 M), and M is the molecularity of the transition state. Lastly, κ, usually set to unity, is known as the transmission coefficient, a parameter which essentially serves as a "fudge factor" for transition state theory.
The biggest difference between the two theories is that Arrhenius theory attempts to model the reaction (single- or multi-step) as a whole, while transition state theory models the individual elementary steps involved. Thus, they are not directly comparable, unless the reaction in question involves only a single elementary step.
Finally, in the past, collision theory, in which reactants are viewed as hard spheres with a particular cross-section, provided yet another common way to rationalize and model the temperature dependence of the rate constant, although this approach has gradually fallen into disuse. The equation for the rate constant is similar in functional form to both the Arrhenius and Eyring equations:
k = PZ e^(−ΔE / RT)
where P is the steric (or probability) factor, Z is the collision frequency, and ΔE is the energy input required to overcome the activation barrier. Of note, Z ∝ T^(1/2), making the temperature dependence of k different from both the Arrhenius and Eyring models.
Comparison of models
All three theories model the temperature dependence of k using an equation of the form
k = C T^α e^(−ΔE / RT)
for some constant C, where α = 0, 1/2, and 1 give Arrhenius theory, collision theory, and transition state theory, respectively, although the imprecise notion of ΔE, the energy needed to overcome the activation barrier, has a slightly different meaning in each theory. In practice, experimental data do not generally allow a determination to be made as to which theory is "correct" in terms of best fit. All three are conceptual frameworks that make numerous assumptions, both realistic and unrealistic, in their derivations; as a result, they are capable of providing different insights into a system.
Units
The units of the rate constant depend on the overall order of reaction.
If concentration is measured in units of mol·L^−1 (sometimes abbreviated as M), then
For order (m + n), the rate constant has units of mol^(1−(m+n))·L^((m+n)−1)·s^−1 (or M^(1−(m+n))·s^−1)
For order zero, the rate constant has units of mol·L^−1·s^−1 (or M·s^−1)
For order one, the rate constant has units of s^−1
For order two, the rate constant has units of L·mol^−1·s^−1 (or M^−1·s^−1)
For order three, the rate constant has units of L^2·mol^−2·s^−1 (or M^−2·s^−1)
For order four, the rate constant has units of L^3·mol^−3·s^−1 (or M^−3·s^−1)
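The pattern in this list can be captured in a one-line rule; a small sketch (the helper name is illustrative):

```python
def rate_constant_units(order):
    """Units of a rate constant for a given overall reaction order,
    with concentration in mol/L and time in seconds."""
    if order == 1:
        return "s^-1"
    # General case: mol^(1-order) * L^(order-1) * s^-1
    return f"mol^{1 - order}*L^{order - 1}*s^-1"

# e.g. a second-order rate constant carries units equivalent to L*mol^-1*s^-1
```

For order 1 the concentration factors cancel exactly, which is why first-order rate constants (and half-lives) are independent of the concentration unit chosen.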
Plasma and gases
Calculation of the rate constants of the processes of generation and relaxation of electronically and vibrationally excited particles is of significant importance: it is used, for example, in the computer simulation of processes in plasma chemistry or microelectronics. First-principles-based models should be used for such calculations; this can be done with the help of computer simulation software.
Rate constant calculations
Rate constants can be calculated for elementary reactions by molecular dynamics simulations. One possible approach is to calculate the mean residence time of the molecule in the reactant state. Although this is feasible for small systems with short residence times, the approach is not widely applicable, as reactions are often rare events on the molecular scale. One simple way to overcome this problem is divided saddle theory. Other methods, such as the Bennett–Chandler procedure and milestoning, have also been developed for rate constant calculations.
Divided saddle theory
The theory is based on the assumption that the reaction can be described by a reaction coordinate, and that a Boltzmann distribution can be applied at least in the reactant state. A new, especially reactive segment of the reactant state, called the saddle domain, is introduced, and the rate constant is factored as:
k = αSDRS · kSD
where αSDRS is the conversion factor between the reactant state and the saddle domain, while kSD is the rate constant from the saddle domain. The former can be simply calculated from the free energy surface; the latter is easily accessible from short molecular dynamics simulations.
See also
Reaction rate
Equilibrium constant
Molecularity
References
^ "Chemical Kinetics Notes". www.chem.arizona.edu. Archived from the original on 31 March 2012. Retrieved 5 May 2018.
^ Lowry, Thomas H. (1987). Mechanism and theory in organic chemistry. Richardson, Kathleen Schueller (3rd ed.). New York: Harper & Row. ISBN 978-0060440848. OCLC 14214254.
^ Moore, John W.; Pearson, Ralph G. (1981). Kinetics and Mechanism (3rd ed.). John Wiley. pp. 226–7. ISBN 978-0-471-03558-9.
^ The reactions of nitric oxide with the diatomic molecules Cl2, Br2 or O2 (e.g., 2 NO + Cl2 → 2 NOCl, etc.) have also been suggested as examples of termolecular elementary processes. However, other authors favor a two-step process, each of which is bimolecular: (NO + Cl2 ⇄ NOCl2, NOCl2 + NO → 2 NOCl). See: Compton, R.G.; Bamford, C. H.; Tipper, C.F.H., eds. (2014) . "5. Reactions of the Oxides of Nitrogen §5.5 Reactions with Chlorine". Reactions of Non-metallic Inorganic Compounds. Comprehensive Chemical Kinetics. Vol. 6. Elsevier. p. 174. ISBN 978-0-08-086801-1.
^ Sullivan, John H. (1967-01-01). "Mechanism of the Bimolecular Hydrogen—Iodine Reaction". The Journal of Chemical Physics. 46 (1): 73–78. Bibcode:1967JChPh..46...73S. doi:10.1063/1.1840433. ISSN 0021-9606.
^ Kotz, John C. (2009). Chemistry & chemical reactivity. Treichel, Paul., Townsend, John R. (7th ed.). Belmont, Calif.: Thomson Brooks/ Cole. p. 703. ISBN 9780495387039. OCLC 220756597.
^ Laidler, Keith J. (1987). Chemical Kinetics (3rd ed.). Harper & Row. p. 113. ISBN 0-06-043862-2.
^ Steinfeld, Jeffrey I.; Francisco, Joseph S.; Hase, William L. (1999). Chemical Kinetics and Dynamics (2nd ed.). Prentice Hall. p. 301. ISBN 0-13-737123-3.
^ Carpenter, Barry K. (1984). Determination of organic reaction mechanisms. New York: Wiley. ISBN 978-0471893691. OCLC 9894996.
^ Blauch, David. "Differential Rate Laws". Chemical Kinetics.
^ a b Daru, János; Stirling, András (2014). "Divided Saddle Theory: A New Idea for Rate Constant Calculation" (PDF). J. Chem. Theory Comput. 10 (3): 1121–1127. doi:10.1021/ct400970y. PMID 26580187.
^ Chandler, David (1978). "Statistical mechanics of isomerization dynamics in liquids and the transition state approximation". J. Chem. Phys. 68 (6): 2959–2970. Bibcode:1978JChPh..68.2959C. doi:10.1063/1.436049.
^ Bennett, C. H. (1977). Christofferson, R. (ed.). Algorithms for Chemical Computations, ACS Symposium Series No. 46. Washington, D.C.: American Chemical Society. ISBN 978-0-8412-0371-6.
^ West, Anthony M.A.; Elber, Ron; Shalloway, David (2007). "Extending molecular dynamics time scales with milestoning: Example of complex kinetics in a solvated peptide". The Journal of Chemical Physics. 126 (14): 145104. Bibcode:2007JChPh.126n5104W. doi:10.1063/1.2716389. PMID 17444753.
187839 | https://en.wikipedia.org/wiki/Direction_cosine
Direction cosine
Cosines of the angles between a vector and the coordinate axes
In analytic geometry, the direction cosines (or directional cosines) of a vector are the cosines of the angles between the vector and the three positive coordinate axes. Equivalently, they are the contributions of each component of the basis to a unit vector in that direction.
Three-dimensional Cartesian coordinates
Further information: Cartesian coordinates
If v is a Euclidean vector in three-dimensional Euclidean space,
v = vx ex + vy ey + vz ez,
where ex, ey, ez are the standard basis in Cartesian notation, then the direction cosines are
α = cos a = vx / |v|,  β = cos b = vy / |v|,  γ = cos c = vz / |v|.
It follows, by squaring each equation and adding the results, that
cos^2 a + cos^2 b + cos^2 c = α^2 + β^2 + γ^2 = 1.
Here α, β, γ are the direction cosines (and the Cartesian coordinates of the unit vector v/|v|), and a, b, c are the direction angles of the vector v.
The direction angles a, b, c are acute or obtuse angles, i.e., 0 ≤ a ≤ π, 0 ≤ b ≤ π and 0 ≤ c ≤ π, and they denote the angles formed between v and the unit basis vectors ex, ey, ez.
General meaning
More generally, direction cosine refers to the cosine of the angle between any two vectors. They are useful for forming direction cosine matrices that express one set of orthonormal basis vectors in terms of another set, or for expressing a known vector in a different basis. Simply put, direction cosines provide an easy method of representing the direction of a vector in a Cartesian coordinate system.
Applications
Determining angles between two vectors
If vectors u and v have direction cosines (αu, βu, γu) and (αv, βv, γv) respectively, with an angle θ between them, their unit vectors are
û = αu ex + βu ey + γu ez and v̂ = αv ex + βv ey + γv ez.
Taking the dot product of these two unit vectors yields
cos θ = αu αv + βu βv + γu γv,
where θ is the angle between the two unit vectors, and is also the angle between u and v. Since θ is treated as a geometric angle, its cosine is taken to be non-negative; therefore only the positive value of the dot product is used, yielding the final result
cos θ = |αu αv + βu βv + γu γv|.
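The two steps above — normalizing each vector to get its direction cosines, then dotting them — can be sketched as follows (function names are illustrative):

```python
import math

def direction_cosines(v):
    """Direction cosines (alpha, beta, gamma) of a 3-D vector v."""
    norm = math.sqrt(sum(x * x for x in v))
    return tuple(x / norm for x in v)

def angle_between(u, v):
    """Geometric angle (radians) between u and v via their direction cosines.
    The absolute value keeps cos(theta) non-negative, as in the text."""
    cu, cv = direction_cosines(u), direction_cosines(v)
    dot = sum(a * b for a, b in zip(cu, cv))
    return math.acos(abs(dot))

# The direction cosines of any vector satisfy alpha^2 + beta^2 + gamma^2 = 1:
alpha, beta, gamma = direction_cosines((1.0, 2.0, 2.0))  # (1/3, 2/3, 2/3)
```

For perpendicular vectors the dot product of the direction cosines vanishes, so angle_between returns π/2 as expected.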
See also
Cartesian tensor
Euler angles
Retrieved from "
Categories:
Algebraic geometry
Vectors (mathematics and physics)
Geometry stubs
Hidden categories:
Articles with short description
Short description is different from Wikidata
Articles lacking in-text citations from January 2017
All articles lacking in-text citations
All stub articles |
187840 | https://www.onlinemathlearning.com/compare-unit-fractions-worksheet.html | Compare Unit Fractions Worksheets (answers, printable, online, grade 3)
Compare Unit Fractions Worksheets
Related Pages
Math Worksheets
Lessons for Third Grade
Free Printable Worksheets
Printable Math Worksheets
Printable “Fraction” worksheets:
Equal Parts
Introduction to Fractions
Compare Unit Fractions
Compare Fractions with Same Numerator
Fractions on the Number Line
Compare Fractions on the Number Line
Compare Fractions
Order Fractions
Printable Math Worksheets
Compare Unit Fractions Worksheets
In these free math worksheets, you will learn how to represent and compare unit fractions.
How to compare unit fractions?
Comparing unit fractions involves determining which of two or more unit fractions is greater or smaller. A unit fraction is a fraction where the numerator (top number) is 1.
We can use visual aids such as fraction strips, fraction bars, circles, or number lines to represent the unit fractions.
For example, we will be able to see that 1/3 is greater than 1/4.
Imagine a pizza cut into 3 slices and another same-size pizza cut into 4 slices. We can visualize that a slice from the pizza cut into 3 parts is bigger than a slice from the pizza cut into 4 parts.
The fraction with the larger denominator has smaller pieces, so it represents a smaller fraction of the whole. In this case, 1/4 is smaller than 1/3.
We can also compare the denominators of the unit fractions directly. A smaller denominator means larger pieces, indicating a bigger fraction.
For example, 1/4 is greater than 1/6.
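The comparison rule can also be checked with exact arithmetic using Python's fractions module (a small illustrative sketch; the helper name is ours):

```python
from fractions import Fraction

def compare_unit_fractions(d1, d2):
    # Exact comparison of 1/d1 and 1/d2; the smaller denominator
    # gives the larger unit fraction.
    a, b = Fraction(1, d1), Fraction(1, d2)
    if a > b:
        return f"1/{d1} > 1/{d2}"
    if a < b:
        return f"1/{d1} < 1/{d2}"
    return f"1/{d1} = 1/{d2}"
```

For instance, comparing denominators 3 and 4 reports 1/3 > 1/4.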
Have a look at this video if you need to learn how to compare unit fractions.
Compare Unit Fractions
Click on the following worksheet to get a printable pdf document.
Scroll down the page for more Compare Unit Fractions Worksheets.
More Compare Unit Fractions Worksheets
Printable
(Answers on the second page.)
Compare Unit Fractions Worksheet #1
Compare Unit Fractions Worksheet #2
Compare Unit Fractions Worksheet #3
Compare Unit Fractions Worksheet #4
Related Lessons & Worksheets
Compare Unit Fractions
Equivalent Fractions
Reduce Proper Fractions
Simplify Proper & Improper Fractions
Improper Fractions to Mixed Numbers
Mixed Numbers to Improper Fractions
Educational Games
Try out our new and fun Fraction Concoction Game.
Add and subtract fractions to make exciting fraction concoctions following a recipe. There are four levels of difficulty: Easy, medium, hard and insane. Practice the basics of fraction addition and subtraction or challenge yourself with the insane level.
We welcome your feedback, comments and questions about this site or page. Please submit your feedback or enquiries via our Feedback page.
Copyright © 2005, 2025 - OnlineMathLearning.com.
Embedded content, if any, are copyrights of their respective owners.
187841 | https://www.elephango.com/index.cfm/pg/k12learning/lcid/10961/Prime_Factorization | Home
Prime Factorization
Lesson ID: 10961
How do prime numbers "factor" into your life? Numbers have unique fingerprints, and you will learn about them by watching a video, playing online games, online practice, and, of course, your computer!
Categories: Pre-Algebra, Whole Numbers and Operations
Subject: Math
Learning style: Visual
Personality style: Lion
Grade Level: Middle School (6-8)
Lesson Type: Skill Sharpener
Lesson Plan - Get It!
Did you know that, just like you have a unique fingerprint, numbers also have a unique fingerprint?
It is called their prime factorization.
Other than zero (0) and one (1), every whole number has a prime factorization.
Prime factorization is when numbers are broken down into factors of prime numbers.
A prime number is a whole number greater than 1 that cannot be divided evenly by any number other than 1 and itself.
If you need to review prime numbers, check out our lesson found under Additional Resources in the right-hand sidebar.
Look at this example of prime factorization: 12 = 2 × 2 × 3.
Next, watch this great Math Antics - Prime Factorization video:
What is the definition of prime factorization?
If you said, "The set of prime numbers multiplied together to get another number," you are correct!
If you said, "The action of finding the set of prime numbers multiplied to get a number," you are also correct!
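The "action of finding" the prime factors can be written as a short computer program. Here is one simple way to do it in Python, using trial division (the function name is ours; other methods exist):

```python
def prime_factorization(n):
    # Trial division: peel off each prime factor in increasing order.
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:          # whatever remains is itself prime
        factors.append(n)
    return factors
```

For example, prime_factorization(12) returns [2, 2, 3], matching 12 = 2 × 2 × 3.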
Keep going in the Got It? section!
Resources and Extras
Supplies
printer
paper
scissors
Resources Referenced in the Lesson
Factors of Numbers Game
Additional Resources
Prime Suspects
Suggested Lessons
Skip Counting by Sixes
Math | Intermediate (3-5)
Solving One-Step Equation Mysteries
Math | Middle School (6-8)
Greater Than and Less Than
Math | PreK/K, Primary (K-2)
Mr. D Math - The Distributive Property in Action!
Math | Middle School (6-8)
Elephango © 2025
187842 | https://mathoverflow.net/questions/30156/demystifying-complex-numbers |
Demystifying complex numbers
At the end of this month I start teaching complex analysis to 2nd year undergraduates, mostly from engineering but some from science and maths. The main applications for them in future studies are contour integrals and the Laplace transform, but this should be a "real" complex analysis course which I could later refer to in honours courses. I am now confident (after this discussion, especially Gauss's complaints given in Keith's comment) that the name "complex" is discouraging to average students.
Why do we need to study numbers which do not belong to the real world?
We all know that the thesis is wrong, and I have in mind some examples where the use of functions of a complex variable simplifies solving considerably (I give two below). The drawback is that all of them assume some prior knowledge from students.
So I would be happy to learn elementary examples which may convince students that complex numbers and functions of a complex variable are useful. As this question runs in the community wiki mode, I would be glad to see one example per answer.
Thank you in advance!
Here are the two promised examples. I was reminded of the second one by several answers and comments about trigonometric functions (and also by the notification that "the bounty on your question Trigonometry related to Rogers-Ramanujan identities expires within three days"; it seems to be harder than I expected).
Example 1. What is the Fourier expansion of the (unbounded) periodic function $$ f(x)=\ln\Bigl\lvert\sin\frac x2\Bigr\rvert\ ? $$
Solution. The function $f(x)$ is periodic with period $2\pi$ and has logarithmic singularities at the points $2\pi k$, $k\in\mathbb Z$.
Consider the function on the interval $x\in[\varepsilon,2\pi-\varepsilon]$. The series $$ \sum_{n=1}^\infty\frac{z^n}n, \qquad z=e^{ix}, $$ converges for all values $x$ from the interval. Since $$ \Bigl\lvert\sin\frac x2\Bigr\rvert=\sqrt{\frac{1-\cos x}2} $$ and $\operatorname{Re}\ln w=\ln\lvert w\rvert$, where we choose $w=\frac12(1-z)$, we deduce that $$ \operatorname{Re}\Bigl(\ln\frac{1-z}2\Bigr)=\ln\sqrt{\frac{1-\cos x}2} =\ln\Bigl\lvert\sin\frac x2\Bigr\rvert. $$ Thus, $$ \ln\Bigl\lvert\sin\frac x2\Bigr\rvert =-\ln2-\operatorname{Re}\sum_{n=1}^\infty\frac{z^n}n =-\ln2-\sum_{n=1}^\infty\frac{\cos nx}n. $$ As $\varepsilon>0$ can be taken arbitrarily small, the result remains valid for all $x\ne2\pi k$.
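As an aside not in the original post, the final expansion is easy to sanity-check numerically; here is a small Python sketch (function name and truncation level are our choices):

```python
import math

def fourier_partial(x, terms=50_000):
    # Truncation of -ln 2 - sum_{n>=1} cos(n x)/n,
    # which should approximate ln|sin(x/2)| for x != 2*pi*k.
    s = -math.log(2)
    for n in range(1, terms + 1):
        s -= math.cos(n * x) / n
    return s
```

At, say, x = 1 the truncated series agrees with ln|sin(1/2)| to a few decimal places (the series converges slowly, like 1/N).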
Example 2. Let $p$ be an odd prime number. $\newcommand\Legendre{\genfrac(){}{}}$For an integer $a$ relatively prime to $p$, the Legendre symbol $\Legendre ap$ is $+1$ or $-1$ depending on whether the congruence $x^2\equiv a\pmod{p}$ is solvable or not. Using the elementary result (a consequence of Fermat's little theorem) that $$ \Legendre ap \equiv a^{(p-1)/2}\pmod p, \tag{$\star$}\label{star} $$ show that $$ \Legendre 2p=(-1)^{(p^2-1)/8}. $$
Solution. In the ring $\mathbb Z+\mathbb Zi=\Bbb Z[i]$, the binomial formula implies $$ (1+i)^p\equiv1+i^p\pmod p. $$ On the other hand, $$ (1+i)^p =\bigl(\sqrt2e^{\pi i/4}\bigr)^p =2^{p/2}\biggl(\cos\frac{\pi p}4+i\sin\frac{\pi p}4\biggr) $$ and $$ 1+i^p =1+(e^{\pi i/2})^p =1+\cos\frac{\pi p}2+i\sin\frac{\pi p}2 =1+i\sin\frac{\pi p}2. $$ Comparing the real parts implies that $$ 2^{p/2}\cos\frac{\pi p}4\equiv1\pmod p, $$ hence from $\sqrt2\cos(\pi p/4)\in\{\pm1\}$ we conclude that $$ 2^{(p-1)/2}\equiv\sqrt2\cos\frac{\pi p}4\pmod p. $$ Then using the elementary result \eqref{star}: $$ \Legendre2p \equiv2^{(p-1)/2} \equiv\sqrt2\cos\frac{\pi p}4 =\begin{cases} 1 & \text{if } p\equiv\pm1\pmod8, \cr -1 & \text{if } p\equiv\pm3\pmod8, \end{cases} $$ which is exactly the required formula.
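As another illustrative aside, the closed form can be checked against Euler's criterion for small primes in Python (helper names are ours):

```python
def legendre_two(p):
    # Euler's criterion: (2/p) is congruent to 2^((p-1)/2) mod p.
    return 1 if pow(2, (p - 1) // 2, p) == 1 else -1

def closed_form(p):
    # The formula being shown: (2/p) = (-1)^((p^2 - 1)/8).
    return -1 if ((p * p - 1) // 8) % 2 else 1
```

Running both over the first few odd primes confirms they agree, e.g. (2/7) = +1 and (2/5) = -1.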
soft-question
cv.complex-variables
teaching
edited Jan 4, 2022 · community wiki · Wadim Zudilin
Maybe an option is to have them understand that real numbers also do not belong to the real world, that all sorts of numbers are simply abstractions.
– Mariano Suárez-Álvarez, Jul 1, 2010 at 14:50
Probably your electrical engineering students understand better than you do that complex numbers (in polar form) are used to represent amplitude and frequency in their area of study.
– Gerald Edgar, Jul 1, 2010 at 15:36
Not an answer, but some suggestions: try reading the beginning of Needham's Visual Complex Analysis (usf.usfca.edu/vca/) and the end of Levi's The Mathematical Mechanic (amazon.com/Mathematical-Mechanic-Physical-Reasoning-Problems/dp/…).
– Qiaochu Yuan, Jul 1, 2010 at 17:05
Your example has a hidden assumption that a student actually admits the importance of calculating the Fourier series of $\ln\left|\sin{x\over 2}\right|$, which I find dubious. The example with an oscillator's ODE is more convincing, IMO.
– Paul Yuryev, Jul 2, 2010 at 3:02
@Mariano, Gerald and Qiaochu: Thanks for the ideas! Visual Complex Analysis sounds indeed great, and I'll follow Levi's book as soon as I reach the uni library. @Paul: I give the example (which I personally like) and explain that I do not consider it elementary enough for the students. It's a matter of taste! I've never used Fourier series in my own research but it doesn't imply that I doubt of their importance. We all (including students) have different criteria for measuring such things.
– Wadim Zudilin, Jul 2, 2010 at 5:06
44 Answers
The nicest elementary illustration I know of the relevance of complex numbers to calculus is its link to radius of convergence, which students learn how to compute by various tests, but more mechanically than conceptually. The series for $1/(1-x)$, $\log(1+x)$, and $\sqrt{1+x}$ have radius of convergence 1 and we can see why: there's a problem at one of the endpoints of the interval of convergence (the function blows up or it's not differentiable). However, the function $1/(1+x^2)$ is nice and smooth on the whole real line with no apparent problems, but its radius of convergence at the origin is 1. From the viewpoint of real analysis this is strange: why does the series stop converging? Well, if you look at distance 1 in the complex plane...
More generally, you can tell them that for any rational function $p(x)/q(x)$, in reduced form, the radius of convergence of this function at a number $a$ (on the real line) is precisely the distance from $a$ to the nearest zero of the denominator, even if that nearest zero is not real. In other words, to really understand the radius of convergence in a general sense you have to work over the complex numbers. (Yes, there are subtle distinctions between smoothness and analyticity which are relevant here, but you don't have to discuss that to get across the idea.)
Similarly, the function $x/(e^x-1)$ on the real line is smooth, but its power series at $x = 0$ has finite radius of convergence $2\pi$ (not sure if you can make this numerically apparent). Again, on the real line the reason for this is not visible, but in the complex plane there is a good explanation. If someone is not happy about that function looking initially problematic at $x = 0$, where its value is $1$, use $x/(e^x+1)$ instead and the radius of convergence of its power series at $x = 0$ has radius of convergence $\pi$ rather than $2\pi$.
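One way to make the radius of convergence numerically apparent (our sketch, not part of the answer) is to watch the partial sums of the Maclaurin series of 1/(1+x²) just inside and just outside |x| = 1:

```python
def partial_sum(x, terms):
    # Partial sums of the Maclaurin series of 1/(1+x^2):
    # sum_{k=0}^{terms-1} (-1)^k x^(2k).
    s, term = 0.0, 1.0
    for _ in range(terms):
        s += term
        term *= -x * x
    return s
```

At x = 0.5 the partial sums settle rapidly on 1/(1+0.25) = 0.8, while at x = 1.1 they oscillate with exploding magnitude, even though the function itself is perfectly smooth there.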
edited May 4, 2024 · community wiki · KConrad
Thanks, Keith! That's a nice point which I always mention for real analysis students as well. The structure of singularities of a linear differential equation (under some mild conditions) fully determines the convergence of the series solving the DE. The generating series for Bernoulli numbers does not produce sufficiently good approximations to $2\pi$, but it's just beautiful by itself.
– Wadim Zudilin, Jul 2, 2010 at 5:14
I wouldn't say it demystifies anything.
– Incnis Mrsi, Apr 14, 2021 at 4:24
@IncnisMrsi my answer was posted almost 11 years ago, so I had to remind myself what the OP's original request was. It was not about the title, but about the following, taken from the OP's post: "to learn elementary examples which may convince students in usefulness of complex numbers and functions in complex variable." Judging by that request, I think everything is in order.
– KConrad, Apr 14, 2021 at 5:18
My guess is that the main example you cite is the sort of thing that is actually appealing to the already-convinced, i.e. mathematicians who are reading this question, and yet doesn't necessarily appeal to someone who feels strong and comfortable with some real analysis but doesn't want to do complex analysis. i.e. it is not at all "strange" from the point of view of real analysis as to why the series doesn't converge: The tests that a student has learned for convergence of basic series will show it easily.
– SBK, Jun 11, 2024 at 14:41
@SBK I of course am already convinced, but I still think an appropriately thoughtful student might wonder why the real analysis convergence tests are saying the radius of convergence is a value $R$ when nothing appears to be going wrong at $x = \pm R$ (a contrast to the series for $1/(1-x)$ and $\log(1+x)$ at $x = 0$, where $R = 1$).
– KConrad, Jun 11, 2024 at 23:57
You can solve the differential equation $y''+y=0$ using complex numbers. Just write $$(\partial^2 + 1) y = (\partial +i)(\partial -i) y$$ and you are now dealing with two order-one differential equations that are easily solved $$(\partial +i) z =0,\qquad (\partial -i)y =z.$$ The multivariate case is a bit harder and uses quaternions or Clifford algebras. This was done by Dirac for the Schrödinger equation ($-\Delta \psi = i\partial_t \psi$), and that led him to the prediction of the existence of antiparticles (and to the Nobel prize).
edited May 5, 2024 · community wiki · coudy
Thanks, coudy! This is really nice (Pietro's answer only hints a possible use of 2nd order DEs).
– Wadim Zudilin, Jul 1, 2010 at 12:24
I was taught a similar trick as a second-year honours undergraduate (same target that the OP suggests), and I don't find it helpful at all. The problem is that you'll probably need to spend a full lecture trying to formalize it and explain why it works, otherwise it just looks like a magical unmotivated trick that abuses notation.
– Federico Poloni, Jan 27, 2014 at 11:30
@Federico: why do you say it abuses notation? That would be true in the higher-dimensional case without introducing Clifford algebras. But here it's perfectly valid. You just need to know what complex numbers are.
– Marek, Jan 27, 2014 at 15:09
How is Dirac theory based on the (parabolic) Schrödinger equation? We know that a solution to the Dirac equation is a solution to the Klein-Gordon equation, which is hyperbolic.
– Incnis Mrsi, Apr 14, 2021 at 4:29
@Marek I see only now this old comment but I'll answer anyway just in case. The 'abuse of notation' is that at that point students do not know that $\partial$ belongs to an operator algebra and that it makes sense to factor out $\partial^2 y +y = (\partial^2 +1)y$, let alone decompose it as $(\partial + i)(\partial -i)$. The fact that the derivative notation $\partial^2 y$ reads like a square followed by a product by $y$ at that point is just notation, so this seems to belong to the same category as simplifying out $dx$ to change variables in integrals.
– Federico Poloni, Apr 14, 2021 at 6:43
Students usually find the connection of trigonometric identities like $\sin(a+b)=\sin a\cos b+\cos a\sin b$ to multiplication of complex numbers striking.
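For instance, both addition formulas fall out of a single complex multiplication; a small Python sketch (helper name ours, not from the answer):

```python
import math

def sin_cos_of_sum(a, b):
    # (cos a + i sin a)(cos b + i sin b) = cos(a+b) + i sin(a+b),
    # so both addition formulas are read off one complex product.
    z = complex(math.cos(a), math.sin(a)) * complex(math.cos(b), math.sin(b))
    return z.imag, z.real  # sin(a+b), cos(a+b)
```

Expanding that product symbolically, the imaginary part is sin a cos b + cos a sin b and the real part is cos a cos b - sin a sin b.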
edited Jul 1, 2010 · community wiki
Not sure about the students, but I do. :-)
– Wadim Zudilin, Jul 1, 2010 at 12:21
Well, I am not claiming this about all students, but whenever I mentioned this (or same things in terms of rotation matrices) in class, I always get excited feedback at least from some of them.
– Yuri Bakhtin, Jul 1, 2010 at 12:24
This is an excellent suggestion. I can never remember these identities off the top of my head. Whenever I need one of them, the simplest way (faster than googling) is to read them off from $(a+ib)(c+id)=(ac-bd) + i(ad+bc)$.
– alex, Jul 1, 2010 at 20:35
When I first started teaching calculus in the US, I was surprised that many students didn't remember addition formulas for trig functions. As the years went by, it's gotten worse: now the whole idea of using an identity like that to solve a problem is alien to them, e.g. even if they may look it up doing the homework, they "get stuck" on the problem and "don't get it". What is there to blame: calculators? standard tests that neglect it? teachers who never understood it themselves? Anyway, it's a very bad omen.
– Victor Protsak, Jul 2, 2010 at 1:43
@Victor: It can be worse... When I taught Calc I at U of Toronto to engineering students, I was approached by some students who claimed they had heard words "sine" and "cosine" but were not quite sure what they meant.
– Yuri Bakhtin, Jul 2, 2010 at 8:51
From "Birds and Frogs" by Freeman Dyson [Notices of Amer. Math. Soc. 56 (2009) 212--223]:
One of the most profound jokes of nature is the square root of minus one that the physicist Erwin Schrödinger put into his wave equation when he invented wave mechanics in 1926. Schrödinger was a bird who started from the idea of unifying mechanics with optics. A hundred years earlier, Hamilton had unified classical mechanics with ray optics, using the same mathematics to describe optical rays and classical particle trajectories. Schrödinger's idea was to extend this unification to wave optics and wave mechanics. Wave optics already existed, but wave mechanics did not. Schrödinger had to invent wave mechanics to complete the unification. Starting from wave optics as a model, he wrote down a differential equation for a mechanical particle, but the equation made no sense. The equation looked like the equation of conduction of heat in a continuous medium. Heat conduction has no visible relevance to particle mechanics. Schrödinger's idea seemed to be going nowhere. But then came the surprise. Schrödinger put the square root of minus one into the equation, and suddenly it made sense. Suddenly it became a wave equation instead of a heat conduction equation. And Schrödinger found to his delight that the equation has solutions corresponding to the quantized orbits in the Bohr model of the atom. It turns out that the Schrödinger equation describes correctly everything we know about the behavior of atoms. It is the basis of all of chemistry and most of physics. And that square root of minus one means that nature works with complex numbers and not with real numbers. This discovery came as a complete surprise, to Schrödinger as well as to everybody else. According to Schrödinger, his fourteen-year-old girl friend Itha Junger said to him at the time, "Hey, you never even thought when you began that so much sensible stuff would come out of it."
All through the nineteenth century, mathematicians from Abel to Riemann and Weierstrass had been creating a magnificent theory of functions of complex variables. They had discovered that the theory of functions became far deeper and more powerful when it was extended from real to complex numbers. But they always thought of complex numbers as an artificial construction, invented by human mathematicians as a useful and elegant abstraction from real life. It never entered their heads that this artificial number system that they had invented was in fact the ground on which atoms move. They never imagined that nature had got there first.
answered Jul 9, 2010 · community wiki · Wadim Zudilin
As discussed at physics.stackexchange.com/questions/428033/…, putting an imaginary diffusion coefficient into the heat equation changes the equation to become invariant against time reversal $t \rightarrow -t$, like a wave equation. This is what Dyson means by "it suddenly made sense".
– user4503, Oct 7, 2018 at 19:57
If the students have had a first course in differential equations, tell them to solve the system
$$x'(t) = -y(t)$$ $$y'(t) = x(t).$$
This is the equation of motion for a particle whose velocity vector is always perpendicular to its displacement. Explain why this is the same thing as
$$(x(t) + iy(t))' = i(x(t) + iy(t))$$
hence that, with the right initial conditions, the solution is
$$x(t) + iy(t) = e^{it}.$$
On the other hand, a particle whose velocity vector is always perpendicular to its displacement travels in a circle. Hence, again with the right initial conditions, $x(t) = \cos t, y(t) = \sin t$. (At this point you might reiterate that complex numbers are real $2 \times 2$ matrices, assuming they have seen this method for solving systems of differential equations.)
answered Jul 1, 2010 · community wiki · Qiaochu Yuan
Thanks, Qiaochu! They have some background in DEs, and this is a very good way to get DEs, trig identities and matrices at the same time.
– Wadim Zudilin, Jul 2, 2010 at 5:43
I just used basically this in a first course in differential equations to prove Euler's formula for them. One of us thought it was really cool, anyway...
– Ryan Reich, Mar 22, 2013 at 14:10
Here are two simple uses of complex numbers that I use to try to convince students that complex numbers are "cool" and worth learning.
(Number Theory) Use complex numbers to derive Brahmagupta's identity expressing $(a^2+b^2)(c^2+d^2)$ as the sum of two squares, for integers $a,b,c,d$.
(Euclidean geometry) Use complex numbers to explain Ptolemy's Theorem. For a cyclic quadrilateral with vertices $A,B,C,D$ we have $$\overline{AC}\cdot \overline{BD}=\overline{AB}\cdot \overline{CD} +\overline{BC}\cdot \overline{AD}$$
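As a quick aside (our sketch), Brahmagupta's identity is exactly the statement |zw|² = |z|²|w|² for Gaussian integers z = a + bi, w = c + di, and can be checked directly:

```python
def product_as_sum_of_two_squares(a, b, c, d):
    # Brahmagupta's identity read off from |zw|^2 = |z|^2 |w|^2 with
    # z = a + bi, w = c + di: (a^2+b^2)(c^2+d^2) = (ac-bd)^2 + (ad+bc)^2.
    z = complex(a, b) * complex(c, d)
    return int(z.real) ** 2 + int(z.imag) ** 2
```

For example, (1² + 2²)(3² + 4²) = 125 = 5² + 10², with 5 and 10 coming from (1+2i)(3+4i) = -5 + 10i up to sign.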
edited Jul 1, 2010 · community wiki
And even more amazingly, one can completely solve the diophantine equation $x^2+y^2=z^n$ for any $n$ as follows: $$x+yi=(a+bi)^n, \ z=a^2+b^2.$$ I learned this from a popular math book while in elementary school, many years before studying calculus.
– Victor Protsak, Jul 2, 2010 at 1:21
@Byron: Thanks! 2 examples in one answer: I can vote only once. :-( @Victor: I am indeed amazed! This elementary knowledge is new to me. I probably spent too much on complex DEs...
– Wadim Zudilin, Jul 2, 2010 at 5:21
How does the vector calculus identity explain complex numbers? It is about an inner product space, true in every dimension (and not necessary for a positive-definite scalar product). There is dot product in every $\mathbb{R}^n, n\ge 1$, but only for $n = 1, 2, 4$ multiplication exists (and only for $n = 1, 2$ is it commutative). Downvote.
– Incnis Mrsi, Apr 14, 2021 at 4:49
One cannot over-emphasize that passing to complex numbers often permits a great simplification by linearizing what would otherwise be more complex nonlinear phenomena. One example familiar to any calculus student is the fact that integration of rational functions is much simpler over $\mathbb C$ (vs. $\mathbb R$) since partial fraction decompositions involve at most linear (vs quadratic) polynomials in the denominator. Similarly one reduces higher-order constant coefficient differential and difference equations to linear (first-order) equations by factoring the linear operators over $\mathbb C$. More generally one might argue that such simplification by linearization was at the heart of the development of abstract algebra. Namely, Dedekind, by abstracting out the essential linear structures (ideals and modules) in number theory, greatly simplified the prior nonlinear theory based on quadratic forms. This enabled him to exploit to the hilt the power of linear algebra. Examples abound of the revolutionary power that this brought to number theory and algebra - e.g. for one little-known gem see my recent post explaining how Dedekind's notion of conductor ideal beautifully encapsulates the essence of elementary irrationality proofs of n'th roots.
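To make the partial-fraction point concrete (our sketch, not from the answer): over $\mathbb C$ the denominator $1+x^2$ splits into linear factors, giving $1/(1+x^2) = \frac{1}{2i}\bigl(\frac{1}{x-i} - \frac{1}{x+i}\bigr)$, which can be verified numerically:

```python
def f(x):
    # 1/(1 + x^2) computed directly.
    return 1 / (1 + x * x)

def f_partial(x):
    # Same function via the complex partial fraction decomposition
    # 1/(1+x^2) = (1/(2i)) * (1/(x-i) - 1/(x+i)).
    return ((1 / (x - 1j) - 1 / (x + 1j)) / (2 * 1j)).real
```

Antidifferentiating the two linear terms is immediate, which is the simplification the answer describes.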
edited Jun 15, 2020 · community wiki · Bill Dubuque
If you really want to "demystify" complex numbers, I'd suggest teaching what complex multiplication looks like with the following picture, as opposed to a matrix representation:
If you want to visualize the product "z w", start with '0' and 'w' in the complex plane, then make a new complex plane where '0' sits above '0' and '1' sits above 'w'. If you look for 'z' up above, you see that 'z' sits above something you name 'z w'. You could teach this picture for just the real numbers or integers first -- the idea of using the rest of the points of the plane to do the same thing is a natural extension.
You can use this picture to visually "demystify" a lot of things:
Why is a negative times a negative a positive? --- I know some people who lost hope in understanding math as soon as they were told this fact
$i^2 = -1$
$(zw)t = z(wt)$ --- I think this is a better explanation than a matrix representation as to why the product is associative
$|zw| = |z|\,|w|$
$(z + w)v = zv + wv$
The Pythagorean Theorem: draw $(1-it)(1+it) = 1 + t^2$, etc.
One thing that's not so easy to see this way is the commutativity (for good reasons).
After everyone has a grasp on how complex multiplication looks, you can get into the differential equation: $\frac{dz}{dt} = i z , z(0) = 1$ which Qiaochu noted travels counterclockwise in a unit circle at unit speed. You can use it to give a good definition for sine and cosine -- in particular, you get to define $\pi$ as the smallest positive solution to $e^{i \pi} = -1$. It's then physically obvious (as long as you understand the multiplication) that $e^{i(x+y)} = e^{ix} e^{iy}$, and your students get to actually understand all those hard/impossible to remember facts about trig functions (like angle addition and derivatives) that they were forced to memorize earlier in their lives. It may also be fun to discuss how the picture for $(1 + \frac{z}{n})^n$ turns into a picture of that differential equation in the "compound interest" limit as $n \to \infty$; doing so provides a bridge to power series, and gives an opportunity to understand the basic properties of the real exponential function more intuitively as well.
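The differential equation $\frac{dz}{dt} = iz$, $z(0)=1$, is easy to check numerically; a minimal Python sketch (the step count is an arbitrary choice of mine). Each Euler step multiplies $z$ by $(1 + i\,\Delta t)$, a nudge at a right angle to the position vector, so the orbit hugs the unit circle:

```python
import cmath

# Euler integration of dz/dt = i*z, z(0) = 1.  Each step multiplies z by
# (1 + i*dt): a nudge at a right angle to the position vector, so the
# trajectory circles the origin at (roughly) unit speed.
def euler_orbit(t, steps):
    z = 1 + 0j
    dt = t / steps
    for _ in range(steps):
        z = z + 1j * z * dt
    return z

approx = euler_orbit(cmath.pi, 200_000)   # half a revolution
exact = cmath.exp(1j * cmath.pi)          # i.e. -1
print(abs(approx - exact), abs(approx))   # both differences are tiny
```

After $t=\pi$ the point has made half a revolution and lands (approximately) on $-1$, which is exactly the content of $e^{i\pi} = -1$.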
But this stuff is less demystifying complex numbers and more... demystifying other stuff using complex numbers.
Here's a link to some Feynman lectures on Quantum Electrodynamics (somehow prepared for a general audience) if you really need some flat-out real-world complex numbers.
answered Jul 2, 2010 at 2:50 by Phil Isett (community wiki)
One of my favourite elementary applications of complex analysis is the evaluation of infinite sums of the form $$\sum_{n\geq 0} \frac{p(n)}{q(n)}$$ where $p,q$ are polynomials and $\deg q > 1 + \deg p$, by using residues.
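For a concrete instance, here is a quick numerical sanity check in Python; the particular series $\sum_{n\ge 0} 1/(n^2+1)$ and its residue-derived closed form $(1+\pi\coth\pi)/2$ are a standard textbook example of my own choosing, not taken from the answer above:

```python
import math

# Residue calculus gives  sum_{n>=0} 1/(n^2+1) = (1 + pi*coth(pi)) / 2.
closed_form = (1 + math.pi / math.tanh(math.pi)) / 2

# Compare with a partial sum plus a crude integral estimate of the tail
# (the tail sum_{n>=N} 1/(n^2+1) is about 1/N).
N = 200_000
partial = sum(1.0 / (n * n + 1) for n in range(N)) + 1.0 / N

print(closed_form, partial)   # the two values agree to many digits
```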
answered Jul 1, 2010 at 9:25 by José Figueroa-O'Farrill (community wiki)
Thanks, José! This was not my list (even though I use this very often in serious problems). I only wonder whether it is possible to start a course with such an example. – Wadim Zudilin, commented Jul 1, 2010 at 10:01
They're useful just for doing ordinary geometry when programming.
A common pattern I have seen in a great many computer programs is to start with a bunch of numbers that are really ratios of distances. These numbers get converted to angles with inverse trig functions. Then some simple functions are applied to the angles, and the trig functions are used on the results.
Trig and inverse trig functions are expensive to compute on a computer. In high performance code you want to eliminate them if possible. Quite often, for the above case, you can eliminate the trig functions. For example $\cos(2\cos^{-1} x) = 2x^2-1$ (for $x$ in a suitable range) but the version on the right runs much faster.
The catch is remembering all those trig formulae. It'd be nice to make the compiler do all the work. A solution is to use complex numbers. Instead of storing $\theta$ we store $(\cos\theta,\sin\theta)$. We can add angles by using complex multiplication, multiply angles by integers and rational numbers using powers and roots and so on. As long as you don't actually need the numerical value of the angle in radians you need never use trig functions. Obviously there comes a point where the work of doing operations on complex numbers may outweigh the saving of avoiding trig. But often in real code the complex number route is faster.
(Of course it's analogous to using quaternions for rotations in 3D. I guess it's somewhat in the spirit of rational trigonometry except I think it's easier to work with complex numbers.)
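A small Python sketch of the trick described above (the function names are my own): angles are stored as unit complex numbers, so angle addition becomes multiplication and angle halving becomes a principal square root, with no trig calls after the setup.

```python
import cmath, math

# Represent the angle theta by the unit complex number cos(theta) + i*sin(theta).
def from_angle(theta):
    return cmath.exp(1j * theta)

# Adding angles = multiplying representatives:
sum_via_product = from_angle(0.7) * from_angle(0.4)   # represents the angle 1.1

# Doubling an angle = squaring.  If x = cos(theta), the real part of the
# square is cos(2*theta), i.e. the identity cos(2*acos(x)) = 2x^2 - 1:
x = 0.3
z = complex(x, math.sqrt(1 - x * x))
double_cos = (z * z).real                             # equals 2*x*x - 1

# Halving an angle = the principal square root (for angles in (-pi, pi)):
half = cmath.sqrt(from_angle(1.0))                    # represents the angle 0.5
print(sum_via_product, double_cos, half)
```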
answered Jul 2, 2010 at 4:23 by Dan Piponi (community wiki)
Thanks! Your answer and Victor's comments reminded me about a related deduction in number theory (I'll try to add one more example to the original question). – Wadim Zudilin, commented Jul 2, 2010 at 8:46
This answer is an expansion of the answer of Yuri Bakhtin.
Here is a kind of mime show.
Silently write the formulas for $\cos(2x)$ and $\sin(2x)$ lined up on the board, something like this: $$\cos(2x) = \cos^2(x) \hphantom{+ 2 \cos(x) \sin(x)} - \sin^2(x) $$ $$\sin(2x) = \hphantom{\cos^2(x)} + 2 \cos(x) \sin(x) \hphantom{- \sin^2(x)} $$
Do the same for the formulas for $\cos(3x)$ and $\sin(3x)$, and however far you want to go: $$\cos(3x) = \cos^3(x) \hphantom{+ 3 \cos^2(x) \sin(x)} - 3 \cos(x) \sin^2(x) \hphantom{- \sin^3(x)} $$ $$\sin(3x) = \hphantom{\cos^3(x)} + 3 \cos^2(x) \sin(x) \hphantom{- 3 \cos(x) \sin^2(x)} - \sin^3(x) $$
Maybe then let out a loud noise like "hmmmmmmmmm... I recognize those numbers..."
Then, on a parallel board, write out Pascal's triangle, and parallel to that write the application of Pascal's triangle to the binomial expansions $(x+y)^n$. Make some more puzzling sounds regarding those pesky plus and minus signs.
Then maybe it's time to actually say something: "Eureka! We can tie this all together by use of an imaginary number $i = \sqrt{-1}$". Then write out the binomial expansion of $$(\cos(x) + i\,\sin(x))^n $$ break it into its real and imaginary parts, and demonstrate equality with $$\cos(nx) + i\, \sin(nx). $$
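The punchline of the mime show can be verified in a few lines of Python: expand $(\cos x + i\sin x)^3$ by the binomial theorem, sort the terms by powers of $i$, and compare with $\cos(3x)$ and $\sin(3x)$:

```python
import math

x = 0.93
c, s = math.cos(x), math.sin(x)

# Binomial expansion of (c + i*s)^3: even powers of i land in the real
# part, odd powers in the imaginary part (with signs from i^2 = -1).
real_part = c**3 - 3 * c * s**2
imag_part = 3 * c**2 * s - s**3

# Both differences are ~ 0:
print(real_part - math.cos(3 * x), imag_part - math.sin(3 * x))
```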
edited Aug 7, 2013 at 23:24 by Lee Mosher (community wiki, 2 revs)
Several motivating physical applications are listed on Wikipedia.
Why do we need to study numbers which do not belong to the real world?
You may want to stoke the students' imagination by disseminating the deeper truth: that the world is neither real, complex nor p-adic (these are just completions of $\mathbb Q$). Here is a nice quote by Yuri Manin picked from here.
On the fundamental level our world is neither real nor p-adic; it is adelic. For some reasons, reflecting the physical nature of our kind of living matter (e.g. the fact that we are built of massive particles), we tend to project the adelic picture onto its real side. We can equally well spiritually project it upon its non-Archimedean side and calculate most important things arithmetically. The relation between "real" and "arithmetical" pictures of the world is that of complementarity, like the relation between conjugate observables in quantum mechanics. (Y. Manin, in Conformal Invariance and String Theory, Academic Press, 1989, 293-303)
answered Jul 1, 2010 at 12:52 by SandeepJ (community wiki)
Thanks for the tip! I'd better not cite Yuri Ivanovich to my electrical engineers; this will hardly encourage them to do complex analysis. :-) – Wadim Zudilin, commented Jul 1, 2010 at 13:24
There is some hype about the alleged physical relevance of p-adic numbers (in fact, I know in person several researchers who built their careers upon it), but as for $\mathbb{Q}_p$ these are either overreaching generalizations or barren conjectures, whereas complex numbers are justified by quantum mechanics alone. – Incnis Mrsi, commented Apr 14, 2021 at 5:04
If they have a suitable background in linear algebra, I would not omit the interpretation of complex numbers in terms of conformal matrices of order 2 (with nonnegative determinant), translating all operations on complex numbers (sum, product, conjugate, modulus, inverse) into the context of matrices, with special emphasis on their multiplicative action on the plane (in particular, "real" gives "homothety" and "modulus 1" gives "rotation").
The complex exponential, defined initially as limit of $(1+z/n)^n$, should be a good application of the above geometrical ideas. In particular, for $z=it$, one can give a nice interpretation of the (too often covered with mystery) equation $e^{i\pi}=-1$ in terms of the length of the curve $e^{it}$ (defined as classical total variation).
A brief discussion on (scalar) linear ordinary differential equations of order 2, with constant coefficients, also provides a good motivation (and with some historical truth).
Related to the preceding point, and especially because they are from engineering, it should be worth recalling all the useful complex formalism used in Electricity.
Not on the side of "real world" interpretation, but rather on the side of "useful abstraction", a brief account of the history of the third-degree algebraic equation, with the embarrassing "casus irreducibilis" (three real solutions, and the solution formula gives none if taken in terms of "real" radicals!), should be very instructive. Here is also the source of such terms as "imaginary".
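The limit definition of the exponential in the second point above is easy to demonstrate numerically; a sketch in Python (the choice $z = i\pi$ and the values of $n$ are mine):

```python
# exp(z) as the limit of (1 + z/n)^n, tested at z = i*pi, where the limit
# should be exp(i*pi) = -1.
z = 1j * 3.141592653589793

for n in (10, 1_000, 100_000):
    approx = (1 + z / n) ** n
    print(n, approx)   # marches toward -1 as n grows
```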
edited Jul 1, 2010 at 20:09 (community wiki, 3 revisions)
Thanks a lot, Pietro, for so many fruitful suggestions! Except for the point on $(1+z/n)^n$ (unfortunately, this is not the usual way to define the exponential), I can use them all. – Wadim Zudilin, commented Jul 1, 2010 at 11:52
In fact the equivalence with the definition of $\exp(z)$ by the exponential series may be a nice exercise about dominated convergence for series. BTW, I realized only now that you asked for one suggestion per answer... sorry :-) – Pietro Majer, commented Jul 1, 2010 at 12:49
No worries, Pietro, this is a standard requirement for community wiki questions. Yes, it's a nice exercise, but most probably not for the level I get. (Last year I taught ODEs and, in particular, the linear systems where I needed to compute the exponential of matrices. The limit definition was mentioned as an equivalent form of the series, and it was needed for proving $e^{\operatorname{tr}(A)}=\operatorname{det}(e^A)$. But that wasn't really accepted... :-( ) – Wadim Zudilin, commented Jul 1, 2010 at 13:00
@Wadim: The $(1+z/n)^n$ definition of the exponential is exactly what you get by applying Euler's method to the defining differential equation of the exponential function, if you travel along the straight line from 0 to $z$ in the domain and use $n$ equal partitions. – Steven Gubkin, commented Aug 27, 2012 at 13:24
This is a specific example where complex numbers aid a task in elementary real analysis; I haven't thought about the extent to which it generalizes.
In my first year, I was given the task of formally proving that the Taylor series for arctan is $$ \arctan(x) = x - \frac{x^3}{3} + \frac{x^5}{5} - \ldots, $$ where "formally" meant not simply integrating the series for $1/(1+x^2)$ termwise, since we hadn't yet seen any theorems that said you could do that. We had, however, seen Taylor's theorem.
Hence the problem was to determine the values of all derivatives of $f(x)=\arctan(x)$, or of $f'(x)=1/(1+x^2)$, at $x=0$. However, it's not so easy to find a closed-form expression for the $n$-th derivative of $1/(1+x^2)$, unless you write it as $$ f'(x) = \frac{1}{2i} \left( \frac{1}{x-i}-\frac{1}{x+i} \right), $$ which then immediately yields $$ f^{(n)}(x) = \frac{(-1)^{n-1} (n-1)!}{2i} \cdot \left( \frac{1}{(x-i)^n}-\frac{1}{(x+i)^n} \right) $$ for $n>0$, which then gives the answers $f^{(2n)}(0)=0$ and $f^{(2n+1)}(0)=(-1)^n \cdot (2n)!$. Combining this with Taylor's theorem gives the desired series.
I still think this is pretty neat. There really isn't any obvious way to cut out the complex numbers and still have as painless a calculation as the one above.
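The closed-form derivative formula above can be sanity-checked in Python by evaluating it at $x=0$ and comparing with the claimed pattern $f^{(2n)}(0)=0$, $f^{(2n+1)}(0)=(-1)^n (2n)!$:

```python
import math

# n-th derivative of arctan at 0, straight from the partial-fraction
# formula over C given above.
def f_deriv_0(n):
    return ((-1) ** (n - 1) * math.factorial(n - 1) / 2j
            * (1 / (-1j) ** n - 1 / 1j ** n))

for n in range(1, 8):
    print(n, f_deriv_0(n))   # real parts: 1, 0, -2, 0, 24, 0, -720
```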
answered Aug 28, 2016 at 12:10 by R.P. (community wiki)
In answer to
"Why do we need to study numbers which do not belong to the real world?"
you might simply state that quantum mechanics tells us that complex numbers arise naturally in the correct description of probability theory as it occurs in our (quantum) universe.
I think a good explanation of this is in Chapter 3 of the third volume of the Feynman lectures of physics, although I don't have a copy handy to check. (In particular, similar to probability theory with real numbers, the complex amplitude of one of two independent events A or B occurring is just the sum of the amplitude of A and the amplitude of B. Furthermore, the complex amplitude of A followed by B is just the product of the amplitudes. After all intermediate calculations one just takes the magnitude of the complex number squared to get the usual (real number) probability.)
answered Jul 1, 2010 at 22:06 by Jon (community wiki)
Perhaps you are referring to Feynman's book QED? – S. Carnahan ♦, commented Jul 2, 2010 at 4:41
Yes, I think it was there. (The "strange theory of light and matter" book.) – Jon, commented Jul 2, 2010 at 16:19
This is a conceptual mess. Basic probability theory absolutely doesn't need complex numbers, but applied to the real world it is a simplification just like any other theory, QM included. Surely we can (and do) reduce complex amplitudes to dumb probability measures using $|\cdot|^2$, but it has nothing to do with a correct description of probability theory. It is indeed about real-world randomness, which extends far beyond Kolmogorov-style probability. Can anybody rewrite, please? – Incnis Mrsi, commented Apr 14, 2021 at 5:22
I never took a precalculus class because every identity I've ever needed involving sines and cosines I could derive by evaluating a complex exponential in two different ways. Perhaps you could tell them that if they ever forget a trig identity, they can rederive it using this method?
answered Jul 1, 2010 at 16:05 by Dylan Wilson (community wiki)
I especially like the complex derivation of $\cos^n x$ and $\sin^n x$ in terms of trig functions of multiple angles, which is very useful if you need to integrate them. – Victor Protsak, commented Jul 2, 2010 at 1:32
@Dylan, thanks! You expand Yuri's answer. @Victor: aren't these standard problems for 1st year in algebra? ;-) – Wadim Zudilin, commented Jul 2, 2010 at 5:38
Tristan Needham's book Visual Complex Analysis is full of these sorts of gems. One of my favorites is the proof using complex numbers that if you put squares on the sides of a quadrilateral, the lines connecting opposite centers will be perpendicular and of the same length. After proving this with complex numbers, he outlines a proof without them that is much longer.
The relevant pages are on Google books:
answered Jul 2, 2010 at 15:12 by Eric O. Korman (community wiki)
The same as for mathoverflow.net/a/30185/56921: vector calculus is not complex calculus! Complex numbers certainly are a good example of an inner product space, and to teach vectors you can use them as an example, but complex multiplication and (especially) division go beyond the vector-based intuition. – Incnis Mrsi, commented Apr 14, 2021 at 5:38
How about how the holomorphicity of a function $f(z)$, $z=x+yi$, relates to, e.g., the curl of a vector field on $\mathbb{R}^2$? This relates nicely to why we can solve problems in two-dimensional electromagnetism (or 3d with the right symmetries) very nicely using "conformal methods." It would be very easy to start a course with something like this to motivate complex analytic methods.
answered Jul 1, 2010 at 10:21 by jeremy (community wiki)
Jeremy, can you expand your answer or provide a reference where your example is worked out in detail? – Wadim Zudilin, commented Jul 1, 2010 at 10:30
This is in a number of advanced math-methods-for-physics and graduate electromagnetism texts, but one offhand I know it's in is "Electrodynamics of Continuous Media" by Landau and Lifshitz, chapter 1 section 3, on methods of solving electrostatic problems. There are other books out there with more involved and more sophisticated treatments, but offhand I don't know any titles. If you search amazon's "search inside this book" for "the method of conformal mapping" you can find part of the discussion. But the basics of it are elementary enough that that book should be sufficient. – jeremy, commented Jul 1, 2010 at 11:20
Thanks, Jeremy! I'll definitely do the search; the magic phrase "the method of conformal mapping" is really important here. – Wadim Zudilin, commented Jul 1, 2010 at 11:45
I think most older Russian textbooks on complex analysis (e.g. Lavrentiev and Shabat or Markushevich) had examples from 2D hydrodynamics (the Euler–d'Alembert equations $\iff$ the Cauchy–Riemann equations). Also, of course, the Zhukovsky function and the airfoil profile. They serve more as applications of theory than motivations, since nontrivial mathematical work is required to get there. – Victor Protsak, commented Jul 2, 2010 at 2:04
Yes, those examples are great! I was thinking about those examples, too, when I said "conformal methods," but they are a little less basic than the E&M example. There are many more examples, too, though, such as classical gravity, or just about anything that can be described with a potential. They would make excellent topics to visit after developing some of the formalism more carefully, since they can lead to a lot of intuition about why things are constructed like they are, and why they're useful! – jeremy, commented Jul 2, 2010 at 3:02
From the perspective of complex analysis, the theory of Fourier series has a very natural explanation. I take it that the students had seen Fourier series first, of course. I had mentioned this elsewhere too. I hope the students also know about Taylor theorem and Taylor series. Then one could talk also of the Laurent series in concrete terms, and argue that the Fourier series is studied most naturally in this setting.
First, instead of cos and sin, define the Fourier series using complex exponential. Then, let $f(z)$ be a complex analytic function in the complex plane, with period $1$.
Then write the substitution $q = e^{2\pi i z}$. This way the analytic function $f$ actually becomes a meromorphic function of $q$ around zero, and $z = i \infty$ corresponds to $q = 0$. The Fourier expansion of $f(z)$ is then nothing but the Laurent expansion of $f(q)$ at $q = 0$.
Thus we have made use of a very natural function in complex analysis, the exponential function, to see the periodic function in another domain. And in that domain, the Fourier expansion is nothing but the Laurent expansion, which is a most natural thing to consider in complex analysis.
I am an electrical engineer; I have an idea what they all study, so I can safely override any objections that this won't be accessible to electrical engineers. Moreover, the above will reduce their surprise later in their studies when they study signal processing and wavelet analysis.
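The identification can be watched in action with a toy computation in Python (the test function and sample count are my own choices): for a trigonometric polynomial, numerically computed Fourier coefficients are exactly its Laurent coefficients in $q = e^{2\pi i z}$.

```python
import cmath

# f has period 1 and, in the variable q = exp(2*pi*i*z), equals q + 3/q^2,
# so its Laurent coefficients are c_1 = 1 and c_{-2} = 3.
def f(z):
    q = cmath.exp(2j * cmath.pi * z)
    return q + 3 / q**2

# Fourier coefficient c_n = integral over one period of f(z)*exp(-2*pi*i*n*z).
# An N-point Riemann sum is exact for trig polynomials of degree < N/2.
N = 64
def coeff(n):
    return sum(f(k / N) * cmath.exp(-2j * cmath.pi * n * k / N)
               for k in range(N)) / N

print(coeff(1), coeff(-2), coeff(0))   # ~ 1, 3, 0
```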
edited Apr 13, 2017 at 12:58 by Anweshi (community wiki, 2 revs)
This answer doesn't show how the complex numbers are useful, but I think it might demystify them for students. Most are probably already familiar with its content, but it might be useful to state it again. Since the question was asked two months ago and Professor Zudilin started teaching a month ago, it's likely this answer is also too late.
If they have already taken a class in abstract algebra, one can remind them of the basic theory of field extensions with emphasis on the example of $\mathbb C \cong \mathbb R[x]/(x^2+1).$
It seems that most introductions give complex numbers as a way of writing non-real roots of polynomials and go on to show that if multiplication and addition are defined a certain way, then we can work with them, that this is consistent with handling them like vectors in the plane, and that they are extremely useful in solving problems in various settings. This certainly clarifies how to use them and demonstrates how useful they are, but it still doesn't demystify them. A complex number still seems like a magical, ad hoc construction that we accept because it works. If I remember correctly, and as has probably already been discussed, this is why they were called imaginary numbers.
If introduced, after one has some experience with abstract algebra, as a field extension, one can see clearly that the complex numbers are not a contrivance that might eventually lead to trouble. Beginning students might be thinking this and consequently resist them, or need to take them on faith from their teachers, which might already be the case. Rather, one can see that they are the result of a natural operation: taking the quotient of a polynomial ring over a field by an ideal generated by an irreducible polynomial whose roots we are searching for.
Multiplication, addition, and its 2-dimensional vector space structure over the reals are then consequences of the quotient construction $\mathbb R[x]/(x^2+1).$ The root $\theta,$ which we can then relabel to $i,$ is also automatically consistent with familiar operations with polynomials, which are not ad hoc or magical. The students should also be able to see that the field extension $\mathbb C = \mathbb R(i)$ is only one example, although a special and important one, of many possible quotients of polynomial rings and maximal ideals, which should dispel ideas of absolute uniqueness and put it in an accessible context. Finally, if they think that complex numbers are imaginary, that should be corrected when they understand that they are one example of things naturally constructed from other things they are already familiar with and accept.
Reference: Dummit & Foote: Abstract Algebra, 13.1
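A tiny Python sketch of the quotient construction (names are mine): represent an element $a + bx$ of $\mathbb R[x]/(x^2+1)$ by the pair $(a,b)$, multiply as polynomials, reduce with $x^2 = -1$, and the complex multiplication rule falls out.

```python
# Multiplication in R[x]/(x^2 + 1): multiply polynomials, then use x^2 = -1.
def quot_mul(p, q):
    a, b = p   # a + b*x
    c, d = q   # c + d*x
    # (a + b x)(c + d x) = ac + (ad + bc) x + bd x^2,  and  x^2 = -1:
    return (a * c - b * d, a * d + b * c)

u = (2, 3)    # stands for 2 + 3x, i.e. 2 + 3i
v = (1, -4)   # stands for 1 - 4x, i.e. 1 - 4i
print(quot_mul(u, v))        # -> (14, -5)
print((2 + 3j) * (1 - 4j))   # -> (14-5j)
```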
edited Sep 5, 2010 at 19:46 (community wiki, 2 revisions)
Thanks for this point, Anthony! Although "my" students struggle with abstract algebra as well, I will incorporate these ideas in my next year's lectures (I should teach complex analysis again). – Wadim Zudilin, commented Oct 10, 2010 at 22:05
This is not exactly an answer to the question, but it is the simplest thing I know to help students appreciate complex numbers. (I got the idea somewhere else, but I forgot exactly where.)
It's something even much younger students can appreciate. Recall that on the real number line, multiplying a number by -1 "flips" it, that is, it rotates the point 180 degrees about the origin. Introduce the imaginary number line (perpendicular to the real number line) then introduce multiplication by i as a rotation by 90 degrees. I think most students would appreciate operations on complex numbers if they visualize them as movements of points on the complex plane.
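The rotation picture takes only a couple of lines of Python to demonstrate:

```python
# Multiplication by i rotates a point 90 degrees about the origin;
# doing it twice is multiplication by -1, the familiar 180-degree flip.
z = 3 + 1j                # the point (3, 1)
print(z * 1j)             # -> (-1+3j): the point rotated a quarter turn
print(z * 1j * 1j)        # -> (-3-1j): the point flipped through the origin
```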
answered Jan 20, 2011 at 14:57 by JRN (community wiki)
I now remember where I got the idea: mathoverflow.net/questions/47214/… – JRN, commented Feb 3, 2011 at 13:48
Another classic I haven't seen mentioned yet is the proof of the Machin formula $$ \frac{\pi}{4} = 4\arctan \frac{1}{5}-\arctan \frac{1}{239}. $$ I honestly don't know a proof of this that avoids complex numbers, but surely it can be nowhere near as simple as the elementary computation needed to prove the identity $$ 2+2i = \frac{(5+i)^4}{239+i}. $$ Taking $\arg$ on both sides yields Machin's formula.
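The elementary computation takes one line to check in Python:

```python
import cmath, math

# The identity behind Machin's formula:
lhs = (5 + 1j) ** 4 / (239 + 1j)
print(lhs)    # 2 + 2i, up to rounding

# Taking arguments: 4*arg(5 + i) - arg(239 + i) = arg(2 + 2i) = pi/4.
machin = 4 * cmath.phase(5 + 1j) - cmath.phase(239 + 1j)
print(machin, math.pi / 4)   # the two values agree
```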
answered Aug 28, 2016 at 12:47 by R.P. (community wiki)
It follows from the arctan summation formula: $\operatorname{arctan}(x) + \operatorname{arctan}(y) = \operatorname{arctan}\frac{x+y}{1-xy}$, which in turn can be derived from the sine angle addition formula. – S. Carnahan ♦, commented Aug 29, 2016 at 1:38
But surely $e^{i(x+y)}=e^{ix}e^{iy}$ is easier to remember than the addition formulae for sine and cosine. – Michael Renardy, commented Sep 28, 2017 at 15:51
A simpler formula like $\pi/4=2\arctan(1/3)+\arctan(1/7)$ can easily be shown without words: draw the integer-coordinate points $A:=(9,0)$, $B:=(9,3)$, $C:=(8,6)$, $D:=(7,7)$ and argue on the angles $\hat{AOB}$, $\hat{BOC}$, $\hat{COD}$, $\hat{AOD}$ by similarity of right triangles. An analogous picture shows Machin's formula (although one needs to start from $(625,0)$ to get all the required points with integer coordinates). Of course this is just the elementary-geometry version of the complex number computation. – Pietro Majer, commented Apr 23, 2019 at 19:49
Motivating complex analysis
The physics aspect of motivation should be the strongest for engineering students. No complex numbers, no quantum mechanics, no solid state physics, no lasers, no electrical or electronic engineering (starting with impedance), no radio, TV, acoustics, no good simple way of understanding the mechanical analogues of RLC circuits, resonance, etc., etc.
Then the "mystery" of it all. Complex numbers as the consequence of roots, square, cubic, etc., unfolding until one gets the complex plane, radii of convergence, poles of stability, all everyday engineering. Then the romance of it all, the "self-secret knowledge", discovered over hundreds of years, a new language which even helps our thinking in general. Then the wider view of, say, Smale/Hirsch on higher-dimensional differential equations, chaos, etc. They should see the point pretty quickly. This is a narrow door, almost accidentally discovered, through which we see and understand entire new realms, which have become our best current, albeit imperfect, descriptions of how to understand and manipulate a kind of "inner essence of what is" for practical human ends, i.e. engineering. (True, a little over the top, but then pedagogical and motivational.)
For them to say that they just want to learn a few computational tricks is a little like a student saying, "don't teach me about fire, just about lighting matches". It's up to them I suppose, but they will always be limited.
There might be some computer software engineer who needs a little more, but then I suppose there is also modern combinatorics. :-)
answered Jul 2, 2010 at 3:18 by sigoldberg1 (community wiki)
Thanks! I'll borrow some of your wording for my lecture notes. (If you don't object.) – Wadim Zudilin, commented Jul 2, 2010 at 8:54
Absolutely, sure. – sigoldberg1, commented Jul 2, 2010 at 15:43
Maybe artificial, but a nice example (I think) demonstrating genuine analytic continuation (NOT just the usual $\mathrm{Re}(e^{i \theta})$ method!). I don't know any reasonable way of doing this by real methods.
As a fun exercise, calculate $$ I(\omega) = \int_0^\infty e^{-x} \cos (\omega x) \frac{dx}{\sqrt{x}}, \qquad \omega \in \mathbb{R} $$ from the real part of $F(1+i \omega)$, where $$ F(k) = \int_0^\infty e^{-kx} \frac{dx}{\sqrt{x}}, \qquad \mathrm{Re}(k)>0 $$ (which is easily obtained for $k>0$ by a real substitution) and using analytic continuation to justify the same formula with $k=1+i \omega$.
You need care with square roots, branch cuts, etc.; but this can be avoided by considering $F(k)^2$, $I(\omega)^2$.
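Here is a numerical check (a Python sketch; the substitution $x=u^2$, the cutoff, and the sample count are my own choices). The real substitution gives $F(k)=\sqrt{\pi/k}$ for real $k>0$; the continuation claim is that the same formula at $k=1+i\omega$ has real part $I(\omega)$:

```python
import cmath, math

omega = 0.8

# Claimed value: I(omega) = Re F(1 + i*omega), with F(k) = sqrt(pi/k)
# continued analytically from real k > 0.
claimed = cmath.sqrt(cmath.pi / (1 + 1j * omega)).real

# Direct check: substituting x = u^2 makes the integrand smooth,
#   I(omega) = 2 * integral_0^inf exp(-u^2) * cos(omega*u^2) du,
# approximated here by a midpoint rule on [0, 10].
N, U = 100_000, 10.0
du = U / N
direct = 2 * du * sum(math.exp(-u * u) * math.cos(omega * u * u)
                      for u in ((i + 0.5) * du for i in range(N)))

print(claimed, direct)   # the two values agree closely
```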
Of course all the standard integrals provide endless fun examples! (But the books don't have many requiring genuine analytic continuation like this!)
answered Jul 8, 2010 at 0:52 by Zen Harper (community wiki)
I rather suspect analytic continuation is a conceptual step above what the class in question could cope with... – Yemon Choi, commented Jul 8, 2010 at 1:22
Thanks, Zen! Although this seems to be hard for the class, I'll keep your complexified example for my personal collection. – Wadim Zudilin, commented Jul 8, 2010 at 3:47
That is a very nice example. – Mariano Suárez-Álvarez, commented Jul 8, 2010 at 3:59
Thanks, and hi again to my friend Yemon! I agree that this is too hard for most classes, but I couldn't resist showing it! It needs a clear understanding of analytic continuation to do it, so it's only appropriate for courses which actually cover that rigorously (or at least semi-rigorously). – Zen Harper, commented Jul 10, 2010 at 12:33
Consider the function $f(x)=1/(1+x^2)$ on the real line. Using the geometric progression formula, you can expand $f(x)=1-x^2+x^4-\cdots$. This series converges for $|x|<1$ but diverges for all other $x$. Why is this so? The function looks nice and smooth everywhere on the real line.
This example is taken from the Introduction of the textbook by B. V. Shabat.
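A short Python experiment makes the point vivid: the partial sums behave perfectly for $|x|<1$ and blow up at $x=1.5$, even though $f$ itself is smooth there. The invisible obstruction is the pole of $f$ at $x=\pm i$, at distance $1$ from the origin in the complex plane.

```python
# Partial sums of the geometric series 1 - x^2 + x^4 - ... for 1/(1+x^2).
def partial_sum(x, terms):
    return sum((-1) ** k * x ** (2 * k) for k in range(terms))

for x in (0.5, 1.5):
    # At x = 0.5 the sums settle on 1/(1+x^2) = 0.8; at x = 1.5 they explode.
    print(x, 1 / (1 + x * x), [partial_sum(x, n) for n in (5, 10, 20)])
```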
answered Jul 28, 2010 at 8:30 by Alex Eremenko (community wiki)
Alex, thanks for your contribution. But it's already here: see Keith Conrad's answer above, mathoverflow.net/questions/30156/demystifying-complex-numbers/…. – Wadim Zudilin, commented Jul 28, 2010 at 8:57
Try this: compare the problems of finding the points equidistant in the plane from (-1, 0) and (1, 0), which is easy, with finding the points at twice the distance from (-1, 0) that they are from (1, 0). The idea that "real" concepts are the only ones of use in the "real world" is of course a fallacy. I suppose it is more than a century since electrical engineers admitted that complex numbers are useful.
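For the second locus, writing $|z+1| = 2|z-1|$ and completing the square gives the circle $|z - 5/3| = 4/3$ (an Apollonius circle); the complex notation keeps the algebra to one line. A quick Python check of the claimed center and radius:

```python
import cmath, math

# Points twice as far from -1 as from 1:  |z + 1| = 2|z - 1|.
# Claim: they form the circle with center 5/3 and radius 4/3.
center, radius = 5 / 3, 4 / 3
for k in range(8):
    z = center + radius * cmath.exp(2j * math.pi * k / 8)
    print(z, abs(z + 1) / abs(z - 1))   # the ratio is 2 at every sample
```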
answered Jul 1, 2010 at 10:17 by Charles Matthews (community wiki)
$\begingroup$ Charles, you easily guessed that "my" engineers are electrical! The problem is (and I clearly see it after discussions with colleagues from EE) that they needed it one century ago but hardly need (in a reasonable generality) nowadays. Only concepts and rules of manipulation are required. :-( +1 As for your example, I don't see serious benefits for changing $\mathbb R^2$ by $\mathbb C$ (what happens if I am later asked about the same or similar problem in $\mathbb R^3$?). $\endgroup$
– Wadim Zudilin
Commented Jul 1, 2010 at 10:28
$\begingroup$ I do see an undeniable benefit. If you are later asked about it in $\mathbb{R}^3$ then you use vectors and dot product. The historical way would have been to use quaternions; indeed, this is how the notion of dot product crystallized in the work of Gibbs, and more relevantly for your EE students, Oliver Heaviside. $\endgroup$
– Victor Protsak
Commented Jul 2, 2010 at 1:26
$\begingroup$
I always like to use complex dynamics to illustrate that complex numbers are "real" (i.e., they are not just a useful abstract concept, but in fact something that very much exists, and closing our eyes to them would leave us not only devoid of useful tools, but also of a deeper understanding of phenomena involving real numbers). Of course I am a complex dynamicist, so I am particularly partial to this approach!
Start with the study of the logistic map $x\mapsto \lambda x(1-x)$ as a dynamical system (easy to motivate e.g. as a simple model of population dynamics). Do some experiments that illustrate some of the behaviour in this family (using e.g. web diagrams and the Feigenbaum diagram), such as:
The period-doubling bifurcation
The appearance of periodic points of various periods
The occurrence of "period windows" everywhere in the Feigenbaum diagram.
Then let $x$ and $\lambda$ be complex, and investigate the structure both in the dynamical and parameter plane, observing
The occurrence of beautiful and very "natural"-looking objects in the form of Julia sets and the (double) Mandelbrot set;
The explanation of period-doubling as the collision of a real fixed point with a complex point of period 2, and the transition points occurring as points of tangency between interior components of the Mandelbrot set;
Period windows corresponding to little copies of the Mandelbrot set.
Finally, mention that density of period windows in the Feigenbaum diagram - a purely real result, established only in the mid-1990s - could never have been achieved without complex methods.
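The real side of the period-doubling story is easy to observe numerically; here is a minimal sketch (parameter values chosen from well-known windows of the logistic family, not from any particular reference):

```python
# Sketch: iterate the logistic map x -> lam * x * (1 - x) and report the
# apparent period of the attractor for a few parameter values.
def logistic_orbit(lam, x0=0.5, warmup=1000, sample=64):
    x = x0
    for _ in range(warmup):            # discard the transient
        x = lam * x * (1 - x)
    orbit = []
    for _ in range(sample):
        x = lam * x * (1 - x)
        orbit.append(round(x, 9))      # round so nearly-equal points compare equal
    return orbit

def apparent_period(orbit):
    # smallest p such that the sampled orbit repeats with period p
    for p in range(1, len(orbit)):
        if all(orbit[i] == orbit[i - p] for i in range(p, len(orbit))):
            return p
    return None                        # chaotic, or period longer than the sample

for lam in (2.8, 3.2, 3.5):
    print(lam, apparent_period(logistic_orbit(lam)))
```

One sees the attractor's period go 1, 2, 4 as $\lambda$ increases through the first period-doubling bifurcations; the complex picture above explains where the new periodic points come from.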
There are two downsides to this approach. It requires a certain investment of time: even if done on a superficial level (as I sometimes do in popular maths lectures for an interested general audience), it requires the better part of a lecture. And it is likely to appeal more to those who are mathematically minded than to engineers, who might be more impressed by useful tools for calculations such as those mentioned elsewhere in this thread.
However, I personally think there are few demonstrations of the "reality" of the complex numbers that are more striking. In fact, I have sometimes toyed with the idea of writing an introductory text on complex numbers which uses this as a primary motivation.
answered Jul 28, 2010 at 9:34
community wiki
Lasse Rempe
$\endgroup$
$\begingroup$ Thanks a lot, Lasse, for the nice example. Can you give a link to a place where the dynamics is discussed in the complex case as well as the density of period windows in the Feigenbaum diagrams is discussed? $\endgroup$
– Wadim Zudilin
Commented Jul 28, 2010 at 9:42
$\begingroup$ Hm, I am not sure what the best place is - I guess you don't want me to reference a textbook on holomorphic dynamics! Devaney's "A first course in chaotic dynamical systems" has some things on Julia sets and the Mandelbrot set; I'll have a closer look if I get a chance. His "An introduction to chaotic dynamical systems", which is on a slightly more advanced level, also does, according to the table of contents - I don't have it handy. There are many other references, but I'll have to think a little bit whether I can come up with one that makes it easy for you to find what you need. $\endgroup$
– Lasse Rempe
Commented Jul 28, 2010 at 10:54
$\begingroup$ Density of period windows in the Feigenbaum diagram is the celebrated result on "density of hyperbolicity" (or "density of Axiom A") in the quadratic family. It was established independently by Lyubich and by Graczyk and Swiatek in the 90s. Lyubich had an article on the quadratic family in the AMS Notices a while back, I think on his stronger "Regular or Stochastic" theorem. $\endgroup$
– Lasse Rempe
Commented Jul 28, 2010 at 10:56
$\begingroup$ If you look for even more recent results in the same vein - all of which use complex methods - density of hyperbolicity was established for real polynomials by Kozlovski, Shen and van Strien a few years ago (appeared in the Annals of Mathematics). Even more recently, Sebastian van Strien and I extended this to some families of real transcendental entire functions such as $x\mapsto a\sin(x)+b\cos(x)$. In this proof, we, in particular, consider complex points that tend to $\infty$ under iteration by such a map - even though the map is bounded on the real line. (Available on the arXiv.) $\endgroup$
– Lasse Rempe
Commented Jul 28, 2010 at 11:00
$\begingroup$ Thanks for the tips, Lasse! I promise to follow them, although slowly, without bothering myself about the students, more educating myself. Thank you very much. $\endgroup$
– Wadim Zudilin
Commented Jul 28, 2010 at 22:54
$\begingroup$
From the point of view of engineers, the most obvious application of complex numbers is computing alternating currents.
Consider first direct current. If you have a network of resistors and want to compute the current in this network, or the potential of a node, then Kirchhoff's rules reduce this problem to a system of linear equations. Kirchhoff's rules are obvious, essentially saying that electric current cannot just disappear.
If you have alternating current, you have capacitors and inductors in addition to the resistors, but if you treat them as imaginary resistances depending on the frequency, the computations are exactly the same as in the direct-current case, just over another field. The alternative would be computing the phase shift separately from the current, which is much more effort and only works for very simple networks, e.g. oscillators.
Once you have learned Fourier analysis, this approach immediately tells you how a filter works, and whether a given network acts as a filter.
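As a concrete sketch (component values invented purely for illustration), the whole AC computation collapses to one complex division:

```python
import cmath, math

# Series RLC circuit driven at frequency f: represent the capacitor and
# inductor as imaginary resistances and compute the current phasor.
R, L, C = 100.0, 0.1, 1e-6      # ohms, henries, farads (made-up values)
f = 50.0                        # drive frequency in Hz
w = 2 * math.pi * f

Z = R + 1j * w * L + 1 / (1j * w * C)  # total complex impedance
V = 230.0                              # voltage phasor, taken with phase 0
I = V / Z                              # current phasor

print(abs(I))                          # current amplitude (A)
print(math.degrees(cmath.phase(I)))    # phase shift of current vs. voltage
```

One division yields both the amplitude and the phase shift at once, exactly the bookkeeping that would otherwise have to be done separately.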
answered Mar 16, 2019 at 9:24
community wiki
Jan-Christoph Schlage-Puchta
$\endgroup$
$\begingroup$
"Why do we need to study numbers which do not belong to the real world?"
I don't think you can answer this in a single class. The best answer I can come up with is to show how complicated calculus problems can be solved easily using complex analysis.
As an example, I bet most of your students hated solving the problem $\int e^{-x}\cos(x) dx$. Solve it for them the way they learned it in calculus, by repeated integration by parts and then by $\int e^{-x}\cos(x) dx\ \ =\ \ \Re \int e^{-x(1-i)}dx$. They should notice how much easier it was to use complex analysis. If you do this enough they might come to appreciate numbers that do not belong to the real world.
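Carried through, the complex route is two lines (a sketch of the computation):

$$\int e^{-x(1-i)}\,dx = \frac{e^{-x(1-i)}}{-(1-i)} + C = -\frac{1+i}{2}\,e^{-x}(\cos x + i\sin x) + C,$$

and taking real parts gives

$$\int e^{-x}\cos(x)\,dx = \frac{e^{-x}}{2}\,(\sin x - \cos x) + C,$$

which one can confirm by differentiating.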
edited Apr 14, 2021 at 6:07
community wiki
2 revs, 2 users 86%Daniel Parry
$\endgroup$
$\begingroup$ That idea to compute the integral does not use complex analysis :) $\endgroup$
– Mariano Suárez-Álvarez
Commented Jan 20, 2011 at 2:07
$\begingroup$
As an example to demonstrate the usefulness of complex analysis in mechanics (which may seem counterintuitive to engineering students, since mechanics is introduced over the reals), one may consider the simple problem of the one-dimensional harmonic oscillator, whose Hamiltonian equations of motion are diagonalized in the complex representation; equivalently, one needs to integrate a single (holomorphic) first-order ODE instead of a single second-order ODE or two first-order ODEs.
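To make this explicit (a sketch in standard notation, not tied to any particular textbook): for $H = \frac{p^2}{2m} + \frac{m\omega^2 x^2}{2}$, Hamilton's equations are $\dot x = p/m$ and $\dot p = -m\omega^2 x$. Setting $z = \sqrt{\frac{m\omega}{2}}\,x + \frac{i}{\sqrt{2m\omega}}\,p$ combines them into the single first-order equation $\dot z = -i\omega z$, with solution $z(t) = z(0)e^{-i\omega t}$; the real coordinates $x$ and $p$ are then recovered from the real and imaginary parts of $z$.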
answered Jul 1, 2010 at 15:08
community wiki
David Bar Moshe
$\endgroup$
$\begingroup$ Thanks, David! It goes in line with some other answers but it's interesting to see a different point of view on the linearised DE for the harmonic oscillator. $\endgroup$
Wadim Zudilin
– Wadim Zudilin
2010-07-02 08:49:13 +00:00
Commented Jul 2, 2010 at 8:49
187843 | https://www.chegg.com/homework-help/questions-and-answers/3-goal-exercise-prove-exponential-map-exp-r-0-00-ex-continuous-strictly-increasing-bijecti-q87045530 | Solved 3. The goal of this exercise is to prove that the | Chegg.com
Question:

3. The goal of this exercise is to prove that the exponential map $\exp : \mathbb{R} \to (0, +\infty)$, $x \mapsto e^x$, is continuous, strictly increasing and bijective, and its inverse function is continuous. The inverse function of $\exp$ will be denoted by $\log : (0, +\infty) \to \mathbb{R}$. Recall that in a previous homework we defined $\exp(x)$ as

$$\exp(x) := \lim_{n \to \infty} \left(1 + \frac{x}{n}\right)^n$$

for any $x \in \mathbb{R}$. We also proved the following properties:

i) if $x > -1$, then the sequence $a_n(x) = (1 + \frac{x}{n})^n$ is strictly increasing (during the discussion from week 5, you showed that $(a_n(x))_n$ is increasing when $n > -x$, which in particular tells us that it is globally increasing when $x > -1$);

ii) $\exp(0) = 1$ (it follows immediately from the definition), and $\exp(x) > 0$ for every $x \in \mathbb{R}$ (this follows from the fact that $1 + \frac{x}{n} > 0$ for $n$ sufficiently large, and the sequence $(a_n(x))_n$ is increasing when $n > -x$);

iii) for any $M \in \mathbb{N}$ and for any $x \le M$, we have $e^x \le e^M$ (this follows from the argument provided to see that the sequence $(a_n(x))_n$ is bounded);

iv) for every $x \in \mathbb{R}$, $\exp(-x) = \exp(x)^{-1}$;

v) for every $x, y \in \mathbb{R}$, $\exp(x + y) = \exp(x)\exp(y)$.

We will break the proof into small steps:

1. Show that, for every $x > -1$, we have $\exp(x) - 1 \ge x$;

2. Prove that the function $x \mapsto \exp(x)$ is strictly increasing, i.e. for every $x < x'$ we have $\exp(x) < \exp(x')$;

3. Show that, for every $x \in \mathbb{R}$ and $n \in \mathbb{N} \setminus \{0\}$, $\exp(x/n)^n = \exp(x)$, and deduce that $\lim_{n \to \infty} \exp(1/n) = \lim_{n \to \infty} \sqrt[n]{e} = 1$ (we saw in an exercise the fact that, for every $a > 0$, $\sqrt[n]{a}$ tends to $1$ as $n$ goes to $\infty$). Similarly we have that $\lim_{n \to \infty} \exp(-1/n) = 1$;

4. Prove that the function $x \mapsto \exp(x)$ is continuous at $x = 0$ (Hint: by the previous point, for every $\varepsilon > 0$ there exists a large natural number $n = n_\varepsilon \in \mathbb{N}$ such that $1 - \varepsilon < e^{-1/n} \le e^{1/n} < 1 + \varepsilon$. Now use the fact that $\exp$ is increasing to conclude);

5. Using property v) and the continuity at $x = 0$, deduce the continuity of $\exp$ at every point $x \in \mathbb{R}$;

6. Prove that $\sup\{\exp(x) \mid x \in \mathbb{R}\} = +\infty$ and $\inf\{\exp(x) \mid x \in \mathbb{R}\} = 0$, and deduce that $\exp : \mathbb{R} \to (0, +\infty)$ is bijective.

Finally, combining this with the exercise seen in the last discussion, we can deduce that the inverse $\log = \exp^{-1} : (0, +\infty) \to \mathbb{R}$ is continuous, bijective and strictly increasing. Here are a few consequences of this fact:

• for any $a > 0$ and $b \in \mathbb{R}$, we can now define $a^b := \exp(b \log a) > 0$. The function $b \mapsto a^b$ satisfies $a^{b+b'} = a^b a^{b'}$ and $a^{-b} = 1/(a^b)$. If $a > 1$, then $b \mapsto a^b$ is strictly increasing, and if $a < 1$, it is strictly decreasing. In both cases, $b \mapsto a^b$ is bijective as a function from $\mathbb{R}$ to $(0, +\infty)$. The logarithm function $\log_a : (0, +\infty) \to \mathbb{R}$ can then be defined as the inverse of $b \mapsto a^b$, and it is continuous for every $a > 0$.

• since $e^x > 0$ for every $x \in \mathbb{R}$, the inequality $e^x - 1 \ge x$ is true for every $x \in \mathbb{R}$. This implies that for every $t > -1$, we have $\log(1 + t) \le t$. In particular, $\lim_{n \to \infty} \log(\sqrt[n]{n}) = \log(1) = 0$, which implies that $\lim_{x \to 0^+} x \log x = 0$.
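A quick numerical sanity check of the definition (a sketch; floating point only, with the standard-library `math.exp` and `math.log` used purely for comparison, not as part of the proof):

```python
import math

def exp_by_limit(x, n=10**7):
    """Approximate exp(x) by the defining limit (1 + x/n)^n."""
    return (1 + x / n) ** n

# The limit expression approaches the usual exponential...
for x in (-2.0, 0.0, 1.0, 3.5):
    print(x, exp_by_limit(x), math.exp(x))

# ...and log undoes exp, as the exercise concludes.
print(math.log(math.exp(2.0)))
print(math.exp(math.log(5.0)))
```

For fixed $x$ the error of $(1+x/n)^n$ behaves like $x^2 e^x / (2n)$, so $n = 10^7$ already gives several correct digits.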
187844 | https://www.youtube.com/watch?v=8zO0algdgNM | Primary productivity in ecosystems| Matter and Energy Flow| AP Environmental Science| Khan Academy
Posted: 13 Jan 2021
Primary productivity is the rate at which solar energy (sunlight) is converted into organic compounds via photosynthesis over a unit of time. Net primary productivity is the rate of energy storage by photosynthesizers in a given area, after subtracting the energy lost to respiration.
Transcript:
Intro

In this video we're going to talk about energy, and in particular we're going to talk about the energy of life: the energy that I need to live and all of us need to live, the energy you need to think, the energy I'm using to make this video right now. Some of you might already guess where this energy is coming from. The surface of our planet is constantly being bombarded with light energy from the sun, and you might know that there are certain organisms on our planet that are capable of taking that light energy and then storing it as chemical energy. There are many types, but the ones that we see most often in our day-to-day life are plants.

So let's imagine a plant here. What it's doing is using that light energy in conjunction with water (typically from the soil, which maybe it's getting through its roots) and carbon dioxide in the air, and it's using that light energy to actually stick, or you could say fix, the carbon to construct itself. In its own tissue it's storing that energy, and if it were to break down that tissue it could release that energy in various forms. Now, as it does this, you might also be familiar that these photosynthesizers, or primary producers, or autotrophs, are also releasing molecular oxygen.

If we were to describe this in chemistry terms, we would describe this process of photosynthesis as taking carbon dioxide from the air in conjunction with water from the soil, and what's fueling all of this is light energy, usually from the sun. What that yields is the tissue of the plant, which is actually storing that energy as chemical energy in an organic form. The primary way that this is done is through glucose, which is C6H12O6. I know what you're thinking: not all plants taste sweet. Well, if you take chains of sugars and put them together you get carbohydrates, and if you adapt them a little bit you get things like starches, and that's what most of the plant tissue is: variations of this linked together. But this is where the energy is stored, energy stored in the actual plant tissue. And then of course it releases that molecular oxygen.

Photosynthesis

This is the process of photosynthesis, and you can see it even in the parts of the word: "photo" refers to light, and "synthesis" refers to putting something together, synthesizing something. So in photosynthesis you're using light to put together, essentially fix, the carbon to store energy. Now you might say: all right, that's nice, I'm storing the energy this way, but how do I actually use the energy? That's something all of us are doing, something all living systems have to do, and that process is respiration. You could already guess what the chemical reaction for respiration will look like. You're going to start with our stored energy, our glucose, C6H12O6, in the presence of oxygen (and since we're respiring all the time, this is why we need to breathe oxygen). This is going to yield carbon dioxide, which is why we exhale more carbon dioxide than we inhale; it's also going to release water; and it's going to release, and this is the whole point of it, cellular energy. In other videos that you'll see in a biology class, we'll talk about how this form of stored energy gets converted to other forms, and how that's used by the various machinery in cells to actually live, to reproduce, and in many cases to move.

Measurement

Now, an interesting question is: how do you measure how much photosynthesis, how much primary productivity, is going on? One way to think about it is to find an ecosystem and take a certain area of the surface of that ecosystem (it could be a terrestrial ecosystem on land, it could be a marine ecosystem), and then ask: for this area, in a given period of time, oftentimes a year, how much stuff is growing? Obviously, or so it would seem, the more stuff that is growing, the more photosynthesis is taking place. And the way that this growth is measured: you can either measure it in terms of grams of biomass, where biomass is just a fancy way of saying the mass of biological stuff growing on this area (usually they'll take the water out so they get a consistent measurement), or you can convert this to calories, and it's usually measured in thousands of calories, kilocalories. When you see calories on a packaged food label, what most of us think of as calories, those are actually kilocalories in scientific terms. And I know what you're thinking: wait, mass and kilocalories? Calories are a form of energy. Well, you can go between those two things, because a gram of a certain type of biomass will have a certain amount of energy stored in it. It's not energy that all animals, or that we, could necessarily use, but it does have energy in it.

Net Primary Productivity

Now, when we talk about this primary productivity, you might already be thinking: don't the plants need to use some of the energy that they are producing themselves to live? And my answer to you is: of course they do. In fact, that's probably the most important reason why they need to photosynthesize: they need to do respiration in order to grow and metabolize and live and reproduce. So when you see how much has been produced in a given area in a given year, you're actually seeing the net primary productivity. You could think about it as how much photosynthesis they did minus how much respiration they did. So if you think of how much photosynthesis they did as gross primary productivity, the total amount of photosynthesis, and then you subtract out the amount of energy (chemical energy or cellular energy) they needed for respiration, that gives you the net primary productivity.

As I mentioned, just to make things a little bit tangible: if you took a very productive ecosystem, say something like the rainforest that I have here in the background, a square meter of it produces on average about 2,000 grams of biomass in a year. So here we would say that the net primary productivity of this rainforest is approximately 2,000 grams per square meter per year. If you wanted to think about this in terms of kilocalories, you just have to ask how many kilocalories each gram of biomass is. It depends on the type of biomass, but let's say we have 4 kilocalories per gram of biomass. Then we could also say that this net primary productivity is equal to 2,000 grams per square meter per year times 4 kilocalories per gram; the grams cancel out, and 4 times 2,000 is 8,000, so we get 8,000 kilocalories per square meter per year. That would be the net primary productivity, because that's after the plants have been doing respiration.

Now, how would you figure out gross primary productivity? You're not going to be able to do it directly, but you can figure it out from the rate of respiration. If you took some plants in that ecosystem and put them in a dark room with no light, and then saw how much oxygen they are having to use, that gives you a sense of how much respiration they are doing; and there are ways to look at the ratios of the oxygen and the carbon to figure out exactly how much respiration is going on. Then, if you know the net primary productivity and the rate of respiration, you can figure out the gross primary productivity.

I will leave you there, because these are really useful measurements. It's useful to think about where all of the energy that allows us to live comes from, but it's also useful for ecologists to think about how productive a system is, or what's making it more or less productive. And as we'll see, these numbers are for sure on the high end of net primary productivity; if we were in a desert type of ecosystem, this number might be in the low hundreds, not the 8,000 range.
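The arithmetic in the transcript can be sketched in a few lines (the 4 kcal/g conversion factor is the illustrative one used above, not a universal constant):

```python
def npp_kcal(biomass_g_per_m2_yr, kcal_per_gram=4.0):
    """Convert net primary productivity from g/m^2/yr to kcal/m^2/yr."""
    return biomass_g_per_m2_yr * kcal_per_gram

def gross_primary_productivity(npp, respiration):
    """GPP is NPP plus the energy the producers spent on respiration."""
    return npp + respiration

print(npp_kcal(2000))  # rainforest example: 2000 g/m^2/yr -> 8000 kcal/m^2/yr
```

The second function expresses the video's point that GPP cannot be measured directly: you recover it from the measured NPP plus a separately measured respiration rate.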
187845 | https://mangolanguages.com/resources/learn/grammar/english/participles-in-english-what-are-they-and-how-are-they-used | English Articles
Participles in English: What are they and how are they used?
By: Isabel McKay Fri Sep 13 2024
English
Verbs,
Adjectives,
Grammar Tips
A participle is a word made from a verb (a word describing an action, state, or occurrence, like “to run,” “to exist,” “to happen”) and used as an adjective (a word that describes a noun) or as part of a compound tense (a tense formed using at least two verbs, as in “I am running”). There are two kinds of participles in English:
the present participle: verb + -ing
Used for:
the continuous tenses (e.g. He is running)
describing a noun that performs or does an action (e.g. a sleeping baby)
the past participle: verb + -ed or an irregular form (one not formed according to the usual grammar rules)
Used for:
the perfect aspect (e.g. He has walked)
passives (e.g. She was seen)
describing a noun that underwent or experienced an action (e.g. a burnt pancake)
In this post we’ll go through how to form and use both kinds of participles in detail, then we’ll go through all of their major uses. Then, for more advanced learners, we’ll look at how to form different kinds of participle phrases in English.
Now that I’ve said that, let’s get down to a concentrated explanation of the entertaining world of participles in English! Having read that introduction, you can bet that lots of information will be covered!
Table of Contents
What are the types of participles in English?
How to form the present participle?
How to form the past participle?
How to use participles in English?
When are participles used in verb tenses?
Using the present participle in the past, present, and future continuous
Using the past participle in the past, present, and future perfect
How are participles used as adjectives?
Past participles vs. present participles as adjectives
How to use the past participle in the passive voice?
An advanced use of participles: What is a participle phrase?
What is a “classic” participle phrase?
What is a perfect participle?
How to use participles in reduced relative clauses?
What is a dangling participle?
Summary
What are the types of participles in English?
The words we call participles are actually two verb forms in English:
the present participle (also called the “active participle” or “imperfect participle”)
being, hoping, going, having…
the past participle (also called the “passive participle”)
root + -ed, or an irregular form
been, hoped, gone, had, walked...
Please don’t be confused by the words “present” and “past”! We use these participles in all three tenses: past, present, and future!
A better way to think of these is as “finished” (past) and “unfinished” (present) participles. Want to learn why? Keep reading!
Let’s get down to brass tacks and look at them one at a time!
How to form the present participle?
To form the present participle, just take the basic root of the verb and add -ing! It’s easy! For example:
| Root | Present participle |
| --- | --- |
| dance | dancing |
| cook | cooking |
| study | studying |
| cut | cutting |
| find | finding |
| speak | speaking |
Tip
Though present participles are completely regular in speech, there are some special spelling rules to learn. For example:
make → making
→ e at the end is deleted
cut → cutting
→ some final consonants are doubled
Luckily these rules do not affect pronunciation and they are pretty simple to learn. Have a look at our reference sheet for spelling words with suffixes in English to learn more.
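The spelling rules above are regular enough to express in code. Here is a minimal Python sketch of the -ing rules described in this section; the function name present_participle is our own, and the sketch deliberately ignores exceptions not covered by the tip (e.g. be → being, die → dying, and stress-dependent doubling as in open → opening):

```python
VOWELS = set("aeiou")

def present_participle(root: str) -> str:
    """Form root + -ing, applying the common spelling rules:
    double a final consonant after a single vowel (cut -> cutting),
    drop a silent final e (make -> making)."""
    # Consonant doubling: word ends consonant-vowel-consonant, and the
    # final consonant is not w, x, or y (cook and speak end in
    # vowel-vowel-consonant, so no doubling there).
    if (len(root) >= 3 and root[-1] not in VOWELS and root[-1] not in "wxy"
            and root[-2] in VOWELS and root[-3] not in VOWELS):
        return root + root[-1] + "ing"
    # Silent-e deletion: dance -> dancing (but see -> seeing keeps its e).
    if root.endswith("e") and not root.endswith("ee") and len(root) > 2:
        return root[:-1] + "ing"
    return root + "ing"

print(present_participle("make"))   # making
print(present_participle("cut"))    # cutting
print(present_participle("study"))  # studying
```

Note that -y is left alone before -ing (study → studying), unlike before -ed, which is exactly why the two suffixes get separate spelling tips in this post.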
How to form the past participle?
There are two kinds of past participles in English: regular and irregular
Regular participles have a predictable form: root + -ed:
| Root | Past participle |
| --- | --- |
| dance | danced |
| look | looked |
| study | studied |
Tip
When we add -ed to the end of an English word, there are some spelling rules you will have to follow. For example:
study → studied
→ y at the end becomes i
dance → danced
→ e at the end is dropped
stop → stopped
→ certain final consonants are doubled
Luckily these rules do not affect pronunciation and they are pretty simple to learn. Have a look at our reference sheet for spelling words with suffixes in English to learn more.
Irregular participles are a little trickier because they have forms that must be memorized.
There are about 200 irregular verbs in English, and you can find their participles in the third column of most irregular verb lists:
| Root | Simple past | Past participle |
| --- | --- | --- |
| be | was / were | been |
| cut | cut | cut |
| have | had | had |
| go | went | gone |
| speak | spoke | spoken |
Tip
Have a look at this chart of irregular verbs in English! We’ve given you two versions:
A version that sorts the verbs by skill level (A1-C2)
A version that sorts the verbs into categories to help you see the different patterns
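The two halves of this section can be combined into one small sketch: look the verb up in an irregular table first, and fall back to root + -ed with the spelling rules otherwise. This is an illustration in Python, not a complete conjugator — the IRREGULAR dictionary holds only a handful of the roughly 200 irregular verbs, and stress-dependent doubling exceptions (e.g. visit → visited) are not handled:

```python
VOWELS = set("aeiou")

# A tiny sample of the irregular table; real verb lists have ~200 entries.
IRREGULAR = {"be": "been", "cut": "cut", "have": "had",
             "go": "gone", "speak": "spoken"}

def past_participle(root: str) -> str:
    """Past participle: memorized irregular form if there is one,
    otherwise root + -ed with the usual spelling adjustments."""
    if root in IRREGULAR:
        return IRREGULAR[root]
    if root.endswith("y") and len(root) > 1 and root[-2] not in VOWELS:
        return root[:-1] + "ied"       # study -> studied
    if root.endswith("e"):
        return root + "d"              # dance -> danced
    # Double a final consonant after a single vowel (not w, x, or y).
    if (len(root) >= 3 and root[-1] not in VOWELS and root[-1] not in "wxy"
            and root[-2] in VOWELS and root[-3] not in VOWELS):
        return root + root[-1] + "ed"  # stop -> stopped
    return root + "ed"                 # look -> looked

print(past_participle("study"))  # studied
print(past_participle("go"))     # gone
```

The lookup-before-rule order mirrors how learners actually use a verb list: check the third column of the irregular chart, and only apply -ed when the verb isn’t there.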
How to use participles in English?
You will probably first use participles to form the continuous and the perfect aspect in each tense:
participle
Continuous:
I am teaching you participles right now.
Perfect:
You have seen participles used as verb forms!
But we also use participles as adjectives, that is, words that modify nouns:
participle
noun
Use our learning guides to help you understand.
Please clean up the broken glass!
Grammar term watch!
When we use a verb as a noun, we call this a gerund in English! A gerund looks exactly like a present participle (root + -ing), but because it’s used as a noun, we call it something else.
gerund
Speaking two languages is important in today’s world.
Your reading has improved a lot since last September.
Past participles are also used to form the passive voice in English – sentences where the subject has an action done to them:
Active voice:
Pedro ate a pear.
→ subject does the action
Passive voice:
A pear was eaten (by Pedro).
→ action is done to the subject
Tip
In this post we’re looking at the main uses of participles in English, but you will find them in a few other cases as well!
Now, let’s look at each of these uses in detail!
When are participles used in verb tenses?
In all three tenses (past, present, and future), participles are used to form verbs in the continuous and perfect aspects. To do this, each participle is used with a specific auxiliary (helping) verb: the present participle goes with the auxiliary be, and the past participle goes with the auxiliary have.
Continuous aspect:
be + present participle
Perfect aspect:
have + past participle
Take a look at how this works in all three tenses!
Using the present participle in the past, present, and future continuous
To use the present participle in the past, present, or future continuous, just change the tense of the auxiliary verb be to the past, present, or future! The present participle does not change.
Past continuous:
Mark was playing the piano in the studio yesterday.
We were playing...
Present continuous:
I am reading an interesting book right now.
You are reading…
She is reading…
Future continuous:
Patty will be entertaining guests at her house tonight.
The continuous aspect is usually used for actions that continue for a period of time, but its use is a little different in each tense. Check out our posts on the present, past, and future continuous to learn more!
Using the past participle in the past, present, and future perfect
To use the past participle in the past, present, or future perfect, just change the tense of the auxiliary verb have to the past, present, or future. The past participle does not change.
Past perfect:
Peter had seen that movie many times before.
Present perfect:
They have finished their test now.
She has finished…
Future perfect:
Muriel will have made twelve apple pies before the end of the week.
The perfect aspect is usually used to talk about something that finishes before an action or time that is the focus of a story.
I have seen this movie before…
→ before "now"
…so I know that it is good.
...so I do not need to see it again.
→ "now": the focus of the story.
We use the perfect when the fact that an earlier action happened impacts how we understand the event or situation we are focused on. Read more in our posts on past, present, and future perfect!
Tip
The perfect continuous tenses in English use both a past participle and a present participle!
I had been wondering about that!
How are participles used as adjectives?
Both present participles and past participles are commonly used as adjectives. That means that a participle can modify a noun (or noun phrase!), adding more information about it.
Just like any adjective, participles as adjectives are usually placed before the noun they modify:
adjective
noun
Patty doesn’t like raw fish.
There was a large cat in the window.
participle as adjective
noun
Patty likes fried fish.
There was a sleeping cat in the window.
Like other adjectives, past participles are often placed after a linking verb like be or seem:
linking verb
adjective
That fish is raw.
That cat looks sleepy.
linking verb
past participle
This fish is fried.
This cat looks tired.
But present participles usually do not follow a linking verb (except for be in a continuous tense!):
linking verb
present participle (as adjective)
❌ This cat looks sleeping.
❌ My eggs seemed burning.
✅ My eggs were burning.
→ This is the past continuous tense!
Past participles vs. present participles as adjectives
Past participles are used to describe nouns that underwent a finished action, but present participles describe nouns that are doing the action. For example:
Past participles:
This is fried fish.
→ Someone fried the fish.
This is a forgotten toy.
→ Someone forgot the toy.
This is a grown man.
→ The man is finished growing.
This is a painted house.
→ Someone painted the house.
Present participles:
This is a sleeping baby.
→ The baby is sleeping.
There is a leaking sink.
→ The sink is leaking.
This is a running horse.
→ The horse is running.
This is a growing boy.
→ The boy is growing.
Tip
Gerunds (verbs used as nouns) can be used to describe the purpose of a noun. Because gerunds look just like present participles (verb + -ing), you’ll need to use context to decide the meaning:
a cleaning rag:
✅ a rag for cleaning (gerund)
❌ the rag is cleaning (participle)
a growing boy:
❌ a boy for growing (gerund)
✅ the boy is growing (participle)
How to use the past participle in the passive voice?
Use the past participle after the verb be to form the English passive voice:
be
past participle
The subject of a passive verb is the noun that undergoes an action, not the noun that does the action:
The ball was kicked.
The book will be purchased for John.
Mary is being given a present.
Notice that the past participle does not change when you change the tense of be!
In English, we use the passive voice when the object of an action is the topic of a sentence (the noun a sentence is “about”). For example:
I was talking to Mary’s mom today, and she said that Mary was caught using her cell phone in class so she got punished.
→ Mary is the topic of conversation, so even though she is the object of the “catching” action, she is the subject of the verb.
An advanced use of participles: What is a participle phrase?
A participle phrase is a phrase that begins with either a past or present participle followed by some associated words. There are actually two types of participle phrases in English:
“Classic” participle phrases (surrounded by commas)
Remembering to bring his lunch, Tom left the house.
Reduced relative clauses (not surrounded by commas)
Rita visited a portrait painted by da Vinci.
Let’s talk about each type, and then we’ll talk about “dangling participles,” which are a common writing error that happens when you confuse the two!
What is a “classic” participle phrase?
A “classic” participle phrase is a participle phrase that is surrounded by commas and can only describe the subject of a sentence. These are often used in writing, but are less common in speech.
A classic participle phrase can come in three main places:
At the beginning of the clause:
participle phrase
subject
Exhausted by the hike, Tom fell onto the sofa and groaned.
Remembering to bring his lunch, Tom left the house.
After the verb and object / at the end of the clause:
subject
participle phrase
Tom fell onto the sofa, exhausted by the hike.
Tom left the house, remembering to bring his lunch.
After the subject (uncommon):
subject
participle phrase
Tom, exhausted by the hike, fell onto the sofa.
Tom, remembering to bring his lunch, left the house.
Notice that these participle phrases are always separated out by commas, and they always describe the subject.
What is a perfect participle?
A perfect participle is a type of “classic” participle phrase that is formed from a verb in the perfect aspect. So they look like this:
having
past participle
having slept all night...
having forgotten his teddy bear...
The perfect participle tells us about something that the subject of the sentence completed before the main action. Look:
past participle
Having finished her work, Mary turned off her computer and left the office.
→ Mary finished her work before she turned off the computer and left.
How to use participles in reduced relative clauses?
A reduced relative clause can also begin with a participle, but these follow different grammar rules. This is because they are actually just English relative clauses with some words deleted. For example:
| Full relative clause | Reduced relative clause |
| --- | --- |
| Rita watched her friend who was cutting the grass. | Rita watched her friend cutting the grass. |
| Rita bought a table that was made of wood. | Rita bought a table made of wood. |
Recall that relative clauses in English are clauses (with a subject and a verb) that describe a noun.
If the verb in the relative clause is be and it is followed by a participle, you can delete the relative pronoun and the verb be. This is how we create reduced relative clauses that begin with participles!
But because reduced relative clauses are still relative clauses, they follow the rules that relative clauses follow. For example:
Reduced relative clauses can describe any noun in the sentence, not just the subject:
Rita watched her neighbor cutting the grass.
→ The neighbor cut the grass.
Rita bought a table made from wood.
→ The table is made from wood.
Reduced relative clauses always come right after the noun phrase they describe, but they do not need to be separated by commas.
❌ Cutting the grass, Rita watched her neighbor.
→ This is only allowed if it is a classic participle phrase, describing Rita like the examples we saw above.
What is a dangling participle?
A dangling participle is a common writing error in English that happens when it is unclear which noun a participle phrase describes.
Sometimes, this is because the noun the participle phrase describes is completely missing from the sentence:
❌ Walking in the garden, weeds sprouted everywhere.
→ The weeds are walking in the garden?!?
Fixing this is easy! Just make sure there is a subject in the sentence!
✅ Walking in the garden, Mary saw weeds everywhere.
→ Now we know who was walking in the garden!
Other times it happens because you are using a “classic” participle phrase without the proper punctuation (no commas), so it looks like it is a reduced relative clause!
❌ Tom watched the zebra sitting on the porch.
→ Intended meaning: Tom was sitting on the porch. Tom saw a zebra.
→ Actual meaning: Tom saw a zebra. The zebra was sitting on the porch.
This second kind of dangling participle is easy to fix in writing (just add the commas!) but it can be confusing in speech, because we cannot see the commas! Even in writing, it is better to rephrase a sentence like this, because they can confuse readers.
If you want to be very clear, it is best to put a “classic” participle phrase close to the noun it describes, if you can!
✅ Sitting on the porch, Tom watched the zebra.
A third kind of dangling participle comes about because a few English prepositions come from present participles. For example:
Rita watched her neighbor using binoculars.
This sentence is correct, but has two different meanings:
If we read the word using as a participle, this is a reduced relative clause, so the sentence means:
Rita watched her neighbor who was using binoculars.
But if we read the word using as a preposition, then the phrase using binoculars tells us how the watching happened instead. So the sentence means:
Rita watched her neighbor by means of binoculars.
To clarify sentences like these, it is best to just rephrase what you are trying to say!
Good for you! You’ve discovered a lot more about the participle in English!
Having read this post, you’ve seen that:
there are two types of participles in English:
the present participle (root + -ing)
the past participle (root + -ed / irregular form)
participles are used to form…
the continuous tenses (be + present participle)
the perfect tenses (have + past participle)
participles are also used as adjectives to modify nouns
participles can be combined with other words to make participle phrases, which fall into two main groups:
“classic” participle phrases → separated by commas, describe the subject
reduced relative clauses → no commas, follow the noun they describe
That was a bunch of information there! If you’re ready to practice, have a look at our English participle activities. We’ve also got a useful chart that covers comma use with participles and participle phrases.
Brain and Development
Volume 20, Issue 4, April 1998, Pages 209-221
Review article
CT and MR imaging of cerebral tuberous sclerosis
Yuichi Inoue a, Yutaka Nemoto a, Ryuusuke Murata b, Takahiko Tashiro a, Miyuki Shakudo a, Kinuko Kohno a, Osamu Matsuoka c, Kunizo Mochizuki a
Abstract
Tuberous sclerosis is a heredofamilial neurocutaneous syndrome, or phakomatosis, with multisystem involvement including the brain, kidney, skin, retina, heart, lung, and bone. The brain is the most frequently affected organ in tuberous sclerosis. Brain lesions in tuberous sclerosis are of three kinds: cortical tubers, white matter abnormalities, and subependymal nodules. We review the computed tomography (CT) and magnetic resonance (MR) features of the brain lesions in patients with tuberous sclerosis. CT clearly demonstrates calcified subependymal nodules. MR imaging demonstrates cortical and white matter lesions more clearly than CT, since MR imaging offers excellent image contrast between normal structures and high sensitivity in detecting pathological states, owing to intrinsic differences in proton density and, in particular, in the proton relaxation times of tissues. The possible pathogenesis of this disorder is also discussed.
Introduction
Tuberous sclerosis is a heredofamilial neurocutaneous syndrome, or phakomatosis, with multisystem involvement including the brain, kidney, skin, retina, heart, lung and bone. The brain is the most frequently affected organ in tuberous sclerosis. Although the earliest report of a patient with tuberous sclerosis is said to have been by von Recklinghausen (1862) in a neonate with cardiac rhabdomyomata, the name tuberous sclerosis was introduced by Bourneville (1880) in a 3-year-old girl with mental retardation, seizures and facial angiofibromas (cited in Ref.). In 1890, Pringle described the facial nevi of adenoma sebaceum. Vogt later emphasized the classic triad of seizures, mental retardation, and sebaceous adenoma. The eponym Pringle disease is used when there are only dermatological findings, and Bourneville disease when the nervous system is affected.
A recent Swedish study found the prevalence to be at least one in 6800 among children aged 11–15 years and one in 12 900 in a population aged 0–20 years. No racial or sexual predilection has been detected. Familial cases are inherited in an autosomal dominant fashion, but most cases probably arise sporadically; the rate of spontaneous mutation approaches 60–75% 2, 3, 4, 5. The tumor-like growth in different organs may include cells of more than one type, and examples of these are fibroblasts, cardiac myoblasts and angioblasts or glioblasts and neuroblasts 2, 3.
Due to multiple organ involvement and genetic heterogeneity, the disorder is called ‘tuberous sclerosis complex’ (TSC). Linkage of tuberous sclerosis to markers on chromosome 9q was first reported in 1987 and in further detail in the recent literature. It soon became clear that there is genetic heterogeneity. The gene on chromosome 9 is called TSC1. Another group has revealed an important tuberous sclerosis locus on chromosome 16p13 10, 11, which was confirmed later and designated TSC2. No significant phenotypical differences have been discovered between TSC1 and TSC2.
The traditional clinical triad of adenoma sebaceum, seizures, and mental retardation has been refined with appreciation of the complexity of the disorder. Recently, diagnostic criteria have been established by the National Tuberous Sclerosis Association (Table 1). A definitive diagnosis is established when one of the following is present: (1) cortical tubers or subependymal nodules, (2) multiple retinal astrocytomas, (3) facial or truncal adenoma sebaceum (angiofibromas) or ungual angiofibromas, or (4) subependymal giant cell astrocytomas. In the absence of a primary finding, the diagnosis can be made if any two of the following are present: hypopigmented macules, shagreen patches, a single retinal tuber, multiple renal hamartomas (angiomyolipomas), cardiac rhabdomyoma, and tuberous sclerosis in a first-degree relative. Many of the clinical manifestations, such as facial angiofibromas, ungual fibromas, and subependymal nodules, may not be apparent in infancy and childhood. Mental retardation and seizures are common in this disorder, but because of their lack of specificity they are not included in the diagnostic criteria.
As described earlier, the diagnosis of tuberous sclerosis has been based on the classic triad of characteristic cutaneous lesions, seizures, and mental retardation. Because intellectual deficits are seen in only 40% of patients and the cutaneous and intellectual deficits are not clinically apparent in the first 2–3 years of life, the radiologic manifestations of tuberous sclerosis have taken on considerable clinical importance in the diagnosis of this disorder in infants and young children.
Plain radiographs of the skull demonstrate periventricular calcification infrequently (Fig. 1). Historically, pneumoencephalography demonstrated the characteristic ‘candle guttering’ appearance of nodular subependymal masses. The cortical tubers that dominate the gross pathologic appearance of this disorder, however, were generally not discernible on plain radiography or pneumoencephalography. Computed tomography (CT) and magnetic resonance (MR) imaging revolutionized the role of radiologic evaluation of this disorder. CT clearly demonstrates calcified nodules, but it fails to demonstrate cortical tubers in spite of the presumed presence of such lesions in most patients. MR imaging demonstrates cortical tubers and white matter lesions above a certain size, but it frequently fails to depict subependymal nodules.
MR imaging utilizes programmed sequences of pulses to produce different kinds of images such as T1-weighted, T2-weighted, and proton-weighted images. In this article, unless stated otherwise, T1-weighted images indicate spin-echo T1-weighted images (TR: 500–600 ms, TE: 15–20 ms), and T2-weighted images indicate either spin-echo T2-weighted images (TR: 2000–3000 ms, TE: 90–100 ms) or fast spin-echo T2-weighted images (TR: 4000–6000 ms, Teff: 80–100 ms).
The cerebral lesions in tuberous sclerosis are of three kinds: cortical tubers, white matter abnormalities, and subependymal nodules. They are almost always benign hamartomas. Bender and Yunis have suggested that the same cellular components are present in all the brain lesions in tuberous sclerosis, and that they represent a combination of both neuronal and astrocytic features.
Cortical tubers are the most characteristic lesions of tuberous sclerosis at pathologic examination. Varying in size from millimeters to several centimeters, tubers are rounded or wart-like protrusions of single or adjacent gyri, very firm to touch and pale in color 1, 5, 13. They may be wide and flat or dimpled. Tubers expand the gyri and blur the margin between the gray and white matter 1, 13. They may be present in the depths of sulci. Histologically, tubers are characterized as atypical giant astrocytes, abnormal malorientated neurons or many indeterminate cells, and lack the normal six-layered lamination of the cortical gray matter. Numerous glial processes and fibers, especially in the subpial layers, make the tissue abnormally firm or sclerotic on palpation 5, 6. They also have gliosis and abnormal myelination. The gliosis and absence of myelin in cortical tubers may extend into the underlying subcortical white matter. Tubers are occasionally present in the cerebellum; disorganized cortex, abnormal astrocytes and Purkinje cells, and calcification are the main features 1, 13.
On MR imaging, tubers are found in 95% of patients with tuberous sclerosis 14, 15. They show somewhat thick cortical gray matter and a less distinctive gray-white matter junction compared with the normal cortex 15, 16, 17 (Fig. 2). The peripheral component of tubers is isointense to mildly hyperintense to normal gray matter on both T1- and T2-weighted images, and the inner core of tubers is isointense to hypointense to gray matter on T1-weighted images and hyperintense on T2-weighted images 15, 16, 17, 18 (Fig. 2, Fig. 3). These signal intensity changes reflect increased interstitial water in the subcortical area due to absence of myelin and looseness of tissue structures. The inner core of tubers may show a signal that is isointense to cerebrospinal fluid on both T1- and T2-weighted images 18, 19 (Fig. 3). Although uncommon, part of the inner core may be hyperintense to gray matter on T1-weighted images 18, 19.
Cerebrospinal fluid in cerebral sulci or over the brain surface shows a hyperintense signal on T2-weighted images and may produce a partial volume effect, which may obscure small cortical tubers. The recently developed fluid attenuated inversion recovery (FLAIR) sequence, which produces T2-weighted images with cerebrospinal fluid appearing hypointense, appears to delineate cortical lesions as well as small white matter lesions clearly, and may occasionally be helpful 21, 22 (Fig. 3). Magnetization transfer imaging, another new technique that suppresses signals from protein-bound water protons, seems an effective modality for improving the detectability and signal-to-noise ratio for all intracranial lesions of tuberous sclerosis.
In infants less than 1 year old, and particularly in those less than 6 months old, the appearance of cortical tubers is different compared with children older than 2 years, when myelination of white matter comes close to the adult pattern. In the brain of young infants, there is less myelination, and a high water content in white matter is present, which results in hyperintensity on T2-weighted images. Thus, the inner core of cortical tubers in very young children is hyperintense to premyelinated white matter on T1-weighted images and is hypointense to premyelinated white matter on T2-weighted images 14, 25, 26 (Fig. 4).
In adult patients with tuberous sclerosis, high signal intensity in tubers on T2-weighted images is less often seen than in children. In our patients who had follow-up MR studies, the high signal on T2-weighted images seen at a younger age became less distinct and often disappeared (Fig. 5). These suggest that dysmyelinated white matter subjacent to tubers may gain myelination with increasing age. Enhancement of tubers following intravenous administration of gadopentate meglumine (GdDTPA) has been reported to occur in less than 5% of patients . Some cortical tubers may distort the adjacent gyrus but do not demonstrate high signal intensity on T2-weighted images (Fig. 2, Fig. 8). Visual inspection of the cortical areas on T2-weighted images only fails to identify tubers that do not show hyperintensity. Therefore, it is important to look for gyral deformity (usually expanded), abnormal thickening of the cortical gray matter, and/or blurring of the gray-white matter junction. When only the latter findings are observed, cortical dysplasia must be considered 27, 28. It should be kept in mind that formes frustes of tuberous sclerosis are not uncommon; a solitary cortical tuber unassociated with other features of tuberous sclerosis should not be misdiagnosed as a brain tumor 28, 29.
On CT scans, cortical tubers can occasionally be detected as a localized low-density area 30, 31, which probably reflects the inner core of the tuber (Fig. 3). Calcification in cortical tubers has been reported in as many as 54% of patients. The cortical calcification may be gyriform, simulating the appearance of Sturge–Weber syndrome. In young infants, cortical tubers may be radiologically occult (Fig. 6). With increasing age, the low-density area of the inner core of tubers diminishes and becomes less distinctive, and calcifications may appear.
Subependymal nodules are found in 95% of patients with tuberous sclerosis 14, 15. Subependymal nodules may occur in the third and fourth ventricular walls, but most are found in the lateral ventricular walls, near the sulcus terminalis, with their deeper parts embedded in the caudate or thalamus. They are firm, or stony hard due to calcification, and form round or elongated protrusions into the ventricles 1, 13. Calcification is common with increasing age. Multiple adjacent subependymal nodules resemble dripping wax, hence the nickname ‘candle guttering’ for the pneumoencephalographic appearance of these lesions. Histologically the nodules are composed of giant cells with glial and neural features and may have many vascular elements to account for the tumor stain, while in neonates there may be scattered neuroblasts 1, 4. They are covered by an intact layer of ependyma.
On CT scans, subependymal nodules are easily detected because of calcification. They are rarely calcified in the first year of life; the number of calcifications increases with the age of the patient. Calcification may be globular, partial, or ring-like in appearance 17, 19 (Fig. 3, Fig. 7). MR imaging is less sensitive in depicting calcified nodules compared with CT. Gradient echo pulse sequences are known to be more sensitive for detection of calcium, iron, or other metals. Gradient echo T2*-weighted images have been reported to be useful in detecting calcified nodules in patients with tuberous sclerosis. Routine clinical use of T2* pulse sequences, we believe, is not necessary to study patients with tuberous sclerosis because CT is widely available. Subependymal nodules, compared with cortical gray matter, are commonly isointense to hyperintense on T1-weighted images and isointense to hypointense on T2-weighted images (Fig. 2, Fig. 3, Fig. 10) 16, 17, 18. Hypointensity on T2-weighted images is a reflection of the calcification in nodules. Relative hyperintensity on T1-weighted images seems characteristic of subependymal nodules but may partly reflect mild calcification itself: calcification of less than 30% has been shown to cause hyperintensity on T1-weighted images. On Gd-DTPA MR, 30–80% of subependymal nodules show enhancement 15, 36, 37 with a nodular, heterogeneous, or ring-like appearance. Enhancement of nodules does not necessarily denote neoplastic transformation. Periodic follow-up studies of enhancing lesions near the foramen of Monro are indicated because subependymal giant cell tumors occur almost exclusively at this site.
Grobus et al. were the first to report the association of subependymal giant cell tumor with tuberous sclerosis (cited in Ref.). The average age at the time of diagnosis is about 13 years, most commonly between 5 and 10 years. Subependymal giant cell tumors differ from the subependymal nodules by their size and tendency to enlarge. Their incidence in tuberous sclerosis is approximately 10–15% 4, 15, 33. Prominent blood vessels are frequently present in the tumors. It is believed that subependymal giant cell tumors originate from subependymal nodules of tuberous sclerosis 39, 40.
On CT scans, subependymal giant cell tumors are identified near the foramen of Monro as a partially calcified, solid mass with either ipsilateral or bilateral ventricular enlargement, and they enhance to various degrees after intravenous contrast injection (Fig. 7, Fig. 8). On MR imaging, those tumors are usually heterogeneous but can be homogeneous and are isointense to slightly hypointense on T1-weighted images and hyperintense on T2-weighted images, and show marked enhancement with Gd-DTPA. On rare occasions, they may have a cystic internal component (Fig. 8). Serpentine signal voids, which represent dilated vessels, are occasionally seen in the tumors 15. Cerebral angiography may demonstrate tumor vessels as well as a tumor stain. Although uncommon, subependymal giant cell tumors may bleed, resulting in intraventricular hemorrhage 42. Why subependymal giant cell tumors develop only in the area of the foramen of Monro has not been determined. Uncommonly, other cerebral tumors also occur in patients with tuberous sclerosis. These tumors seem to be an incidental occurrence and not specific to tuberous sclerosis. They include pilocytic astrocytoma, fibrillary astrocytoma, and diffuse gliomatosis of the cerebral hemispheres.
Islets consisting of a group of heterotopic clustered cells are invariably present in the white matter of patients with tuberous sclerosis 1, 13. Histologically the cells are often bizarre and gigantic, with characteristics of both neurons and glia . They are associated with areas of hypomyelination similar to those seen in the cortical tubers. Many of these lesions are microscopic and therefore do not appear on imaging studies . Low power microscopic examination of heterotopic clusters suggests that they are distributed along the most direct line between the ependymal wall and the tubers . This distribution corresponds to the normal migratory path of spongioblasts during embryogenesis, and it is during this migration that they differentiate 13, 43.
On CT, those lesions large enough to be seen demonstrate decreased density in the white matter without enhancement after intravenous contrast administration. Calcification may occur in white matter lesions (Fig. 9). On MR, these lesions show signal intensity similar to that of cortical tubers: isointense to hypointense on T1-weighted images and hyperintense on T2-weighted images. They demonstrate four distinctive patterns: straight or curvilinear bands 44, 15 that extend from the ventricular wall through the cerebrum toward the cortex (Fig. 10); wedge-shaped lesions; non-specific conglomerate foci (Fig. 3, Fig. 5); and cerebellar radial bands. Approximately 12% of these white matter lesions have been reported to show enhancement after contrast administration. The most interesting among these are the straight or curvilinear bands, which may reflect a possible pathogenesis of tuberous sclerosis, discussed later.
The other type of abnormal signal intensity in the white matter is a hyperintense linear band on both T1-weighted and T2-weighted images, which has been reported previously but, to our knowledge, not widely discussed. These lesions are uncommon, and some of them appear to enhance after contrast administration (Fig. 10, Fig. 11). On CT, these lesions are not visualized. Corresponding histologic investigation has not been done, but the band appears to correspond to the heterotopic clusters in white matter described by Donegani et al., some of which were distributed in a pattern similar to the normal migratory path of primitive neuroepithelial cells during embryogenesis. The reason why these lesions are hyperintense on T1-weighted images is not understood; microcalcification has been suggested. Hypermyelination is another possible cause of the T1 hyperintensity, but it has not been proven pathologically. Abnormal enhancement is probably related to the absence or incompleteness of the blood-brain barrier in these lesions. To maintain an intact blood-brain barrier, astrocytes require normal structure and function. Abnormal enhancement on contrast MR images reflects the presence of abnormal, bizarre-shaped astrocytes, which may not function as normal astrocytes or may not produce the protein necessary to maintain tight junctions (cited in Ref. ).
In neonates or young infants, white matter lesions look somewhat different, and are hyperintense to premyelinated white matter on T1-weighted images and hypointense to premyelinated white matter on T2-weighted images.
Cerebellar lesions have been described in tuberous sclerosis, but they are uncommon, occurring in approximately 10% of patients 33, 46. Lesions of the cerebellum are similar to those seen in the cerebral hemispheres, consisting of cortical tubers, heterotopic clusters in the white matter and subependymal nodules 1, 4, 12.
On microscopic examination, bizarre giant cell clusters are the main feature of cortical tubers, white matter lesions, and subependymal nodules. Much discussion has focused on the ontogenesis of these bizarre giant cells in tuberous sclerosis. Classic morphologic studies by light microscopy and electron microscopy have demonstrated astrocytic features. Immunohistochemical studies have shown neuronal features. Investigations using light microscopy, electron microscopy, and immunohistochemistry, considered together, indicate that these bizarre giant cells have both astrocytic and neuronal features in cortical tubers, subependymal nodules, white matter lesions, and even subependymal giant cell tumors. These findings suggest that the giant cell in tuberous sclerosis is the product of a dysgenetic event in early development, resulting in incomplete expression of astrocytic or neuronal differentiation in that cell 12, 49. At histologic examination, the white matter lesions of tuberous sclerosis show clusters of giant cells that characteristically align in rows that appear to follow the path of neuronal migration, streaming radially along the most direct line from the ependyma to the cortical tuber 12, 13. Although the straight or curvilinear lines that extend from the ventricular wall toward the cortex on MR seem to represent a migration anomaly, this may not be the primary abnormality of tuberous sclerosis but rather a secondary change due to the presence of bizarre-shaped giant cells. At present, it is reasonable to speculate as follows concerning the brain lesions of tuberous sclerosis: the primary brain abnormality appears to be in some of the germ cells of the germinal matrix 50, 24, probably under genetic control.
Some dysgenetic giant cells in the ventricular wall may completely migrate to the cortex, producing cortical tubers; some may incompletely migrate, producing white matter lesions; some may not migrate at all, producing subependymal nodules 12, 15, 24.
There has been much attention paid to types of seizures in tuberous sclerosis patients, particularly infantile spasms, and their possible relation to the anatomic location of tubers. A statistical analysis by Shephard et al. has shown that the occurrence of infantile spasms, rather than any other type of seizure, was related to the total number of tubers and not to the number in any one area of the brain. Several investigations have been unable to demonstrate a link between intelligence and the number of subependymal nodules 18, 52. In our experience with a limited number of patients, severely mentally retarded patients tended to have a higher number of tubers; no correlation was noted between the severity of mental retardation and either the number of subependymal nodules or the degree of ventricular dilatation 18, 52. Shephard et al. have reported that all tuberous sclerosis patients with mental retardation had some type of seizure, usually generalized and very often infantile spasms, and that none of the patients who never had seizures were mentally disabled. Seizure-free tuberous sclerosis patients had a larger number of cortical tubers than the patients with partial seizures.
Ocular findings are common and present in approximately 50% of patients with tuberous sclerosis 13, 53. The most common of these is retinal hamartoma, an astrocytic proliferation without necrosis, hemorrhage, or dissemination 13, 54, 55. Retinal hamartomas are usually present in both eyes and are often multiple 13, 54. They may present in children as leukocoria. They must be differentiated from non-neoplastic causes of leukocoria as well as from neoplastic masses, notably retinoblastoma . When retinal hamartomas calcify, they can be seen on CT images as small calcifications in the region of the optic nerve head . When they do not calcify, they are difficult to detect on either CT or MRI, even after contrast administration.
There have been several case reports of patients with tuberous sclerosis associated with intracranial aneurysms 56, 57, 19. Although these intracranial aneurysms may be merely a coincidental finding, these reports suggest some association of intracranial aneurysms with tuberous sclerosis.
Conclusion
This article presents CT and MR findings of cortical tubers, white matter lesions, subependymal tubers, and subependymal giant cell tumors. New findings on MR images are particularly emphasized. Many cortical tubers expand cerebral gyri, and demonstrate thickening of the cortex and blurring of the gray and white matter junction. The inner core of cortical tubers is hypointense to nearly isointense to the gray matter on T1-weighted images, and nearly isointense to hyperintense on T2-weighted
References (57)
A.E. Fryer et al.
Connor, et al. Evidence that the gene for tuberous sclerosis is on chromosome 9
Lancet (1987)
Harding BN. Malformations of the central nervous system. In: Adams JH, Duchen LW, editors. Greenfield's Neuropathology....
Gold AP. Tuberous sclerosis. In: Rowland LP, editor. Merritt's Textbook of Neurology. ninth ed. Baltimore, MD: Williams...
Adams RD, Victor M. Tuberous sclerosis. In: Principles of Neurology. fifth ed. New York: McGraw–Hill,...
J.G. Smirniotopoulos et al.
The phakomatoses
AJNR Am J Neuroradiol (1992)
R.E. Scully et al.
Case records of the Massachusetts General Hospital: case 41-1986
N Engl J Med (1986)
D.W. Webb et al.
On the incidence of fits and mental retardation in tuberous sclerosis
J Med Genet (1991)
G. Ahlsen et al.
Tuberous sclerosis in Western Sweden. A population study of cases with early childhood onset
Arch Neurol (1994)
M. van Slegtenhorst et al.
Identification of the tuberous sclerosis gene TSC1 on chromosome 9q34
Cancer (1997)
R.S. Kandt et al.
Linkage of an important gene locus for tuberous sclerosis to a chromosome 16 marker for polycystic kidney disease
Nature Genet (1992)
E.S. Roach et al.
Diagnostic criteria: tuberous sclerosis complex. Report of the Diagnostic Criteria Committee of the National Tuberous Sclerosis Association
J Child Neurol (1992)
B.L. Bender et al.
Central nervous system pathology of tuberous sclerosis in children
Ultrastruct Pathol (1980)
Donegani G, Grattarola FR, Wildi E. Tuberous sclerosis. Bourneville disease. In: Vinken PJ, Bruyn GW, editors. Handbook...
N.R. Altman et al.
Tuberous sclerosis: characteristics at CT and MR imaging
Radiology (1988)
B.H. Braffman et al.
MR imaging of tuberous sclerosis: pathogenesis of this phakomatosis, use of gadopentetate dimeglumine, and literature review
Radiology (1992)
S.K. McMurdo et al.
MR imaging of intracranial tuberous sclerosis
Am J Roentgenol (1987)
J.R. Nixon et al.
Cerebral tuberous sclerosis: MR imaging
Radiology (1989)
Y. Inoue et al.
Magnetic resonance images of tuberous sclerosis
Further observation and clinical correlations. Neuroradiology (1988)
N. Martin et al.
MRI evaluation of tuberous sclerosis
Neuroradiology (1987)
J.V. Hajnal et al.
Use of fluid attenuated inversion recovery (FLAIR) pulse sequences in MRI of the brain
J Comput Assist Tomogr (1992)
J. Takanashi et al.
MR evaluation of tuberous sclerosis: increased sensitivity with fluid-attenuated inversion recovery and relation to severity of seizures and mental retardation
AJNR Am J Neuroradiol (1995)
T. Kato et al.
Improved detection of cortical and subcortical tubers in tuberous sclerosis by fluid-attenuated inversion recovery MRI
Neuroradiology (1997)
M.G. Jeong et al.
Application of magnetization transfer imaging for intracranial lesions of tuberous sclerosis
J Comput Assist Tomogr (1997)
A.J. Barkovich et al.
Normal maturation of the neonatal and infant brain: MR imaging at 1.5 T
Radiology (1988)
C. Christophe et al.
Neonatal tuberous sclerosis. US, CT, and MR diagnosis of brain and cardiac lesions
Pediatric Radiol (1989)
Barkovich AJ. Tuberous sclerosis. In: Pediatric Neuroimaging. second ed. New York: Raven Press,...
R. Kuzniecky et al.
Cortical dysplasia in temporal lobe epilepsy: magnetic resonance imaging correlations
Ann Neurol (1991)
A. Yagishita et al.
Focal cortical dysplasia: appearance on MR images
Radiology (1997)
View more references
Copyright © 1998 Elsevier Science B.V. All rights reserved.
2020 AMC 12A Problems/Problem 17 - AoPS Wiki
2020 AMC 12A Problems/Problem 17
Contents
1 Problem
2 Solution 1
3 Solution 2
4 Solution 3
5 Video Solution by TheBeautyofMath
6 See Also
Problem
The vertices of a quadrilateral lie on the graph of , and the -coordinates of these vertices are consecutive positive integers. The area of the quadrilateral is . What is the -coordinate of the leftmost vertex?
Solution 1
Let the coordinates of the quadrilateral be . We have, by the Shoelace Theorem, that the area is We know that the numerator must have a factor of , so given the answer choices, is either or . If , the expression does not evaluate to , but if , the expression evaluates to . Hence, our answer is .
Solution 2
As above, use the shoelace formula to find that the area of the quadrilateral is equal to . Because the final area we are looking for is , the numerator factors into and , where one of and has to be a multiple of and the other has to be a multiple of . Clearly, the only choice for that is
~Solution by IronicNinja
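Both solutions lean on the Shoelace Theorem. As an illustration, here is a minimal Python implementation; the sample vertices below are generic stand-ins, since the problem's actual formulas are elided in this copy of the page:

```python
def shoelace_area(points):
    """Area of a simple polygon via the Shoelace Theorem.

    points: list of (x, y) vertices in traversal order (clockwise or
    counterclockwise); the absolute value makes orientation irrelevant.
    """
    n = len(points)
    total = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to the first vertex
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

# Unit square and a 3-4-5-style right triangle as sanity checks
print(shoelace_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1.0
print(shoelace_area([(0, 0), (4, 0), (0, 3)]))          # 6.0
```

For the contest problem one would plug in the four vertices on the given curve at consecutive integer x-coordinates and simplify the resulting expression by hand, as both solutions do.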
Solution 3
Since is a concave function, then:
Therefore , all such quadrilaterals are trapezoids
~Solution by AsdrúbalBeltrán
Video Solution by TheBeautyofMath
Another example of shoelace theorem included earlier in the video
~IceMatrix
See Also
2020 AMC 12A (Problems • Answer Key • Resources)
Preceded by
Problem 16Followed by
Problem 18
1•2•3•4•5•6•7•8•9•10•11•12•13•14•15•16•17•18•19•20•21•22•23•24•25
All AMC 12 Problems and Solutions
These problems are copyrighted © by the Mathematical Association of America, as part of the American Mathematics Competitions.
Geometry of a Tetrahedron: Question 2: Volume of a Tetrahedron in Vector Coordinate Form
Math Easy Solutions
Description
Posted: 14 Mar 2023
In this video I go over Question 2 of the Discovery Project: Geometry of a Tetrahedron. This question involves determining the volume of a tetrahedron, but in vector coordinate form. The formula for the volume of a tetrahedron is V = 1/3·A·h. To put this formula in vector coordinate form, we can determine the area A of the triangular base as half the magnitude of a cross product. For the height h, we first have to determine the equation of the plane that contains the base A and then calculate the distance from the opposite vertex to that plane. After going over the equation of a plane and the formula for the distance from a point to a plane, we can thus calculate the volume of a tetrahedron for any set of coordinates. I also graph the tetrahedron using the amazing GeoGebra 3D graphing calculator, which computes its volume as well and which I use as a double-check.
The timestamps of key parts of the video are listed below:
Question 2: 0:00
Solution to Part (a): 1:11
The Equation of a Plane: 6:21
The Distance from a Point to a Plane: 10:33
Solution to Part (b): 17:31
Graphing 3D Tetrahedron in GeoGebra: 29:06
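The procedure outlined in the description — base area as half the magnitude of a cross product, height as the distance from the opposite vertex to the base plane, then V = (1/3)·A·h — can be sketched in plain Python. The coordinates are those of Part (b); the function names are my own, not the video's notation:

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def tetra_volume(P, Q, R, S):
    n = cross(sub(Q, P), sub(R, P))          # normal to the base plane PQR
    norm_n = dot(n, n) ** 0.5
    area = norm_n / 2.0                       # A = |PQ x PR| / 2
    h = abs(dot(n, sub(S, P))) / norm_n       # distance from S to that plane
    return area * h / 3.0                     # V = (1/3) * A * h

# Part (b): P(1,1,1), Q(1,2,3), R(1,1,2), S(3,-1,2)
print(tetra_volume((1, 1, 1), (1, 2, 3), (1, 1, 2), (3, -1, 2)))  # 1/3
```

Note that |n| cancels between the area and the height, so the whole computation collapses to the scalar triple product formula V = |(PQ × PR) · PS| / 6, which gives the same 1/3 here.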
This video was taken from my earlier video listed below:
Discovery Project: The Geometry of a Tetrahedron:
Video notes:
Playlist: .
Transcript:
Question 2 all right and now let's go over to question two and uh just a note on question two I actually uh finished filming the video and then I realized that uh the actual calculus book had a mistake in it and then I thought it might as well uh redo this and add more clarity even though the final answer is still the same so question two uh the volume v of a tetrahedron is one-third the distance from a Vertex to the opposite face and here's the correction that means correction this is actually the height of the of the tetrahedron so it's not actually the distance from a Vertex to opposite face but it's the height of that tetrahedron from this uh opposite face and I'll uh illustrate that in a bit so times that the area of that face and again I will look to provide a proof in a later video and now look at a question a question a States a fine or part A of question two states find a formula for the volume of a tetrahedron in terms of the coordinates of its vertices P Q R and S and then Part B States find the volume of a tetrahedron whose vertices are p111 and then has Q is one two three then R is one one two and then s is three negative one and two so let's take a look at Solution to Part (a) solution to part A so after after finishing the filming of this video I realized that the formula for a tetrahedron as stated in question two is actually wrong so that's not actually the formula the actual formula in here added a link to the Wikipedia article involves the height of the tetrahedron and not the distance from vertex to opposite face and this is evident from a skewed tetrahedron and here I'll draw it out so let's say we had a tetrahedron like this something that's uh not symmetric like this and uh yeah so let's say that there and then inside there's the Point like this so there's the inside Triangle and goes there's the three triangles around like that and let's say there's a area here in this base there and so the opposite vertices is somewhere here so the distance 
from this face to that is going to be well something like that but that's not actually what we want the actual formula is involves the height h like that and the volume of the tetrahedron is equal to V 1 3 the height times by that area so this is the volume yeah so it's equal to uh yes one-third the distance is actually it's actually the height so one third the height times the area of that phase so from this area up to there so that's the actual formula all right and going further I think what the discovery project actually meant was the distance to the plane that contains the opposite triangle face which is the same thing as stating the height of the tetrade so this distance right here is the same as saying that the distance from this point to the opposite face or to the plane that consists of this opposite face plane like that um just have it in 3D like that and this this triangle is inside this plane so the distance from that plane that contains a triangle uh to that point I think that is what they were considering let's erase all of this all right so yes going further so thus we can write the volume of a tetrahedron in coordinate form as follows and uh first what I'm going to do is I'm going to actually draw that skewed uh tetration against it so it's more uh clear um so that yeah we're not dealing with yeah so we're dealing with a height so let's just draw this out I'm going to draw it like this and there is our triangle and then inside as a point say this point like this and I'm going to call the point inside here I'm going to call it P actually instead of the S I'm going to say this point s and it has coordinates S1 S2 S3 and then let's say and then the height here or the distance of the plane that contains this this face right there this is going to be our h like that and then this point right here is let's say this is our r says R1 R2 R3 and then this point right here is going to be let's call this q and this coordinates are q1 lowercase uh q1 yeah q1 Q2 
Etc., and Q3. At this point right here we'll call this point P, and its coordinates are (P1, P2, P3). Now that we have this set up, we can write the volume as V equals one third h times the area of the opposite face, which is the area of this triangle PQR. What does this equal? Let's write the height generically as the distance, call it d, from the point S = (S1, S2, S3) up to the plane that contains PQR. And the area of the triangle we already found earlier: it's one half the length of the cross product, (1/2)|PQ × PR|. So:

V = (1/3) · d(S, plane PQR) · (1/2)|PQ × PR|

I'm going to box this in. The solutions I could find online just stop at this point, but I'm going to keep going and get exactly what the question asks for, which is the formula in coordinate form, because this version just says "the distance" and we don't have a formula for that distance yet.

The Equation of a Plane. We need the distance from the point S to the plane that contains the triangle PQR, so now let's look at the equation of a plane. Note that it can be derived from the fact that the dot product is equal to zero for perpendicular vectors; I haven't covered the equation of a plane before, so it's a good time to start. Draw a plane, basically a rectangle in 3D, with the x, y and z axes and the origin 0; the plane can extend to infinity if it's not bounded. Say we have a fixed point (x0, y0, z0) on the plane, and a general point (x, y, z) anywhere on it. Now take a vector perpendicular, or normal, to the plane; call it n, with components (A, B, C). The vector lying in the plane from (x0, y0, z0) to (x, y, z) is found by subtracting the position vectors: (x − x0, y − y0, z − z0). Since n is perpendicular to every vector in the plane, their dot product is zero. Recall from our dot product video that we multiply component by component and add, so:

A(x − x0) + B(y − y0) + C(z − z0) = 0

You can stop right here, or continue and simplify, which is what I'll do. Expanding gives Ax + By + Cz − (Ax0 + By0 + Cz0) = 0. Calling that last bracket lowercase d, we get the form that's usually given:

Ax + By + Cz − d = 0

The Distance from a Point to a Plane. The next step is the distance from a point to a plane, because remember we need the distance from the point S to the plane PQR. The formula, which I'll prove in a later video since it's a bit out of scope right now, goes like this. Say we have a plane with equation Ax + By + Cz − d = 0, and a point with coordinates (x1, y1, z1). The distance D from the point to the plane is measured along the perpendicular, since that's the closest distance, and it's given by:

D = |A·x1 + B·y1 + C·z1 − d| / √(A² + B² + C²)

Box that in. Now, putting this all together, the volume of a tetrahedron in coordinate form is V equals one third the distance from the point S = (S1, S2, S3) to the plane PQR, times the area of the triangle:

V = (1/3) · [ |a·S1 + b·S2 + c·S3 − d| / √(a² + b² + c²) ] · (1/2)|PQ × PR|

where (a, b, c) is a vector perpendicular to the plane — and a vector normal to the plane PQR is exactly the cross product, so PQ × PR = (a, b, c) — and where d = a·P1 + b·P2 + c·P3, the plane equation evaluated at the point P, since we replace (x0, y0, z0) with the coordinates of P. To illustrate, draw the tetrahedron with vertices S, P, Q and R; the normal n = PQ × PR points straight out of the face PQR at the point P. This is more detail than the (unofficial) solutions I was able to find online, and it's in-depth enough that you could even throw it into a computer program and solve it.

Solution to Part (b). Question 2 part (b) says: find the volume of the tetrahedron whose vertices are P(1, 1, 1), Q(1, 2, 3), R(1, 1, 2) and S(3, −1, 2). The first step is to derive the position vectors PQ and PR and then compute their cross product and the cross product's length. The position vector PQ is just the difference Q − P:

PQ = ⟨1 − 1, 2 − 1, 3 − 1⟩ = ⟨0, 1, 2⟩

Likewise:

PR = ⟨1 − 1, 1 − 1, 2 − 1⟩ = ⟨0, 0, 1⟩

Now let's solve the cross product, writing it in determinant form with i, j, k in the first row, 0 1 2 in the second, and 0 0 1 in the third. Crossing out rows and columns as always:

PQ × PR = (1·1 − 2·0) i − (0·1 − 2·0) j + (0·0 − 1·0) k = i = ⟨1, 0, 0⟩

All the other components vanish; all we get is the i component. In other words this cross product is exactly our (a, b, c): a = 1, and b and c are both zero. The area is one half the cross product length:

Area(PQR) = (1/2)|PQ × PR| = (1/2)√(1² + 0² + 0²) = 1/2

The next step is to find the distance from the point S, with coordinates (3, −1, 2) — our (S1, S2, S3) — to the plane PQR. We know the normal vector of the plane is n = PQ × PR = ⟨1, 0, 0⟩ = ⟨a, b, c⟩, so a = 1 and b = c = 0. For d we use the point P = (P1, P2, P3) = (1, 1, 1):

d = a·P1 + b·P2 + c·P3 = 1·1 + 0 + 0 = 1

So the distance is:

D = |a·S1 + b·S2 + c·S3 − d| / √(a² + b² + c²) = |1·3 + 0 + 0 − 1| / √(1²) = 2

So the distance from the point to the plane is 2. We can double-check this using the equation of the plane: Ax + By + Cz − d = 0 becomes 1·x − 1 = 0, in other words x = 1 after shifting the 1 to the other side. So this is the plane x = 1 in three dimensions. Let's graph that out: the plane x = 1 is perpendicular to the x-axis, and our point S = (3, −1, 2) sits at x = 3. The perpendicular distance from S to the plane runs straight along the x direction, so it's just 3 − 1 = 2. We have double-checked it; that's exactly right.

Now the next step is to put it all together. The volume of the tetrahedron is:

V = (1/3) · D(S, plane PQR) · Area(PQR) = (1/3) · 2 · (1/2) = 1/3

The 2 and the 1/2 cancel, so V = 1/3. Epic stuff.

Graphing the 3D Tetrahedron in GeoGebra. As a further double check we can graph the tetrahedron and calculate its volume using GeoGebra, which is an amazing 3D graphing calculator. Here's what I've graphed: the points P, Q, R and S, and the pyramid PQRS — a triangular pyramid, in other words a tetrahedron. GeoGebra calculates the volume as 0.33, in other words 1/3, so we got that right. When you rotate it around you can see the face PQR lies perfectly in line with the plane x = 1, and the perpendicular distance, or height, from that plane to the point S is exactly 3 − 1 = 2. Here's a link where you can play around with it — you can drag the points and watch the volume change automatically, which is quite amazing. Epic, epic stuff.
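The coordinate-form recipe worked through above can be sketched in a short program. This is my own Python sketch, not from the video; the helper names (`sub`, `dot`, `cross`) are mine, and it reproduces the Part (b) numbers.

```python
# Sketch (not from the video): volume of the tetrahedron P(1,1,1), Q(1,2,3),
# R(1,1,2), S(3,-1,2) via cross product, plane equation, and point-plane distance.
import math

def sub(u, v):
    """Component-wise difference u - v."""
    return [a - b for a, b in zip(u, v)]

def dot(u, v):
    """Dot product of u and v."""
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    """Cross product of 3D vectors u and v."""
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

P, Q, R, S = [1, 1, 1], [1, 2, 3], [1, 1, 2], [3, -1, 2]
PQ, PR = sub(Q, P), sub(R, P)

n = cross(PQ, PR)                    # normal (a, b, c) of the plane PQR
area = 0.5 * math.sqrt(dot(n, n))    # area of triangle PQR = (1/2)|PQ x PR|
d = dot(n, P)                        # plane: a*x + b*y + c*z - d = 0
dist = abs(dot(n, S) - d) / math.sqrt(dot(n, n))  # distance from S to the plane
V = dist * area / 3                  # V = (1/3) * height * base area
```

With these points the normal comes out to ⟨1, 0, 0⟩, the distance to 2, and the volume to 1/3, matching the GeoGebra check.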
187849 | https://www.reddit.com/r/learnmath/comments/15pceug/multiplying_exponents_with_different_bases_and/ | multiplying exponents with different bases and powers - help : r/learnmath
r/learnmath
Post all of your math-learning resources here. Questions, no matter how basic, will be answered (to the best ability of the online subscribers).
•2 yr. ago
[deleted]
multiplying exponents with different bases and powers - help
I’ve been watching YouTube videos about this, but I don’t find them very clear and comprehensive. I understand you have to make the base or the power the same, I just don’t know how to go about that. can someone please show me how that works? and how do you choose whether you make the base or exponent the same? I will be taking a quantitative reasoning class in college in the next coming weeks. I want to go back to concepts I never really understood, so I can address them and actually have a chance of succeeding in my class. I would appreciate a demonstration or any help. thank you!
187850 | https://www.themathdoctors.org/order-of-operations-common-misunderstandings/ | Order of Operations: Common Misunderstandings – The Math Doctors
Order of Operations: Common Misunderstandings
September 19, 2019 December 22, 2023 / Algebra / Ambiguity, Mistakes, PEMDAS / By Dave Peterson
Last time I started a series looking at the Order of Operations from various perspectives. This time I want to consider several kinds of misunderstandings we often see.
Multiplication before division?
Here is a question from 2005 from a teacher, “WRW”:
Confusion over Interpretation of PEMDAS
In telling students to "do multiplication and division IN THE ORDER THEY APPEAR," it seems they want to always do multiplication first. I think they follow the PEMDAS rule BY THE LETTER, so they want to multiply before dividing.
When doing multiplication first, 8 / 2 * 4 = 8 / 8 = 1
When doing multiplication and division from left to right, 8 / 2 * 4 = 4 * 4 = 16
I agreed with him:
If you think that students have a tendency to misinterpret the rule, you're probably right; but I think the reason is that PEMDAS is a poorly stated version of the rule, and it is easy to misunderstand it as meaning you do Multiplication, then Division, then Addition, then Subtraction. That's not what the rule is supposed to mean, but many students don't get past the letters and see the meaning!
It's really wiser to think of subtraction as addition of the opposite, and division as multiplication by the reciprocal, and just leave D and S out of PEMDAS entirely, rather than try to fit them into the rules. But we make the rules for people who aren't ready to see things in a mathematically mature way! (I myself prefer to avoid PEMDAS altogether, and teach the "rules" in a more natural way that leads into this mature perspective.)
Where some people memorize the rule as “Please Excuse My Dear Aunt Sally”, if I use a mnemonic at all, I make it PEMA: “Please Excuse My Attitude”. It’s just Exponent-stuff, Multiplication-stuff, and Addition-stuff, with Parentheses acting as traffic cop, telling you when to do something other than what the signs say. I’ll include my own way of introducing the concept in a later post on why we need the rules.
But, continuing with an example:
Translating these ideas into the case of multiplication and division, when we write
8 / 2 * 4
we really mean
8 * 1/2 * 4
which we can do in any order, since multiplication is commutative; clearly, however you do it, it comes out to 16, not 1. The problem here is that people tend to see this as if it said
    8
  -----
  2 * 4

which means something different.
Note particularly that if we did the multiplication first in my example, then instead of
8 × (1/2) × 4 = 16,
we would be doing
8 × 1/(2 × 4) = 1.
Interpreting it correctly, the only number we divide by is the 2.
Also, seeing it this way allows me to rearrange the expression at will (since multiplication, unlike division, is associative and commutative). If I had, for example,
7÷3×15,
I could think of it as
7 × (1/3) × 15 = 7 × 15 × (1/3) = 7 × (15 ÷ 3) = 7 × 5 = 35
In effect, I’m dragging the division sign around with the number following it!
“WRW” answered,
Thanks so much! I really like the idea of thinking of division as multiplying by the reciprocal and turning the whole multiplication/division portion into just multiplication. I'll try that out with my students and see if it helps. Thanks again!
I didn’t mention there the fact that students outside of America are taught mnemonics like BODMAS, and students there sometimes think Division has to be done before Multiplication. Get them together, and you might have quite an argument!
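Both readings, and the "reciprocal" trick above, can be checked in exact arithmetic. This is my own sketch, not part of the original exchange; it uses Python's `fractions.Fraction` so no rounding enters.

```python
# Sketch (not from the article): the two readings of 8 / 2 * 4,
# and the reciprocal trick for 7 / 3 * 15, in exact arithmetic.
from fractions import Fraction

left_to_right = Fraction(8, 2) * 4      # (8 / 2) * 4 -- the convention
multiply_first = Fraction(8, 2 * 4)     # 8 / (2 * 4) -- the misreading

# Division as multiplication by the reciprocal: only the 2 is divided by,
# so the factors can be rearranged freely (multiplication commutes).
a = Fraction(7) * Fraction(1, 3) * 15   # 7 * (1/3) * 15
b = Fraction(7) * 15 * Fraction(1, 3)   # 7 * 15 * (1/3)
```

The left-to-right reading gives 16 and the "multiplication first" reading gives 1, while both orderings of the reciprocal version agree on 35.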
Addition before subtraction?
Here’s a question from another teacher, Monica, the next year:
Incorrect Application of PEMDAS and Order of Operations
I was working with students on the order of operations today and explained that multiplication and division are done from left to right, as are addition and subtraction. Apparently, they believe they were taught in the past to do all addition and then all subtraction. I tried to show examples of why that wouldn't work, but they simply did the problem their way, obtained a different answer and asked why it was incorrect.
Are there any examples or explanations that would clearly explain why they must be done from left to right?
The same reasoning I gave for multiplication applies here, as I explained:
It's impossible to show that they MUST be done from left to right; that is nothing more than a convention we all agree on. Your class has shown that it makes a difference which order you use; that proves that we MUST make some choice that we can all follow. What that choice is, is not so definite. But it makes a lot of sense to go left to right, for the following reason.
You can’t prove that a particular grammar is “correct”, as if nature forced us to use it; every language has a different grammar, and each is correct for its own speakers. What makes a grammar correct is only that it is the same grammar used by other speakers of the language. So you’d have to prove that addition and subtraction are done left to right by showing that all the books do it that way.
But we can see why it was a good idea. It’s the same thing I said about multiplication and division:
We define subtraction this way:
a - b = a + -b
This allows us to think of any subtraction as an addition; we essentially just attach the negative sign to the number following it, rather than taking it as a different operation. The subtraction requires no extra rules, just the rules we already have for addition.
If we do this, then
2 - 3 + 4 = 2 + -3 + 4 = 3
That is the same result we get if we do the operations from left to right (and it doesn't depend on whether we do the ADDITIONS from left to right, since addition is commutative!). If we did the addition first, we would get
2 - 3 + 4 = 2 - (3 + 4) = 2 + -(3 + 4) = 2 + -7 = -5
Note that this time, the negative sign ended up applying to ALL the following numbers, rather than just to the one after it.
So doing additions first would mean we are really subtracting everything after a subtraction sign. One benefit of replacing subtraction with addition of a negative, as in the multiplication case above, is to be able to move things around. For example, a common trick to evaluate a string of additions and subtractions more easily is to move all the subtractions to the end:
5 + 3 − 6 + 2 − 8 + 1 = 5 + 3 + 2 + 1 − 6 − 8 = (5 + 3 + 2 + 1) − (6 + 8) = 11 − 14 = −3.
Without left-to-right operations, we couldn’t do this; we would have to look at the whole expression before rewriting any one subtraction as addition of the negative.

So doing additions and subtractions from left to right makes it easier to transform an expression into one involving only addition; and since addition is commutative and associative, it is MUCH nicer to work with!
The rule, therefore, arises from the wish to make expressions easier to handle. Without it, a lot of algebra would turn out to be a lot harder. So your students should thank whoever first made this choice!
I closed by referring to the MD question above:
Now, your student's misunderstanding of the rule very likely comes from the use of PEMDAS or something equivalent, which is meant to be only a summary of the rules. It sounds as if A comes before S, but that twists the intended meaning of the mnemonic. See this page for another thought:
Confusion over Interpretation of PEMDAS
That says essentially the same thing I just said, but about multiplication and division, which is an even bigger problem. (Did you know that in other countries they use BODMAS instead of PEMDAS, so students often think division should be done first?)
For another interesting take on left-to-right operations, see
Left Associativity
Where do negatives fit in?
One of the most common difficulties in evaluating expressions is the mixing of negation with exponents. We have had many questions on this; in fact, I could have included this in the series Frequently Questioned Answers. I’ve chosen to use this question from 2002, whose answer covers most of the ideas we bring up (and refers to several other answers):
Negative Squared, or Squared Negative?
After reading your answer in
Exponents and Negative numbers
it seems to me that you're ignoring an important fact: -3 isn't just -1 * 3, but a number in its own right, i.e., the number 3 units to the left of zero. If that's the case, then shouldn't -3^2 have the value -3 * -3, or 9?
If -3 was intended to mean -1 * 3, then shouldn't it be written that way and not implied?
Thank you for your time.
The answer he referred to is an early one from 1997 where Doctor Ken stated that -3^2 = -9, because it means -(3^2), not (-3)^2. Now, if the convention is that negation is done after exponentiation, then that’s all we need to say. But Tom is arguing from the fact that −3 is itself a number, so it has to be kept together. Does that require us to do the negation first? He has a strong argument (and a common one).
I took the question:
I do recognize that it is possible to disagree on -3^2. Dr. Rick's answer to a similar question,
Squaring Negative Numbers
mentions this disagreement.
Like you, he notes that if you think of -3 as a single number, it makes sense for the negation to bind more tightly to the 3 than any operation. That reasoning makes some sense, though I think other arguments are stronger. But I do agree that since there is some reason to read it either way, it is prudent always to include parentheses one way or the other, to clarify your intent, i.e., to write either -(3^2) or (-3)^2.
Minimally, we can say that it is wisest to avoid this form, either because it is easy to read wrongly, or because all the books teach it wrong (depending on your opinion)! In fact, I find that the books I most respect never show such a form (making it hard to find examples to point to!), while others, on the contrary, emphasize it because students always get it wrong unless it is drilled into them.
Occasionally people will try to argue the point based on the behaviors of particular calculators or spreadsheet programs. However, these are really irrelevant, since they all define their own input formats, and programmers (of which I am one) are notorious for choosing what's easiest for them, rather than what is most appropriate for the user.
I've noted in several answers in our archives that some calculators, and Excel, use non-standard orders of operation without apology. But calculators in particular just don't use standard algebraic notation in the first place.
I’ll be including a link to one of these discussions at the bottom. But the main point is simply that calculators have to follow a convention that suits the way you enter expressions on them, which is different from print. (As calculators have come to display expressions more like type, however, they have been forced to follow conventions more closely.)
We also get questions from people who claim they learned long ago something different from what their children or grandchildren are learning (either the whole PEMDAS business, or some part of it like this one):
There also seems to be a generational difference, with older people (including some teachers) claiming that they were taught to interpret -3^2 as (-3)^2.
I suspect that what has changed is not the rules governing "order of operation" (operation precedence), but that schools are introducing the issue earlier, before students get into algebra proper. That means that they start by looking at expressions for which it is less clear why the rules make sense. I think you will rarely find examples of "-3^2" in practice, because there is no need for mathematicians to write it. You will find "-x^2" frequently.
Conventions of algebra apply primarily where we have variables; in arithmetic you don’t normally write long expressions with numbers, but just evaluate as you go. Basic four-function calculators were designed for such use, and don’t follow the order of operations. And the conventions make more sense on their home turf (with variables than with numbers):
If you approach the idea starting with numerical expressions like -3^2, you are thinking of -3 as a number and assuming that the expression says to square it. If you approach it first using variables, having first discovered that "-" in a negative number is actually an operator, then it is easier to see why -x^2 should be taken as the negative of the square. So I'll start with the latter, and then it becomes natural to treat numbers the same way we treat variables.
The point here is that in arithmetic, we see the negative sign as part of the number (“the number I’m attached to is negative”); in algebra, we see it primarily as an operator (“negate the thing that follows”). And the convention arises in the latter context. It isn’t primarily meant for use with numbers, but with variables.
Negation as multiplication or addition
Now, in an expression like -x, clearly "-" is a (unary) operator, which takes a value "x" and converts it to its opposite, or negative. The expression "-x" is not just a single symbol, but a statement that something is to be done to a value. As soon as we start combining symbols like this, as in -x^2 or -xy, we have to decide what order to use in evaluating them.
The trouble is that the "order of operations" rules as commonly taught (PEMDAS) don't mention negatives. So if we are going to go by the rules, we have to figure out how a negative relates to them. Well, there are two ways to express a negative in terms of binary operations.
There is no N in PEMDAS, or even in many fuller explanations of the convention. To see where it fits, we need to think about how its meaning relates to the other operations. How do mathematicians think of negation?
One is as multiplication by -1:
-x = -1 x
Treating it this way, clearly
-x^2 = -1 x^2 = -(x^2)
That is, since -x means a product, we have to do the exponentiation first.
So if we think of negation as a kind of multiplication, it belongs right in there with MD. And, if you think about it, you’ll realize it doesn’t matter whether we think of it as being done first, last, or left-to-right: If you negate a factor before multiplying or dividing, you get the same answer as if you negate after multiplying or dividing:
(−x)⋅y=−(x⋅y)
The other way to talk about negation is as the additive inverse, subtracting x from 0:
-x = 0 - x
(This is why the "-" sign is used for both negation and subtraction.) Using this view, we see that
-x^2 = 0 - x^2 = -(x^2)
In particular, we would like to be able to replace subtraction with negation wherever we find it, and not mess things up: x − y^2 = x + (−y^2), which would not be true if the latter meant x + (−y)^2. This is why the subtraction idea is applicable even when we are not actually subtracting from 0.
So both views of negation produce the same interpretation, which does exponents first, and it is logical to put negation here in the order of precedence.
So if PEMDAS really means PE(MD)(AS), with operations in parentheses being done together, we can extend it as PE(NMD)(AS), where negation is definitely done after exponents and before addition.
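As it happens, Python’s parser follows this same convention (unary minus is applied after exponentiation), so it can serve as a quick sanity check on the reasoning above; a minimal sketch:

```python
# Python's ** binds tighter than unary minus, matching the convention
# PE(NMD)(AS): negation comes after exponentiation.
assert -3**2 == -9        # read as -(3**2)
assert (-3)**2 == 9       # parentheses force squaring the negative

# Both views of negation agree with this reading:
x = 5
assert -x**2 == -1 * x**2      # negation as multiplication by -1
assert -x**2 == 0 - x**2       # negation as subtraction from 0
assert -x**2 == -(x**2)
```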
But the fact is that there is no authority decreeing these rules; just as in the grammar of English, we get the "rules" by observing how the language is actually used, not by deducing them from some first principles. The order of operations is just the grammar of algebra. So the real question is, how do mathematicians really interpret negatives and exponents combined in an expression?
If you look in books, you will rarely find "-3^2" written out, but you will often find polynomials with negative coefficients. And you will find that
-x^2 + 3x - 2
is read as the negative of the square of x, plus three times x, minus 2.
So, even though there is not a lot of evidence of usage with numbers, usage in polynomials (with variables) is clearly on the side of negation-after-exponentiation, and we want to be consistent.
I have come to believe that the order of operations is what it is largely so that polynomials can be written efficiently. If "-x^2" meant the square of -x, then we would have to write this as
-(x^2) + 3x - 2
to make it mean what we intend. Since powers are the core of a polynomial, we ensure that powers are evaluated first, followed by products and negatives (the two ways to write a coefficient) and then sums (adding the terms).
Since we can easily see that this is how -x^2 is universally interpreted, it makes sense to treat -3^2 the same way.
Addition comes last so that a polynomial is a sum of terms; negation goes with multiplication in order to have the same base in every term, without having to use parentheses to avoid accidentally changing a base to −x.
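The polynomial-writing rationale can be seen concretely; a small check in Python, whose parser reads `-x**2` as `-(x**2)` (the polynomial and test values here are just for illustration):

```python
def p(x):
    # Written exactly as the polynomial reads: powers first, then the
    # coefficients (products and negation), then the sum of terms.
    return -x**2 + 3*x - 2

assert p(0) == -2
assert p(1) == 0      # -(1) + 3 - 2
assert p(2) == 0      # -(4) + 6 - 2

# If -x**2 instead meant (-x)**2, p(1) would come out as 1 + 3 - 2 = 2:
assert (-1)**2 + 3*1 - 2 == 2
```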
For some other discussions of this issue that haven’t already been mentioned, see
Precedence of Unary Operators
Negative Numbers Combined with Exponentials
Negative vs. Subtraction in Order of Operations
And for a long discussion with a programmer, see
Order of Operations and Negation in Excel
That is the discussion I referred to above, about how calculators and programs have different needs, as well as whether -3 should be thought of as a unitary entity.
26 thoughts on “Order of Operations: Common Misunderstandings”
Amanda November 3, 2021 at 7:20 pm
My mathematically minded friend told me that when there are no brackets you complete the equation left to right. If there are brackets you use BEDMAS. Is this true?
1. Dave Peterson November 4, 2021 at 10:42 am
No. In fact, “BEDMAS” is mostly about what to do in the absence of brackets (parentheses)! The role of brackets is just to intentionally change the order of evaluation from the default, which is multiplications and divisions before additions and subtractions. And if we used brackets everywhere, we wouldn’t need the “EDMAS” part at all.
See the previous post, Order of Operations: The Basics.
Helge February 3, 2022 at 10:10 am
I am discussing this expression on a Facebook group:
8÷2(2+2) = ?
My answer is 16. My opponent says 1.
She insists that the multiplication must be done first, because it has a special connection with the parentheses.
Who is right?
1. Dave Peterson February 3, 2022 at 10:46 am
Hi, Helge.
You’re both wrong … to be arguing over this. The fact is that different conventions are taught about this sort of expression, so each of you would be right according to some teachers. (For which reason, it’s best never to write such an expression, if you want to be understood.)
For details, see the post Order of Operations: Implicit Multiplication?, and its follow-up, Order of Operations: Historical Caveats.
If you have further questions about this, you should ask us directly, via Ask a Question.
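One reason such expressions cause fewer fights in programming is that most languages refuse implicit multiplication entirely, forcing the writer to choose a grouping. A small Python illustration (not a ruling on the mathematical dispute):

```python
# With an explicit operator, / and * share precedence, left to right:
assert 8 / 2 * (2 + 2) == 16.0

# The other reading must be written with explicit grouping:
assert 8 / (2 * (2 + 2)) == 1.0

# Implicit multiplication doesn't exist: 2(2 + 2) is a call expression,
# and calling the integer 2 raises a TypeError at run time.
try:
    8 / 2(2 + 2)
except TypeError:
    pass
```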
David April 24, 2023 at 1:30 pm
Full disclosure, I’m a PEMDAS fanatic. (Promoting PEMDAS to the level of “1+1=2”, and there can be only one correct answer or answer set.) Amen. I would, without any crisis of faith, accept BODMAS as the one true rule. Processing mixed operations left to right ignores the fact that not all cultures process their written information that way.
Near the beginning of this page, an equation was rewritten by substituting “/ 2” with “× 1/2”, resulting in an equation where the commutative property of multiplication can be used. That substitution requires a huge assumption: the operation 2 × 4 could not be performed before the 8 / 2. [Otherwise, the rewritten equation would be “8 × (1/(2 × 4))”, which essentially makes the commutative property of multiplication a moot point.] The example actually highlights the importance of operator order in rewriting equations for manipulation. An inaccurate substitution condemns any further manipulations to produce errant results.
1. Dave Peterson April 24, 2023 at 2:42 pm
Hi, David.
I’m not quite sure what you mean by “PEMDAS”, and therefore what interpretation you are defending or refuting. You seem to think multiplications and divisions should not be done from left to right; but that is what we mean by PEMDAS, as I explained here, and also in Order of Operations: The Basics. If you are taking PEMDAS to mean “Multiplication, then Division”, why do you also approve of BODMAS, which in the same way would be taken to mean “Division, then Multiplication”? That is not what they mean; they mean what I described here.
This post explains why we choose to define the order of operations in this way. This is a choice; it is not based on any particular culture, or on an assumption that all cultures read left to right.
You’re right that we need to take the meaning of an expression into account when we make a substitution, and not mechanically replace one set of characters with another. That is the cause of several common errors.
But what we did here is not an error. The assumption I am making is that mathematicians do read expressions this way, which you can easily confirm.
Ronald Alfonso Becerra Gil May 14, 2023 at 10:25 am
Sorry to reopen the question about -3^2, but I was arguing about this on YouTube, and many people seem to be convinced that as a standalone expression it should be considered as 9, while others say it must be ambiguous, because there is no universally accepted answer.
The point is that many calculators give 9 as the result, including the one in Excel. It is also the answer in many scientific calculators, but that is because they don’t let an expression start with the binary minus, only with the unary one, which for them has higher precedence than the other operations. Therefore, typing -3^2 ends up meaning 9.
I read your previous post saying that what calculators do shouldn’t be considered the rules of what we do on paper, and also that it is hard to find examples of expressions like -3^2 in math texts, because when we write something like that, it is because we were dealing with variables.
But my question is: since actual appearances of standalone expressions like -3^2 end up occurring mostly when typing in calculators, not in math texts, can we teach that the actual answer must be -9, despite that not being what people will find in “practice”? Or at least not always in practice, as the answer tends to differ: in Google and WolframAlpha, it means -9.
I have also put forward the argument that a standalone expression such as -3^2 can come from a more complex one, and thinking of it as 9 can lead to problems. For example, if we are solving this equation:
5 – 3^2 = 5 + F(x)
no one doubts that the left side means 5 – (3^2).
But we realize we can subtract 5 in both sides, leaving:
-3^2 = F(x)
If we were to interpret -3^2 as (-3)^2 when it appears alone, we would need to add new parentheses here to keep the original meaning, add a factor (-1), or be forced to write a zero:
-(3)^2 = F(x)
(-1) × 3^2 = F(x)
0 – 3^2 = F(x)
because otherwise we would no longer have an equation equivalent to the original. Instead, if our convention is that -3^2 is always interpreted as -(3^2) regardless of context, then we can just remove the redundant term 5 without needing to modify the structure of the rest.
I don’t know if the argument is wrong or some people didn’t understand the implied problem, but they insist that if the expression didn’t reflect a subtraction from the beginning, the minus sign should be part of the number.
1. Dave Peterson May 14, 2023 at 8:33 pm
Hi, Ronald.
I think your arguments in favor of taking -3^2 to mean -9 agree with ours. (I hope you do see that we are saying it is -9, not 9.)
We simply have to recognize that there are many people who don’t see things the way mathematicians do, and therefore we should “write defensively” and avoid that form. There’s no need (for either them or us) to argue about it. They can merely be told that, however irrational they think it is, it is the convention among mathematicians that -3^2 = -9. In exactly the same way, we who know that convention can recognize that, however irrational we know it is that Excel says -3^2 = 9, that is their convention and we have to live with it.
As for calculators, I think (most?) scientific calculators distinguish the “-” (subtract) button from the “(-)” (negate) button, though they could probably use one button, just as computer languages typically use one symbol. My guess is that they do this in part so that they can distinguish between starting an expression with a negation, and continuing a previous result by subtracting from it. But they typically give both subtraction and negation a lower precedence than exponentiation. That has not always been true, and it is not true in Excel (which is stuck with their rule for the sake of backward compatibility). Which calculators do you find give the answer as 9?
I find the link at the end, Order of Operations and Negation in Excel, a particularly interesting read.
1. Ronald Alfonso Becerra Gil May 15, 2023 at 12:38 pm
Hello.
For sure, I was aware that the article intended that it is -9 and not 9.
I was just going to type that my CASIO fx-82MS scientific calculator returned 9 when using negation, and to my surprise, when I tested it, the result was indeed -9. Maybe it was a Mandela Effect? Maybe I took that from someone else and assumed it to be true, since we usually don’t need to write expressions like that, so I wasn’t used to knowing what to expect from the calculator. Despite this, I could not ensure that all versions of a scientific CASIO give the same result.
In the YouTube video in which I was arguing this, someone mentioned that some old math texts interpreted expressions like -3^2 as (-3) × (-3), but didn’t give references. I suspect it was a Mandela Effect too.
I think that finding that the results in software programs are most of the time the same as those agreed upon in mathematics is an additional argument to say that the interpretation to learn should definitely be one and not the other, and Excel should be taught as a separate case to be careful with.
And thanks for sharing that article, which I found interesting! Seriously, before finding that YouTube video, I didn’t suspect there was much debate about it.
1. Dave Peterson May 15, 2023 at 9:20 pm
Yes, I think it is appropriate to explicitly teach that -3^2 = -9 in written mathematics, even though some software or calculators may not interpret it that way, and some humans may not as well. The former fits most usage they will see (including, I think, most current calculators), while the latter is a warning to be careful in reading what others write, and in writing for others.
And, yes, the reason that calculators and the like seem to tend in the direction of agreeing with what teachers say is because … that’s what teachers say! They want to be right, and to be recommended by teachers.
Andrew December 21, 2023 at 11:22 pm
I feel like this is a fundamental problem of our notation using the minus sign for multiple operations.
What do you make of the case of a negative exponent? If we follow the negation-after-exponentiation rule proposed, then a^-b becomes a much different expression than convention dictates. Most would assume a^-b is a^(-b) as written, but if we follow negation-after-exponentiation it would be a^-1b.
So negation in a base should conventionally be after exponentiation, but negation in a exponent should be conventionally before exponentiation. That to me sounds like a poor/ambiguous convention.
1. Dave Peterson December 22, 2023 at 11:34 am
This is a very interesting question, which I don’t think I’ve ever heard before. It raises several issues that are worth addressing.
First, it has to be noted that in traditional formatting, the question you raise is moot, because the entire exponent -b is written as a superscript, which inherently groups it; you have to evaluate the entire exponent before using it. As a result, when this is written in in-line format using the “^” (caret) symbol, you would write it as “a^(-b)” in order to express that grouping. Either way, there is never a question as to which operation is done first.
Second, restricting ourselves to the latter format, I claim that it is unambiguous to write this as a^-b, because there is still only one way to interpret it, regardless of the order of operations. That is because the caret says to use the immediately following number as an exponent, and since “-” itself is not a number, we can only take all of “-b” as the exponent. As a unary operation, “-” acts only on the number on its right, so in this position, it is not “in competition” with exponentiation in the order of operations.
Third, your argument, as I understand it, is based on what I said about “-x^2 = -1 x^2 = -(x^2)”, where I replaced negation with multiplication by -1. But you can’t just replace an expression anywhere with an expression that would be equivalent when standing alone, without first considering the order of operations. (I discussed this idea in this more recent post, under “Substitution requires it”.) To do your substitution, you would properly write, not “a^-1b”, but “a^(-1b)”, because of my second point (that “-b” must be treated as a unit, so that parentheses are appropriate to maintain that). Also, note that if you instead used my “-x^2 = 0 – x^2 = -(x^2)” as a model, you would get “a^-b = a^0 – b = 1 – b”. This further illustrates the error of blind substitution.
(My two statements are not presented as proofs that the negation on the left is done after the exponentiation, but as two experiments to see what makes most sense; it is their agreement that provides support for the choice to do exponents (on the left) before negation. As I just showed, attempting the same experiment with negation of the exponent shows that it leads to inconsistency.)
Fourth, if the rule as stated results in an interpretation different from the very convention that the rule is supposed to state, then clearly we need to change our statement of the rule, not the convention! If you were right that “negation after exponentiation” would entail taking a^-b as (a^-1)b, and we know very well that it is taken as a^(-b), then we would want to change our wording, not its meaning. I think that’s what you are saying.
In summary, I’m not sure whether your point is that we simply shouldn’t use “-” for negation (an idea discussed here), or that the convention has to be stated in your longer form in two parts (which I think unnecessary because of my second point), or something else. I think what we have is perfectly consistent.
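For what it’s worth, Python’s grammar encodes exactly the resolution described here: on the left of `**`, unary minus is applied after the exponentiation, while on the right it can only be part of the exponent. A quick check:

```python
# Left of **: negation is applied after exponentiation.
assert -2**2 == -4           # read as -(2**2)

# Right of **: the unary minus can only belong to the exponent,
# so there is no competition with the order of operations.
assert 2**-2 == 0.25         # read as 2**(-2)
assert 10**-1 * 2 == 0.2     # the exponent is -1; the * 2 comes after
```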
Scott Stocking May 15, 2024 at 10:45 pm
On the negative numbers: One expression I use to try to confound the Facebook OOO crowd is the following:
6 ÷ -(1 + 2)
Is it 6 ÷ (-3), or -6 × 3?
Surprisingly, those who think 6 ÷ 2(1 + 2) = 9 would say that the answer to my problem is -2! I was taught that a negative sign in front of parentheses without a coefficient is the same as multiplying what is inside the parentheses by -1, so I would have guessed they’d say -18. I believe the answer should be -2, however, just like I believe 6 ÷ 2(1 + 2) = 1.
1. Dave Peterson May 20, 2024 at 5:19 pm
Hi, Scott.
I don’t see why anyone would say that 6 ÷ -(1 + 2) = -18. The only way to get that is to blindly replace -(1+2) with -1(1+2), ignoring how it would change the order of operations in the surrounding expression. I discussed such unthinking substitutions (in a very similar context) in Implied Multiplication 3: You Can’t Prove It. But there is no reason anyone would do that here, because it is already clear what to do.
As I said above in response to Andrew, “you can’t just replace an expression anywhere with an expression that would be equivalent when standing alone, without first considering the order of operations.”
By the way, looking at your site, I find numerous places where I would disagree with you; but I see no value in arguing about this or “trying to confound” people who disagree with me. My goal is peace and mutual understanding, not proving myself correct. Acknowledging ambiguity (even merely potential ambiguity) is a central part of that.
Doug Hendrie February 10, 2025 at 2:03 pm
Dave,
Near the top of the page, you stated
“Translating these ideas into the case of multiplication and division, when we write
8 / 2 × 4
we really mean
8 × 1/2 × 4
which we can do in any order, since multiplication is commutative; clearly, however you do it, it comes out to 16, not 1. The problem here is that people tend to see this as if it said
8
—–
2 × 4
which means something different.”
It doesn’t mean something different, and
your reciprocal calculation is wrong. Not ambiguous. Just wrong.
TERMS are reciprocated. Not factors.
So your reciprocal will be

1/(2 × 4)

Expression → Reciprocal
2x → 1/(2x)
3/x → x/3
(2x − 3)/(x + 5) → (x + 5)/(2x − 3)

Reciprocal in Algebra:
Which of the following is the reciprocal of 21 × 5?
Correct answer is: 1/(21 × 5)

The proof of 1/(ab) = (1/a)(1/b) consists in showing that (1/a)(1/b) acts as the reciprocal of ab, and the product ab(1/a)(1/b) = 1.
Mary Dolciani, Teachers Manual, Page 17, Book 1, Modern Algebra: Structure and Method
1. Dave Peterson February 10, 2025 at 4:44 pm
Doug,
I’m sorry, but as in your previous comments, this is simply wrong.
You are assuming, contrary to convention, that all multiplications are done before any divisions. Some authors have taught that, but even the sources you quote here teach otherwise:
MathsIsFun says, “Divide and Multiply rank equally (and go left to right),” and gives an example.
Similarly, SplashLearn says, “Multiplication and Division: Next, moving from left to right, multiply and/or divide, whichever comes first,” and gives an example.
Dolciani also teaches left-to-right, at least for explicit multiplication. (In some books, she taught that implicit multiplication is done first, but that is not relevant here.) I showed that in my previous response.
As a result of this misunderstanding, you seem to assume that “/” means “take the reciprocal of the product that follows”, which it does not; your examples of reciprocals don’t deal with the symbolic form in which you are claiming to prove me wrong, but only with words, so they are irrelevant. The fact that the reciprocal of 21×5 is 1/(21×5) does not imply that 1/21×5 means the reciprocal of 21×5. Rather, it means “the reciprocal of 21, times 5”, or (1/21)×5.
[I tried to edit your comment to insert images of what you quoted.]
I hope this is clear.
1. Doug Hendrie February 11, 2025 at 5:03 pm
Hi Dave,
I’m not assuming every multiplication should be done first; however, for these expressions, the multiplications do come first.
For instance 2b÷2b= 1, NOT b²
Now 21×5 ÷ 21×5 = 1, and 1 × 21×5 = 21×5.
Now taking the reciprocal of 21×5, which is 1/(21×5)
(for which you already have the example), gives
1/(21×5) × 21×5 = 1
Now previously, you questioned my use of the Commutative Law, ab = ba.
8÷2×4 = 8÷4×2 can only equal 1
Commutative Law states, factors can be presented in any order, without changing the result of the expression.
Now this Law contradicts the convention that is PEMDAS. Do we use a convention, or do we use an actual Mathematical Law?
You are promoting PEMDAS as the ONLY way to simplify expressions, and that is most definitely NOT the case.
Simplify Expressions Using the Commutative and Associative Properties.
When we have to simplify algebraic expressions, we can often make the work easier by applying the Commutative or Associative Property first instead of automatically following the order of operations.
I’ll stick with the Laws of Maths.
Regards
Doug
1. Dave Peterson February 11, 2025 at 7:17 pm
Thanks for condensing your argument, so I can answer it more easily. I hope this time you will understand what I have been saying.
Let’s take a look at what you quoted, from an online textbook:
(First, let me point out that, like last time, you included a link only to the publisher of the snippet you quoted; I had to search to locate it specifically, which will make it easier to answer, by seeing the context, so I edited your comment to improve the link.)
What they are saying here is not that you should apply properties instead of following the order of operations, as if these would give different results. They are saying that, whereas the order of operations tells you what an expression means, you don’t have to evaluate the expression in that order, but can apply properties to simplify the work. In the example, they observe that two of the terms cancel one another, so it is easier to add them first, rather than to do everything left to right. The order of operations rules tell you that it means “this, plus this, plus this”, but the commutative property tells you that you will get the same result if you change the order. Other examples on that page show the same concept.
Now, you say that you aren’t claiming that all multiplications are done before divisions, but that in these examples, they are. Yet the only grounds you give for that claim is that, as you see them, the multiplications are done first. On that assumption, you then apply the commutative property to that product. But by the standard rules (as taught by Lumen as well as the others I’ve pointed out), 21×5÷21×5 does not mean (21×5)÷(21×5)=1, but ((21×5)÷21)×5=25.
Here is what they teach:
If you were right, the first example would equal 1, not 4. But they follow the order of operations – the same rules they mention in the section on simplifying, and do not violate; they just rewrite the expression before evaluating.
Similarly, all current textbooks I know (though not all, historically) would say that 8÷2×4 means (8÷2)×4=16, while 8÷4×2 means (8÷4)×2=4.
And, again, the commutative property must be applied to the correct meaning of the expression, according to those rules, and does not override the rules. If two factors are not meant to be multiplied together, then you can’t change their order. A law can’t contradict a convention! The latter takes precedence, and determines when the law can be applied.
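The left-to-right rule for division and multiplication described here is also what mainstream programming languages implement; a quick Python check (C, Java, and others parse these the same way):

```python
# / and * share precedence and associate left to right:
assert 21 * 5 / 21 * 5 == 25.0     # ((21*5)/21)*5, not (21*5)/(21*5)
assert 8 / 2 * 4 == 16.0           # (8/2)*4
assert 8 / 4 * 2 == 4.0            # (8/4)*2

# The "multiplications first" reading needs explicit parentheses:
assert 21 * 5 / (21 * 5) == 1.0
```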
2. Dave Peterson February 14, 2025 at 12:23 pm
Doug,
If you want to continue debating this, please don’t just keep repeating the same claims; rather, try to understand and answer my arguments. In particular, explain why every source you quote agrees with me that an expression like 18÷9×2 means (18÷9)×2, not 18÷(9×2), as you take it. Are you the only person in the world who understands mathematics correctly? Or is there some chance that you might be wrong? The reality is that the order of operations convention does have to be applied first, in order to know what an expression is intended to mean, before you can correctly apply properties to rewrite it.
1. Doug Hendrie February 14, 2025 at 12:42 pm
Then why does Khan Academy show the following-
Khan Academy.
When do we NOT follow the order of operations?
Lots of times, actually! While the order of operations gives us one way to evaluate an expression, the properties of addition and multiplication allow us to be more flexible.
The distributive property says that we can multiply a value to each term inside of the parentheses instead of adding or subtracting inside the parentheses first.
The commutative property of multiplication says that we can multiply the factors in any order instead of only left to right. Once we learn more about reciprocals, we’ll be able to rewrite expressions with multiplication in place of the division.
The order of operations, or the convention of performing arithmetic calculations in a certain sequence, is not a universal or natural law, but a human invention that developed over time and across cultures.
Exponents and order of operations FAQ (article) | Khan Academy
2. Dave Peterson February 14, 2025 at 3:19 pm
The answer is the same as what I explained before.
You are quoting from here in Khan Academy:
By “not following”, they don’t mean “disregarding” the convention, disobeying it, but rather going beyond it: recognizing that, once we understand the meaning of an expression (according to the order of operations), we can use properties as a tool to change how we choose to carry out the work of evaluating it.
As I’ve shown for your other sources, Khan, too, disagrees with you about expressions of the sort you are interested in. For example, at the end of the video here, he does this:
He does not do the 3×2 first, but does everything from left to right.
I have not often seen students confused in this way by the term “order of operations”, though I can see how it can happen. It doesn’t mean “you must always do the operations in this order”, but just “what is written means that it could be done in this order, but you can rewrite it, if you want, before you do all the operations”. That’s what this site, and the book you used last time, are saying.
Once students have learned the basics, I usually explain that they have a choice between doing “exactly what an expression says” (which is what the order of operations indicates), and “whatever is the easiest way to evaluate it” (which the properties allow you to do). Some even get in the habit of automatically distributing, even when that actually makes the work harder; others never get beyond taking an expression literally. You, on the other hand, seem to be unaware of your own misinterpretation, because somehow doing multiplication first feels so natural to you that you aren’t even aware that you are interpreting at all, much less that you are doing so wrongly.
Again: Before you can apply the commutative property, you must determine what numbers are to be multiplied. That is what the order of operations does: It says, before you do this addition, for example, you will have to multiply these numbers. Once you understand what the expression means, then you can decide to rewrite it as an equivalent expression (with the same value) by rearranging parts of it, if you wish. But that doesn’t mean that you can swap any numbers that are separated by a multiplication sign without first considering whether those numbers are to be multiplied.
A different example may help. We also have the commutative property of addition. But when I see the expression 2×5+4×3, I can’t swap the 5 and 4 to get 2×4+5×3; that gives a different value, because the 5 and 4 are not meant to be added. We have to see that they have to be multiplied by the 2 and the 3, respectively, before we do the addition. So the numbers actually being added are 2×5 and 4×3; if we were to apply the commutative property of addition, we would actually get 4×3+2×5.
In the examples we have been discussing, like 8÷2×4, the numbers you claim can be commuted are not meant to be multiplied! And we know that because the order of operations tells us that, nominally, the division will be done before the multiplication. Yes, we can rewrite it if we want, but that might be as 8×4÷2, or just as (8/2)×4.
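The distinction between what an expression means and how we may legally rearrange it can be checked mechanically; a small Python sketch of these examples:

```python
# 2*5 + 4*3: the values actually ADDED are 10 and 12.
total = 2*5 + 4*3
assert total == 22

# Commuting the true addends preserves the value...
assert 4*3 + 2*5 == total

# ...but swapping the 5 and 4 across the + changes the meaning:
assert 2*4 + 5*3 == 23

# Likewise in 8/2*4 the factors actually multiplied are (8/2) and 4,
# so only those may be commuted:
assert 8/2*4 == 4*(8/2) == 16.0
```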
Doug Hendrie February 15, 2025 at 9:03 am
Hi Dave
Thank you for the prompt reply,
So from the above 8/2×4
Does it follow that ab÷ab = b²
Regards,
Doug
1. Dave Peterson February 15, 2025 at 10:03 am
Ah! That’s the $64,000 question! I thought you’d never get around to it. But it’s a separate issue.
You’ve been talking about explicit multiplication, where there is hardly any question (but see here, and other references here to what I call “AMF”).
When you take out the multiplication sign, you get to the issue of implicit multiplication, which people fight over. Since there are different opinions, and it can be easily misread even if you have a strong opinion, I recommend just never writing expressions like ab÷ab.
I think there is a very good case for treating that as (ab)÷(ab) = 1. But some teachers, at least, insist on treating it the same as a×b÷a×b = ((a×b)÷a)×b = b², and there is some reason to believe that replacing one notation for multiplication with another should not change the meaning of an expression.
But that’s the subject of several other pages on this site, starting with Order of Operations: Implicit Multiplication?. In fact, you’ve previously commented on a more recent page on that topic, though you used explicit multiplication, making your comment irrelevant there.
Have a question of your own?
Ask a Question
Search Blog
Search for:
Meta
Log in
Entries feed
Comments feed
WordPress.org
Copyright © 2025 The Math Doctors | Powered by Astra WordPress Theme
Email |
187851 | https://www.youtube.com/watch?v=_QmIL9n613E | How do you solve an exponential equation with e as the base
Brian McLogan
Description
Posted: 4 Oct 2013
👉 Learn how to solve exponential equations in base e. An exponential equation is an equation in which a variable occurs as an exponent. e is a mathematical constant approximately equal to 2.71828, and e^x is a special kind of exponential function called the natural exponential function.
To solve a natural exponential equation, we use the properties of exponents to isolate the exponential expression, then take the natural log of both sides. The natural log cancels the natural exponential function, leaving only the exponent.
Transcript:
So anyway, let me explain this two different ways. First of all, the equation is in exponential form, and you can't solve for x while it's raised up in the power. But if I rewrite this in logarithmic form, it becomes log base e of 12 = 3x. Remember, we write log base e of 12 as ln(12), so ln(12) = 3x. Now, to solve for x, you divide by 3, so x = ln(12)/3. OK, that's one way you can do this. The other way you can look at this is by taking the log of both sides. Now, I don't want to take the log of both sides in just any base; remember, you always want the base of your logarithm to be the same as the base of your exponential, and the base here is e. Log base e is what we call ln, so let's take the ln of both sides. By applying the properties of logarithms, we can take the exponent and put it in front, writing it as a product: 3x · ln(e) = ln(12). Now let's evaluate ln(e): e raised to what power equals e? One. So it's just 3x · 1 = ln(12), or 3x = ln(12). Divide both sides by 3, and x = ln(12)/3. You're getting the exact same answer. Now we need to evaluate this, so we go to the calculator and type in ln(12), then divide by 3. Therefore x ≈ 0.83. Anybody have any questions on that?
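The closing arithmetic in the transcript is easy to verify; here is a quick Python check of x = ln(12)/3 (not part of the video, just a sanity check):

```python
import math

# e**(3x) = 12  =>  3x = ln(12)  =>  x = ln(12) / 3
x = math.log(12) / 3
print(round(x, 2))          # 0.83

# Substitute back: e**(3x) should return (approximately) 12.
print(math.exp(3 * x))      # ≈ 12, up to floating-point rounding
```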
187852 | https://convertermaniacs.com/deg-to-rad/0/what-is-67.5-degrees-in-radians.html | 67.5 degrees in radians (67.5 deg to rad)
Converter Maniacs
67.5 degrees in radians
Here is how to calculate and convert 67.5 degrees (deg) to radians (rad). We will show you the degrees-to-radians formula, the math to convert 67.5 degrees to radians, and an illustration of 67.5 degrees in radians on a circle.
To convert degrees to radians, we multiply degrees by π and then divide the product by 180. Here is the formula to convert degrees to radians:
(degrees × π) ÷ 180 = radians
When we enter 67.5 degrees into our formula, we get 67.5 degrees in radians as follows:
(degrees × π) ÷ 180 = radians
(67.5 × π) ÷ 180 = 3π/8
67.5 degrees = 3π/8 radians
Since the answer above includes a Pi (π), which is an irrational number, 67.5 degrees in radians in terms of Pi is the only way to give the exact answer. However, we can divide the numerator by the denominator in the answer above and get an approximate decimal answer to 67.5 degrees to radians, like this:
3π ÷ 8 ≈ 1.17809724509617
67.5 degrees ≈ 1.1781 radians
To illustrate 67.5 degrees in radians on a circle, we first drew a circle with our compass and then outlined 67.5 degrees with our protractor. The counterclockwise area between the blue and the orange line is 67.5 degrees.
The counterclockwise distance from a to b along the red perimeter is 3π/8 or approximately 1.1781 if the radius is 1. If the radius is not 1, simply multiply 3π/8 (or 1.1781) by the radius to get the distance between a and b.
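As a cross-check of the numbers above, the conversion can be done in a couple of lines of Python; the standard-library `math.radians` applies the same (degrees × π) ÷ 180 formula:

```python
import math

degrees = 67.5

# Apply (degrees * pi) / 180 directly:
by_formula = degrees * math.pi / 180

# The standard-library helper performs the same conversion:
by_helper = math.radians(degrees)

print(round(by_formula, 4))                 # 1.1781
print(math.isclose(by_formula, by_helper))  # True
```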
187853 | http://services.artofproblemsolving.com/download.php?id=YXR0YWNobWVudHMvZC8zLzBiNWRjYTdiMjRiMzkwZGU5MzQwYTI3NTcyMDI0YzkwZTZjYmM1LnBkZg==&rn=QmlnT2xpdHRsZW8ucGRm | Big-O and Little-o
The Landau symbolic notation – Big-O and little-o – can be one of the most useful
notations in analysis. Learn it well and you become more flexible, quicker, and more sure of
yourself in a wide variety of analytic settings.
One concept associated with limits that we weave through this is the notion of
“eventually.” To understand this, let P(x) be a statement depending on the variable x that can be
either true or false. The statement “as x → 0, eventually P(x)” means that ∃ δ > 0 such that for |x| < δ, P(x) is true. The statement “as n → ∞, eventually P(n)” means that ∃ N such that for n > N, P(n) is true. Similar statements can be made for all limit situations. As an example:

lim_{x→a} ƒ(x) = L iff ∀ ε > 0, eventually |ƒ(x) – L| < ε, where in this case, “eventually” means “∃ δ > 0 such that for |x – a| < δ.”
With the concept of “eventually” understood, we proceed to the definitions of Big-O and
little-o. We can define these concepts in any limit situation. In what follows, the function g(x)
that we are comparing ought to be eventually monotone, nonzero, and should be “reasonably simple”, a concept that we won’t define.
Definition: As x → ?, ƒ(x) = o(g(x)) iff lim_{x→?} ƒ(x)/g(x) = 0.

Definition: As x → ?, ƒ(x) = O(g(x)) iff ∃ C such that eventually |ƒ(x)| ≤ C|g(x)|.

The second definition could be rephrased as saying that the fraction ƒ(x)/g(x) is eventually bounded.
Some examples of this notation:

If m < n, as x → ∞, x^m = o(x^n).
If m < n, as x → 0, x^n = o(x^m).
If n > 0, as x → ∞, ln(x) = o(x^n).
If n < 0, as x → 0+, ln(x) = o(x^n).
As x → ∞, sin x = O(1).
As x → 0, sin x = O(x).
As x → 0, sin x = x + O(x³).
As x → 0, sin x = x – x³/6 + O(x⁵).
As n → ∞, √(n² + 1) = O(n).
As n → ∞, √(n² + 1) = n + O(1/n).
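Statements like these can be sanity-checked numerically (this check is an addition, not part of the handout). For sin x = x – x³/6 + O(x⁵), the remainder divided by x⁵ should stay bounded as x → 0; in fact it approaches 1/120, the next Taylor coefficient:

```python
import math

# As x -> 0, sin x = x - x**3/6 + O(x**5): the remainder over x**5
# stays bounded (and tends to 1/120 = 0.00833...).
for x in [0.5, 0.1, 0.02]:
    remainder = math.sin(x) - (x - x**3 / 6)
    print(x, remainder / x**5)
```

The printed ratios hover near 0.0083 rather than blowing up, which is exactly what the O(x⁵) claim promises.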
This notation is very closely tied up with differentiation and with Taylor polynomials.

For instance, we can prove that the function ƒ is differentiable at x iff ∃ a number, which we call ƒ′(x), such that as h → 0, ƒ(x + h) = ƒ(x) + ƒ′(x)h + o(h). That is the definition of the derivative right there. The thing we’re calling o(h) is ƒ(x + h) – ƒ(x) – ƒ′(x)h, and calling it o(h) is saying that

lim_{h→0} [ƒ(x + h) – ƒ(x) – ƒ′(x)h]/h = lim_{h→0} [ƒ(x + h) – ƒ(x)]/h – ƒ′(x) = 0.
That’s what we get if ƒ is differentiable at x. If we have something stronger, namely that ƒ has a bounded second derivative in a neighborhood of x, then we can say something stronger:

ƒ(x + h) = ƒ(x) + ƒ′(x)h + O(h²).

In general, if ƒ has n bounded derivatives in a neighborhood of x, Taylor’s Theorem says

ƒ(x + h) = Σ_{k=0}^{n–1} ƒ^(k)(x) h^k / k! + O(h^n).
You’ll notice that this last statement is slightly weaker than the full statement of Taylor’s Theorem. We have merely said that the error is no worse than some constant times h^n. The full statement of the theorem gives us some tools for estimating how big that constant is; using big-O notation essentially announces that we don’t care how big the constant is, as long as it’s constant. The reason big-O and little-o notation is powerful and flexible is that more often than not, we don’t care what the constant is.
Manipulating this notation:
Most of what we have here is common sense. Some useful ideas:
O(g₁(x)) + O(g₂(x)) = O(max(g₁(x), g₂(x)))
O(g₁(x)) · O(g₂(x)) = O(g₁(x)·g₂(x))

We’d like to be able to say that for reasonable functions w, w(O(g(x))) = O(w(g(x))). There’s no problem with saying (O(h))² = O(h²), but e^{O(ln x)} isn’t well defined. You have to be careful there.

You should avoid dividing by big-O or little-o. However, one can make sense of something like 1/(2 + O(x)) by long division: 1/(2 + O(x)) = 1/2 + O(x) as x → 0.
Here’s a messy calculus problem which can be used to illustrate the ideas. You should take a
close look at each step of this and figure out why I’m doing what I’m doing:
Example: Find lim_{n→∞} [ (1/π) ∫₀^π (1 + cos x/√n)^{–1} dx ]^n.
This is a 1^∞ case, so we start by taking the logarithm.

For small u, (1 + u)^{–1} = 1 – u + u² + O(u³). (Geometric series.)

Thus, (1 + cos x/√n)^{–1} = 1 – cos x/√n + cos²x/n + O(n^{–3/2}), and

(1/π) ∫₀^π (1 + cos x/√n)^{–1} dx = (1/π) ∫₀^π [1 – cos x/√n + cos²x/n + O(n^{–3/2})] dx = 1 + 0 + 1/(2n) + O(n^{–3/2}).

For small u, ln(1 + u) = u + O(u²). Thus,

n ln[ (1/π) ∫₀^π (1 + cos x/√n)^{–1} dx ] = n ln(1 + 0 + 1/(2n) + O(n^{–3/2})) = n (1/(2n) + O(n^{–3/2}) + O(n^{–2})) = 1/2 + O(n^{–1/2}).

This goes to 1/2, so the limit is e^{1/2} = √e.
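As a numerical check on this answer (again an addition to the handout), the sketch below approximates the integral with a midpoint rule and raises it to the n-th power. Since the error in the exponent is O(n^{–1/2}), a moderately large n already lands close to √e ≈ 1.6487:

```python
import math

def a(n, steps=100_000):
    # Midpoint rule for (1/pi) * integral_0^pi (1 + cos(x)/sqrt(n))**(-1) dx,
    # then raise the result to the n-th power.
    h = math.pi / steps
    integral = h * sum(
        1.0 / (1.0 + math.cos((k + 0.5) * h) / math.sqrt(n))
        for k in range(steps)
    )
    return (integral / math.pi) ** n

print(a(10_000), math.sqrt(math.e))   # both near 1.6487
```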
Exercises:

Does the series Σ_{n=1}^∞ n³ [sin(arctan(n^{–1})) – arctan(sin(n^{–1}))] converge or diverge?

Suppose that ƒ is defined on an interval and ∀ y in that interval, as x → y, ƒ(x) – ƒ(y) = o(|x – y|). Prove that ƒ is constant.

In all parts of this problem, assume that ƒ ∈ C^∞ (the infinitely differentiable functions).
a. Show that [ƒ(x + h) – ƒ(x)]/h = ƒ′(x) + O(h) as h → 0.
b. Show that [ƒ(x + h) – ƒ(x – h)]/(2h) = ƒ′(x) + O(h²) as h → 0.
c. Show that [ƒ(x + h) – 2ƒ(x) + ƒ(x – h)]/h² = ƒ″(x) + O(h²) as h → 0.
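The three difference quotients in the last exercise can be explored numerically before proving anything (an illustration, not a proof): halving h should roughly halve the forward-difference error but quarter the two central errors. A sketch using ƒ = exp, whose derivatives at 0 all equal 1:

```python
import math

f = math.exp   # f'(0) = f''(0) = 1, so the true values below are 1.0
x = 0.0

for h in [0.1, 0.05, 0.025]:
    forward = (f(x + h) - f(x)) / h - 1.0                     # (a): error O(h)
    central = (f(x + h) - f(x - h)) / (2 * h) - 1.0           # (b): error O(h**2)
    second  = (f(x + h) - 2 * f(x) + f(x - h)) / h**2 - 1.0   # (c): error O(h**2)
    print(h, forward, central, second)
```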
187854 | https://en.wikipedia.org/wiki/Neural_crest |
Neural crest
From Wikipedia, the free encyclopedia
Pluripotent embryonic cell group giving rise to diverse cell lineages
Figure: The formation of neural crest during the process of neurulation. Neural crest is first induced in the region of the neural plate border. After neural tube closure, neural crest cells delaminate from the region between the dorsal neural tube and overlying ectoderm and migrate out towards the periphery.

Identifiers: MeSH D009432 · TE E5.0.2.1.0.0.2 · FMA 86666
The neural crest is a ridge-like structure that is formed transiently between the epidermal ectoderm and neural plate during vertebrate development. Neural crest cells originate from this structure through the epithelial-mesenchymal transition, and in turn give rise to a diverse cell lineage—including melanocytes, craniofacial cartilage and bone, smooth muscle, dentin, peripheral and enteric neurons, adrenal medulla and glia.
After gastrulation, the neural crest is specified at the border of the neural plate and the non-neural ectoderm. During neurulation, the borders of the neural plate, also known as the neural folds, converge at the dorsal midline to form the neural tube. Subsequently, neural crest cells from the roof plate of the neural tube undergo an epithelial to mesenchymal transition, delaminating from the neuroepithelium and migrating through the periphery, where they differentiate into varied cell types. The emergence of the neural crest was important in vertebrate evolution because many of its structural derivatives are defining features of the vertebrate clade.
Underlying the development of the neural crest is a gene regulatory network, described as a set of interacting signals, transcription factors, and downstream effector genes, that confer cell characteristics such as multipotency and migratory capabilities. Understanding the molecular mechanisms of neural crest formation is important for our knowledge of human disease because of its contributions to multiple cell lineages. Abnormalities in neural crest development cause neurocristopathies, which include conditions such as frontonasal dysplasia, Waardenburg–Shah syndrome, and DiGeorge syndrome.
Defining the mechanisms of neural crest development may reveal key insights into vertebrate evolution and neurocristopathies.
History
The neural crest was first described in the chick embryo by Wilhelm His Sr. in 1868 as "the cord in between" (Zwischenstrang) because of its origin between the neural plate and non-neural ectoderm. He named the tissue "ganglionic crest," since its final destination was each lateral side of the neural tube, where it differentiated into spinal ganglia. During the first half of the 20th century, the majority of research on the neural crest was done using amphibian embryos which was reviewed by Hörstadius (1950) in a well known monograph.
Cell labeling techniques advanced research into the neural crest because they allowed researchers to visualize the migration of the tissue throughout the developing embryos. In the 1960s, Weston and Chibon utilized radioisotopic labeling of the nucleus with tritiated thymidine in chick and amphibian embryo respectively. However, this method suffers from drawbacks of stability, since every time the labeled cell divides the signal is diluted. Modern cell labeling techniques such as rhodamine-lysinated dextran and the vital dye diI have also been developed to transiently mark neural crest lineages.
The quail-chick marking system, devised by Nicole Le Douarin in 1969, was another instrumental technique used to track neural crest cells. Chimeras, generated through transplantation, enabled researchers to distinguish neural crest cells of one species from the surrounding tissue of another species. With this technique, generations of scientists were able to reliably mark and study the ontogeny of neural crest cells.
Induction
A molecular cascade of events is involved in establishing the migratory and multipotent characteristics of neural crest cells. This gene regulatory network can be subdivided into the following four sub-networks described below.
Inductive signals
First, extracellular signaling molecules, secreted from the adjacent epidermis and underlying mesoderm such as Wnts, BMPs and Fgfs separate the non-neural ectoderm (epidermis) from the neural plate during neural induction.
Wnt signaling has been demonstrated in neural crest induction in several species through gain-of-function and loss-of-function experiments. In coherence with this observation, the promoter region of slug (a neural-crest-specific gene) contains a binding site for transcription factors involved in the activation of Wnt-dependent target genes, suggestive of a direct role of Wnt signaling in neural crest specification.
The current role of BMP in neural crest formation is associated with the induction of the neural plate. BMP antagonists diffusing from the ectoderm generates a gradient of BMP activity. In this manner, the neural crest lineage forms from intermediate levels of BMP signaling required for the development of the neural plate (low BMP) and epidermis (high BMP).
Fgf from the paraxial mesoderm has been suggested as a source of neural crest inductive signal. Researchers have demonstrated that the expression of dominate-negative Fgf receptor in ectoderm explants blocks neural crest induction when recombined with paraxial mesoderm. The understanding of the role of BMP, Wnt, and Fgf pathways on neural crest specifier expression remains incomplete.
Neural plate border specifiers
Signaling events that establish the neural plate border lead to the expression of a set of transcription factors delineated here as neural plate border specifiers. These molecules include Zic factors, Pax3/7, Dlx5, Msx1/2 which may mediate the influence of Wnts, BMPs, and Fgfs. These genes are expressed broadly at the neural plate border region and precede the expression of bona fide neural crest markers.
Experimental evidence places these transcription factors upstream of neural crest specifiers. For example, in Xenopus Msx1 is necessary and sufficient for the expression of Slug, Snail, and FoxD3. Furthermore, Pax3 is essential for FoxD3 expression in mouse embryos.
Neural crest specifiers
Following the expression of neural plate border specifiers is a collection of genes including Slug/Snail, FoxD3, Sox10, Sox9, AP-2 and c-Myc. This suite of genes, designated here as neural crest specifiers, are activated in emergent neural crest cells. At least in Xenopus, every neural crest specifier is necessary and/or sufficient for the expression of all other specifiers, demonstrating the existence of extensive cross-regulation. Moreover, this model organism was instrumental in the elucidation of the role of the Hedgehog signaling pathway in the specification of the neural crest, with the transcription factor Gli2 playing a key role.
Outside of the tightly regulated network of neural crest specifiers are two other transcription factors Twist and Id. Twist, a bHLH transcription factor, is required for mesenchyme differentiation of the pharyngeal arch structures. Id is a direct target of c-Myc and is known to be important for the maintenance of neural crest stem cells.
Neural crest effector genes
Finally, neural crest specifiers turn on the expression of effector genes, which confer certain properties such as migration and multipotency. Two neural crest effectors, Rho GTPases and cadherins, function in delamination by regulating cell morphology and adhesive properties. Sox9 and Sox10 regulate neural crest differentiation by activating many cell-type-specific effectors including Mitf, P0, Cx32, Trp and cKit.
Migration
Further information: Collective cell migration
The migration of neural crest cells involves a highly coordinated cascade of events that begins with closure of the dorsal neural tube.
Delamination
After fusion of the neural folds to create the neural tube, cells originally located in the neural plate border become neural crest cells. For migration to begin, neural crest cells must undergo a process called delamination that involves a full or partial epithelial–mesenchymal transition (EMT). Delamination is defined as the separation of tissue into different populations, in this case neural crest cells separating from the surrounding tissue. Conversely, EMT is a series of events coordinating a change from an epithelial to mesenchymal phenotype. For example, delamination in chick embryos is triggered by a BMP/Wnt cascade that induces the expression of EMT promoting transcription factors such as SNAI2 and FOXD3. Although all neural crest cells undergo EMT, the timing of delamination occurs at different stages in different organisms: in Xenopus laevis embryos there is a massive delamination that occurs when the neural plate is not entirely fused, whereas delamination in the chick embryo occurs during fusion of the neural fold.
Prior to delamination, presumptive neural crest cells are initially anchored to neighboring cells by tight junction proteins such as occludin and cell adhesion molecules such as NCAM and N-Cadherin. Dorsally expressed BMPs initiate delamination by inducing the expression of the zinc finger protein transcription factors snail, slug, and twist. These factors play a direct role in inducing the epithelial-mesenchymal transition by reducing expression of occludin and N-Cadherin in addition to promoting modification of NCAMs with polysialic acid residues to decrease adhesiveness. Neural crest cells also begin expressing proteases capable of degrading cadherins such as ADAM10 and secreting matrix metalloproteinases (MMPs) that degrade the overlying basal lamina of the neural tube to allow neural crest cells to escape. Additionally, neural crest cells begin expressing integrins that associate with extracellular matrix proteins, including collagen, fibronectin, and laminin, during migration. Once the basal lamina becomes permeable, neural crest cells can begin migrating throughout the embryo.
Migration
Neural crest cell migration occurs in a rostral to caudal direction without the need of a neuronal scaffold such as along a radial glial cell. For this reason the crest cell migration process is termed "free migration". Instead of scaffolding on progenitor cells, neural crest migration is the result of repulsive guidance via EphB/EphrinB and semaphorin/neuropilin signaling, interactions with the extracellular matrix, and contact inhibition with one another. While Ephrin and Eph proteins have the capacity to undergo bi-directional signaling, neural crest cell repulsion employs predominantly forward signaling to initiate a response within the receptor bearing neural crest cell. Burgeoning neural crest cells express EphB, a receptor tyrosine kinase, which binds the EphrinB transmembrane ligand expressed in the caudal half of each somite. When these two domains interact it causes receptor tyrosine phosphorylation, activation of rhoGTPases, and eventual cytoskeletal rearrangements within the crest cells inducing them to repel. This phenomenon allows neural crest cells to funnel through the rostral portion of each somite.
Semaphorin-neuropilin repulsive signaling works synergistically with EphB signaling to guide neural crest cells down the rostral half of somites in mice. In chick embryos, semaphorin acts in the cephalic region to guide neural crest cells through the pharyngeal arches. In addition to repulsive signaling, neural crest cells express β1 and α4 integrins, which allow for binding and guided interaction with collagen, laminin, and fibronectin of the extracellular matrix as they travel. Additionally, crest cells have intrinsic contact inhibition with one another while freely invading tissues of different origin such as mesoderm. Neural crest cells that migrate through the rostral half of somites differentiate into sensory and sympathetic neurons of the peripheral nervous system. The other main route neural crest cells take is dorsolateral, between the epidermis and the dermamyotome. Cells migrating through this path differentiate into pigment cells of the dermis. Further neural crest cell differentiation and specification into their final cell type is biased by their spatiotemporal subjection to morphogenic cues such as BMP, Wnt, FGF, Hox, and Notch.
Clinical significance
Neurocristopathies result from the abnormal specification, migration, differentiation or death of neural crest cells throughout embryonic development. This group of diseases comprises a wide spectrum of congenital malformations affecting many newborns. They arise both from genetic defects affecting the formation of the neural crest and from the action of teratogens.
Waardenburg syndrome
Waardenburg syndrome is a neurocristopathy that results from defective neural crest cell migration. The condition's main characteristics include piebaldism and congenital deafness. In the case of piebaldism, the colorless skin areas are caused by a total absence of neural crest-derived pigment-producing melanocytes. There are four different types of Waardenburg syndrome, each with distinct genetic and physiological features. Types I and II are distinguished based on whether or not family members of the affected individual have dystopia canthorum. Type III gives rise to upper limb abnormalities. Lastly, type IV is also known as Waardenburg-Shah syndrome, and afflicted individuals display both Waardenburg's syndrome and Hirschsprung's disease. Types I and III are inherited in an autosomal dominant fashion, while II and IV exhibit an autosomal recessive pattern of inheritance. Overall, Waardenburg's syndrome is rare, with an incidence of ~ 2/100,000 people in the United States. All races and sexes are equally affected. There is no current cure or treatment for Waardenburg's syndrome.
Hirschsprung's disease
Also implicated in defects related to neural crest cell development and migration is Hirschsprung's disease, characterized by a lack of innervation in regions of the intestine. This lack of innervation can lead to further physiological abnormalities like an enlarged colon (megacolon), obstruction of the bowels, or even slowed growth. In healthy development, neural crest cells migrate into the gut and form the enteric ganglia. Genes playing a role in the healthy migration of these neural crest cells to the gut include RET, GDNF, GFRα, EDN3, and EDNRB. RET, a receptor tyrosine kinase (RTK), forms a complex with GDNF and GFRα. EDN3 and EDNRB are then implicated in the same signaling network. When this signaling is disrupted in mice, aganglionosis, or the lack of these enteric ganglia occurs.
Fetal alcohol spectrum disorder
Fetal alcohol spectrum disorder is among the most common causes of developmental defects. Depending on the extent of the exposure and the severity of the resulting abnormalities, patients are diagnosed within a continuum of disorders broadly labeled fetal alcohol spectrum disorder (FASD). Severe FASD can impair neural crest migration, as evidenced by characteristic craniofacial abnormalities including short palpebral fissures, a thin upper lip, and a smooth philtrum. However, due to the promiscuous nature of ethanol binding, the mechanisms by which these abnormalities arise are still unclear. Cell culture explants of neural crest cells, as well as in vivo developing zebrafish embryos exposed to ethanol, show a decreased number of migratory cells and decreased distances travelled by migrating neural crest cells. The mechanisms behind these changes are not well understood, but evidence suggests prenatal alcohol exposure (PAE) can increase apoptosis due to increased cytosolic calcium levels caused by IP3-mediated release of calcium from intracellular stores. It has also been proposed that the decreased viability of ethanol-exposed neural crest cells is caused by increased oxidative stress. Despite these and other advances, much remains to be discovered about how ethanol affects neural crest development. For example, it appears that ethanol differentially affects certain neural crest cells over others; that is, while craniofacial abnormalities are common in PAE, neural crest-derived pigment cells appear to be minimally affected.
DiGeorge syndrome
DiGeorge syndrome is associated with deletions or translocations of a small segment in the human chromosome 22. This deletion may disrupt rostral neural crest cell migration or development. Some defects observed are linked to the pharyngeal pouch system, which receives contribution from rostral migratory crest cells. The symptoms of DiGeorge syndrome include congenital heart defects, facial defects, and some neurological and learning disabilities. Patients with 22q11 deletions have also been reported to have higher incidence of schizophrenia and bipolar disorder.
Treacher Collins syndrome
Treacher Collins syndrome (TCS) results from the compromised development of the first and second pharyngeal arches during the early embryonic stage, which ultimately leads to mid and lower face abnormalities. TCS is caused by the missense mutation of the TCOF1 gene, which causes neural crest cells to undergo apoptosis during embryogenesis. Although mutations of the TCOF1 gene are among the best characterized in their role in TCS, mutations in POLR1C and POLR1D genes have also been linked to the pathogenesis of TCS.
Cell lineages
Neural crest cells originating from different positions along the anterior-posterior axis develop into various tissues. These regions of the neural crest can be divided into four main functional domains, which include the cranial neural crest, trunk neural crest, vagal and sacral neural crest, and cardiac neural crest.
Cranial neural crest
Main article: cranial neural crest
The cranial neural crest migrates dorsolaterally to form the craniofacial mesenchyme, which differentiates into various cranial ganglia and craniofacial cartilages and bones. These cells enter the pharyngeal pouches and arches, where they contribute to the thymus, the bones of the middle ear and jaw, and the odontoblasts of the tooth primordia.
Trunk neural crest
Main article: trunk neural crest
The trunk neural crest gives rise to two populations of cells. One group of cells fated to become melanocytes migrates dorsolaterally into the ectoderm towards the ventral midline. A second group of cells migrates ventrolaterally through the anterior portion of each sclerotome. The cells that stay in the sclerotome form the dorsal root ganglia, whereas those that continue more ventrally form the sympathetic ganglia, adrenal medulla, and the nerves surrounding the aorta.
Vagal and sacral neural crest
Vagal and sacral neural crest cells develop into the ganglia of the enteric nervous system and the parasympathetic ganglia.
Cardiac neural crest
Main article: cardiac neural crest
The cardiac neural crest develops into melanocytes, cartilage, connective tissue, and neurons of some pharyngeal arches. This domain also gives rise to regions of the heart, such as the musculo-connective tissue of the large arteries and the part of the septum that divides the pulmonary circulation from the aorta. According to recent research, the semilunar valves of the heart are also associated with neural crest cells.
Evolution
Several structures that distinguish vertebrates from other chordates are formed from derivatives of neural crest cells. In their "new head" theory, Gans and Northcutt argue that the presence of the neural crest was the basis for vertebrate-specific features, such as sensory ganglia and the cranial skeleton. Furthermore, the appearance of these features was pivotal in vertebrate evolution because it enabled a predatory lifestyle.
However, considering the neural crest a vertebrate innovation does not mean that it arose de novo. Instead, new structures often arise through modification of existing developmental regulatory programs. For example, regulatory programs may be changed by the co-option of new upstream regulators or by the employment of new downstream gene targets, thus placing existing networks in a novel context. This idea is supported by in situ hybridization data showing conservation of the neural plate border specifiers in protochordates, which suggests that part of the neural crest precursor network was present in a common ancestor of the chordates. In some non-vertebrate chordates, such as tunicates, a lineage of cells (melanocytes) similar to the neural crest cells of vertebrates has been identified. This implies that a rudimentary neural crest existed in a common ancestor of vertebrates and tunicates.
Neural crest derivatives
Ectomesenchyme (also known as mesectoderm): odontoblasts, dental papillae, the chondrocranium (nasal capsule, Meckel's cartilage, scleral ossicles, quadrate, articular, hyoid and columella), tracheal and laryngeal cartilage, the dermatocranium (membranous bones), dorsal fins and the turtle plastron (lower vertebrates), pericytes and smooth muscle of branchial arteries and veins, tendons of ocular and masticatory muscles, connective tissue of head and neck glands (pituitary, salivary, lachrymal, thymus, thyroid), dermis and adipose tissue of calvaria, ventral neck and face.
Endocrine cells: chromaffin cells of the adrenal medulla, glomus cells type I/II.
Peripheral nervous system: sensory neurons and glia of the dorsal root ganglia, cephalic ganglia (VII and, in part, V, IX, and X), Rohon-Beard cells, some Merkel cells in the whisker, satellite glial cells of all autonomic and sensory ganglia, Schwann cells of all peripheral nerves.
Enteric cells: Enterochromaffin cells.
Melanocytes: iris muscle and pigment cells; also associated with some tumors (such as melanotic neuroectodermal tumor of infancy).
See also
First arch syndrome
DGCR2—may control neural crest cell migration
List of human cell types derived from the germ layers
187855 | https://www.ck12.org/flexi/math-grade-8/slope-intercept-form-of-linear-equations/define-the-slope-intercept-form./ | Flexi answers - Define the slope-intercept form. | CK-12 Foundation
Deriving the Equation y = mx+b
Question
Define the slope-intercept form.
Flexi Says:
Linear equations can be written in slope-intercept form: y = mx + b, where m represents the slope of the line and b is the y-coordinate where the line crosses the y-axis (a point called the y-intercept).
The slope, m, is written as (change in y)/(change in x), or rise/run; a whole-number slope is read as having a denominator of 1 (for example, a slope of 3 means rise 3, run 1).
Let’s look at an example:
Graph the linear equation: y = 3x + 2.
Try Asking:
How can you derive the equation of a graph?
What are m and b in the linear equation y = 26x + 12?
Determine the equation of the line passing through the point (−3, −10), with a slope of m = 4. Put your answer in slope-intercept form.
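These prompts can be checked with a short Python sketch (illustrative only, not part of the CK-12 lesson): given a slope m and a point (x0, y0), rearranging y = mx + b gives the intercept b = y0 − m·x0, and tabulating a few points is enough to sketch a graph by hand.

```python
def slope_intercept(m, x0, y0):
    """Return the y-intercept b of the line with slope m through (x0, y0),
    so the line can be written in slope-intercept form: y = m*x + b."""
    return y0 - m * x0

# Line through (-3, -10) with slope m = 4, from the prompt above:
b = slope_intercept(4, -3, -10)
print(f"y = 4x + {b}")  # y = 4x + 2

# A few points on y = 3x + 2, handy for sketching the graph:
points = [(x, 3 * x + 2) for x in range(-2, 3)]
print(points)  # [(-2, -4), (-1, -1), (0, 2), (1, 5), (2, 8)]
```

The first computation is just the rearrangement b = y − mx with the known point substituted in; the second lists (x, y) pairs that all lie on the line y = 3x + 2.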
187856 | https://www.liverpool.ac.uk/~pjgiblin/papers/zigzag-final.pdf |