Lesson 13
Story Problems and Equations
Warm-up: Which One Doesn’t Belong: Diagrams (10 minutes)
This warm-up prompts students to carefully analyze and compare features of tape diagrams and equations. In making comparisons, students have a reason to use language precisely (MP6). The activity
also enables the teacher to hear the terminology students know and how they talk about characteristics of tape diagrams and connect them to equations (MP2, MP7).
• Groups of 2
• Display image.
• “Pick one that doesn’t belong. Be ready to share why it doesn’t belong.”
• 1 minute: quiet think time
• “Discuss your thinking with your partner.”
• 2–3 minutes: partner discussion
• Share and record responses.
Student Facing
Which one doesn’t belong?
3. \(27 + \underline{\phantom{\hspace{1.05cm}}} = 40\)
Activity Synthesis
• “Let’s find at least one reason why each one doesn’t belong.”
• “Which diagram best matches the equation in C? Explain.”
Activity 1: Card Sort: Story Problems and Equations (15 minutes)
The purpose of this activity is for students to connect story problems to the equations that represent them and to solve different types of story problems. Students identify equations with a symbol
for the unknown that match a story problem and justify their decisions by describing how the equations represent the quantities and any actions in the story problem (MP4). When students analyze and
connect the quantities and structures in the story problems and equations, they are thinking abstractly and quantitatively (MP2) and making use of structure (MP7).
MLR8 Discussion Supports. Display sentence frames to support discussion as students explain their reasoning to their partner: “I noticed ___ , so I matched . . .” Encourage students to challenge each
other when they disagree.
Advances: Conversing, Representing
Required Materials
Materials to Gather
Materials to Copy
• Equations for Different Types of Word Problems
Required Preparation
• Create a set of cards from the blackline master for each group of 2–4.
• Each group of 2–4 needs a set of cards from the previous lesson.
• Groups of 2–4
• Give each group the story problems (Cards A–I) from the Story Problem and Diagram Cards.
• Give one set of Equation Cards to each group of students.
• Give each group access to base-ten blocks.
• “Take turns reading the story problems. After one person reads, work together to find an equation that matches. When you think you have found a match, explain to your group why the cards match.”
• “If you think that more than one card could match the story, explain the match to your group.”
• As needed, demonstrate the activity with a student volunteer.
• “When your group finishes, choose 2 story problems from Cards F, G, H, or I and solve them.”
• 6 minutes: partner work time
• Monitor for students who explain why more than one equation may match a story and how the equations match the quantities in the context of the story problem.
• 4 minutes: independent work time
Student Facing
1. Match each story problem with an equation. Explain why the cards match.
2. Choose 2 story problems and solve them. Show your thinking.
Advancing Student Thinking
Students may believe that only one equation could match each story. Encourage students to describe how equations match any actions in the story and whether any other equations show the same actions.
If there are no actions in the story, ask students to explain why one or more cards show the relationship between the parts and the whole. Consider providing the tape diagrams from the previous
activity to support students in their explanations.
Activity Synthesis
• Invite students to share a match for each story.
• Consider asking:
□ “How does the equation match the story problem?”
□ “Is there another equation that could match the story problem? Explain why or why not.”
Activity 2: Represent and Solve Story Problems (20 minutes)
The purpose of this activity is for students to use tape diagrams and equations to represent different types of story problems within 100. In this activity, students interpret story problems and use
diagrams and equations to represent the unknown quantities. Students are encouraged to solve using a method that makes sense to them.
Students may complete the parts of each problem in an order that makes sense to them. In the synthesis, students compare and connect their diagrams, equations, and methods for solving (MP2, MP7).
Monitor for students who draw accurate diagrams and create different equations for the problem with Noah and Kiran’s seeds to share in the lesson synthesis.
Representation: Develop Language and Symbols. Provide students with access to a chart that shows an example of a completed tape diagram so that students can refer to it as they work on the activity.
Supports accessibility for: Visual-Spatial Processing, Memory, Attention
• Groups of 2
• Give each group access to base-ten blocks.
• “Now you get a chance to draw diagrams and write equations that represent story problems.”
• “Read the story carefully. Then solve each problem and show your thinking.”
• 8 minutes: independent work time
• 5 minutes: partner discussion
• Monitor for students who:
□ use an addition equation to represent Andre’s seeds
□ subtract to find the number of seeds Andre won using a base-ten diagram or equations.
Student Facing
1. Write an equation using a question mark for the unknown value.
2. Solve. Show your thinking using drawings, numbers, or words.
2. Andre started a game with 32 seeds. Then he won more seeds. Now he has 57 seeds. How many seeds did Andre win?
1. Label the diagram to represent the story.
2. Write an equation using a question mark for the unknown value.
3. Solve. Show your thinking using drawings, numbers, or words.
3. Diego gathered 22 seeds from yellow flowers and 48 seeds from blue flowers. How many seeds did he gather in all?
1. Label the diagram to represent the story.
2. Write an equation using a question mark for the unknown value.
3. Solve. Show your thinking using drawings, numbers, or words.
4. Noah and Kiran gathered 92 pumpkin seeds. Noah gathered 53 pumpkin seeds. How many seeds did Kiran gather?
1. Draw a diagram to represent the story.
2. Write an equation using a question mark for the unknown value.
3. Solve. Show your thinking using drawings, numbers, or words.
Advancing Student Thinking
If students draw their own diagrams, but do not label the quantities, consider asking:
• “What are the different things that you can count in the story? How does your diagram show these things?”
• “What could you add to your diagram to help someone make connections to the story?”
Activity Synthesis
• Invite a previously identified student to share their completed tape diagram and the addition equation for Andre's seeds.
• Invite a previously identified student to share the way they used subtraction to find how many seeds Andre won.
• “How are _____’s equation and _____’s method the same? How are they different?” (They both use the same numbers. _____’s method is a way to find the unknown value in the equation. They are different because the equation is addition, but the method shows subtraction.)
• “Does _____’s method using subtraction match the actions in the story problem? Explain why or why not.” (No. The story tells about starting with some seeds and getting more seeds. That is
addition. But you can use subtraction to find the value, since there is an unknown addend.)
• “Sometimes it might be better to use addition or subtraction equations to represent the actions that are happening in a story. But you can always use addition or subtraction to find an unknown number.”
Lesson Synthesis
Display student work samples for the story about Noah and Kiran’s seeds that show an accurate diagram, an addition equation that represents the story, and a subtraction equation that represents the story.
“Do both equations match the story and the diagram? Explain.” (Yes. Each equation shows the total amount of seeds and Noah’s seeds. The question mark shows Kiran’s seeds. You could show how Noah’s
seeds and Kiran’s seeds are related with addition or subtraction.)
“Which helps you make sense of a story—a diagram, an equation, or both?”
Cool-down: Match the Equation (5 minutes)
|
{"url":"https://im.kendallhunt.com/K5/teachers/grade-2/unit-2/lesson-13/lesson.html","timestamp":"2024-11-09T22:33:11Z","content_type":"text/html","content_length":"102924","record_id":"<urn:uuid:68f358e3-a17f-4363-b447-3fd23ef82f40>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00625.warc.gz"}
|
Sympathetic Vibratory Physics | A Russellian Extrapolation of the Creation of Alphanon from Plutonium Decay
August 12, 2011
At the beginning of infinite mind conceptualizing 3-dimensional motion lies the realm of the scalar potential. Walter Russell's point of cyclic beginnings is the undiscovered point of Alphanon, the first octave's inert gas, as described in his nine-octave periodic table of our 3-dimensional elements. When his ninth octave's 7th element, Plutonium, has reached the end of its harmonic stabilities, and its sub-components have reached revolutionary radioactive speeds approaching the speed of light, it finalizes its decay by flattening at its poles and boring a hole through its center. Its decayed form, in its ending, has now become the beginning: the first octave's inert gas, Alphanon. Alphanon is the largest of the inert gases in ratio of ring size, due to inheriting its diameters from Plutonium, the largest of the elements in wave-field size, as described by Walter Russell in The Universal ONE in his chapter regarding universal ratios.
The question now to be asked by the reader is: "what is this ring composed of?" The ring, we know from what we have conjectured, is the remnant of the nucleus of the plutonium atom: decayed now without its neutrons, simply protons and electrons, or two thirds of a 3-dimensional nucleolated system, left without motion due to lack of neutrons, flat and nebulous. Because this is now a binary construct, we know from Walter Russell that the protons represent the positive (+) and the electrons the negative (-) electrical forces. With this in mind, the natural outcome is a magnetic polarization of the ring system.
With the above construct of slight electric and magnetic polarization, the Universe accommodates wave-field ratio boundaries based upon the energy expressed by the ring system's nature. Simultaneously with this construct of potentials, the harmonic resonances of a binary ring system divide the ring into two rings of blue and red sympathetic opposite potentials. These rings seek fully stable harmonic balance by each dividing again into two more rings of blue-violet and red-violet sympathetic resonances, dynamically balancing all four rings in a fully balanced, dynamically harmonic, stable system.
The subsequent completion of this Alphanon formation is a progression of subdivision into a four-ring pseudo-2-dimensional system, matured and catalyzed by the voided open center of the ring. Space, or the plenum of the universe, has no voids; there is no vacuum; all space is filled. Based upon this knowledge, we know that the scalar vibratory desire of radial motion fills the background of space. John Keely tells us the density of this background desire is 986,000 times the density of steel. What, then, lies in the center of this ring? Answer: a neutral center.
Now, with a complete wave-field-bounded stable system, the rings continue their evolutionary journey from Plutonium to Alphanon and into the forms allowed by their first-octave wave-field boundary, forming spheres at the neutral center of the bound system. Centrifugal motion of the rings, expanding to the limits of the wave field's corners, flows back centripetally to the system's center to meet the neutral center and complete a 3-dimensional construct with proton, electron, and the neutral center's gift of a neutron. The gift produces the 1st octave element Tomium.
See Also
Table of the Elements - Russell Elements
|
{"url":"https://svpwiki.com/A-Russellian-Extrapolation-of-the-Creation-of-Alphanon-from-Plutonium-Decay","timestamp":"2024-11-12T12:17:33Z","content_type":"text/html","content_length":"42782","record_id":"<urn:uuid:22dea33b-55c1-4f36-bab5-2a97d07a6e69>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00788.warc.gz"}
|
Practice Algebra with the exercise "Secret Message Decoding"
Learning Opportunities
This puzzle can be solved using the following concepts. Practice using these concepts and improve your skills.
An undercover agent sends encoded secret message parts to headquarters through public communication channels such as e-mail, text message, and social media. Our task at headquarters is to decode and assemble the original message.
Encoding process:
The agent wants to send the message ‘Hello!’. As only two channels are available at this time, e.g., e-mail and Twitter, she divides it into hs = 2 equal-length parts, each containing ms = 3 original characters ('Hel' and 'lo!') given by their ASCII representation, i.e., 72 101 108 and 108 111 33, respectively. Furthermore, an identity matrix called the encoding header, with size hs, is attached in front of the original message parts:
Original Part 1: [1 0] [72 101 108]
Original Part 2: [0 1] [108 111 33]
Next, the agent linearly combines the message parts using modulo arithmetic over the Galois field GF(127), i.e., modulo addition and multiplication over the finite field with characteristic 127. The jth number in the encoding header represents how many times the jth original message part was added, while the ith encoded character of the message is the sum of the ASCII values of the ith characters in the original message parts, modulo 127. For example, the 2nd encoded character 69 in Part 2 below is the dot product of the encoding header [113 74] and the 2nd characters of the original message parts [101 111], i.e., (113*101 + 74*111) mod 127 = 69:
Encoded Part 1: [122 122] [116 83 57]
Encoded Part 2: [113 74] [126 69 41]
Finally, the corresponding characters in Part 1 and Part 2 are sent in e-mail: zztS9 and tweeted in a post: qJ~E), respectively.
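The encoding step above can be reproduced numerically. The sketch below hard-codes the example's header coefficients and message parts and recomputes the encoded parts shown above:

```python
# Reproduce the worked example: two original parts ('Hel' and 'lo!'),
# combined column-by-column with the header coefficients, all mod 127.
P = 127
parts = [[72, 101, 108], [108, 111, 33]]   # ASCII of 'Hel' and 'lo!'
headers = [[122, 122], [113, 74]]          # the agent's chosen coefficients

encoded = [[sum(h * part[i] for h, part in zip(hdr, parts)) % P
            for i in range(len(parts[0]))]
           for hdr in headers]
print(encoded)  # [[116, 83, 57], [126, 69, 41]]
```

Converting those numbers back to characters gives exactly the transmitted strings zztS9 and qJ~E).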
Decoding example:
After the two messages zztS9 and qJ~E) were detected at the headquarters, the operators calculate hs = 2 and ms = 5 - 2 = 3, which gives the following ASCII representation:
Encoded Part 1: [122 122] [116 83 57]
Encoded Part 2: [113 74] [126 69 41]
Following the reverse process of encoding, the operators use finite-field arithmetic to calculate the multiplicative inverses of elements and obtain the identity matrix on the hs×hs encoding-header part, while the same operations are performed on the encoded characters as well:
Decoded Part 1: [1 0] [72 101 108]
Decoded Part 2: [0 1] [108 111 33]
As a result, appending the decoded ASCII characters 72 101 108 and 108 111 33 reveals the original secret message ‘Hello!’.
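The reverse process described above is ordinary Gauss-Jordan elimination over GF(127). A possible sketch in Python (the function name and structure are my own; reading hs, ms, and the message lines from stdin, as the puzzle requires, is omitted):

```python
def decode(parts, hs, ms, p=127):
    # Each row: hs header coefficients followed by ms encoded character values.
    rows = [[ord(c) for c in part] for part in parts]
    # Gauss-Jordan elimination over GF(p): reduce the header block to identity.
    for col in range(hs):
        # Find a pivot row with a nonzero entry in this column and swap it up.
        piv = next(r for r in range(col, hs) if rows[r][col] % p != 0)
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = pow(rows[col][col], p - 2, p)   # Fermat inverse, since p is prime
        rows[col] = [(x * inv) % p for x in rows[col]]
        # Clear this column in every other row.
        for r in range(hs):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % p for a, b in zip(rows[r], rows[col])]
    # Concatenate the decoded character columns of every part, in order.
    return "".join(chr(row[hs + j]) for row in rows for j in range(ms))

print(decode(["zztS9", "qJ~E)"], 2, 3))  # Hello!
```

Running it on the example's two transmitted strings recovers ‘Hello!’.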
Input:
Line 1: An integer hs for the size of the encoding header.
Line 2: An integer ms for the number of encoded characters in each encoded message part.
Next hs lines: A string containing hs + ms characters.
Output:
A single line containing the decoded string.
1 ≤ hs ≤ 10
1 ≤ ms ≤ 40
The input strings contain printable ASCII characters (from the range of 33 to 126).
|
{"url":"https://www.codingame.com/training/hard/secret-message-decoding","timestamp":"2024-11-09T09:03:16Z","content_type":"text/html","content_length":"148391","record_id":"<urn:uuid:71dea633-b35f-4f00-af20-4c873073c208>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00463.warc.gz"}
|
Decrease a Number by a Percentage Worksheet (answers, examples)
There are two sets of Increase/Decrease of Number Percentage Worksheets:
Increase a Number by a Percentage
Decrease a Number by a Percentage
Examples, solutions, videos, and worksheets to help Grade 6 students learn how to decrease a number by a percentage. Include percentage word problems.
How to decrease a number by a percentage?
There are 3 sets of decrease a number by a percentage worksheets:
• Easy or Simple Percents (50%, 25%, 20%, 10%, 5%, 1%)
• Whole Number Percents
• Decimal Number Percents
Method 1: Using the formula
In general, the formula for decreasing a number by a percentage can be expressed as:
Decreased Value = Original Number - (Original Number × Percentage Decimal)
1. Convert the percent to a decimal by dividing it by 100.
2. Multiply the original number by the decimal obtained in Step 1. This will give you the amount to decrease the number by.
3. Subtract the amount from the original number to get the decreased value.
Example: Decrease 80 by 25%.
1. Convert 25% to a decimal: 25/100 = 0.25
2. Multiply 80 by 0.25: 80 × 0.25 = 20
3. Subtract from the original number: 80 - 20 = 60
Therefore, decreasing 80 by 25% results in 60.
Method 2: Using a Multiplier
The formula for decreasing a number by a percentage using a multiplier can be expressed as:
Decreased Value = Original Number × (1 - Percentage Decimal)
1. Convert the percent to a decimal by dividing it by 100.
2. Subtract the decimal from 1.
3. Multiply the original number by the result from Step 2 to get the decreased value.
Example: Decrease 80 by 25%.
1. Convert 25% to a decimal: 25/100 = 0.25
2. Subtract the decimal from 1: 1 - 0.25 = 0.75
3. Multiply 80 by 0.75: 80 × 0.75 = 60
Therefore, decreasing 80 by 25% results in 60.
Using this multiplier approach provides a quick and efficient way to calculate the decreased value of a number by a certain percentage: we can do Step 2 in our head and use a calculator for Step 3 if necessary.
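Both methods can also be expressed as a short Python sketch (the function names are mine, for illustration); the two always give the same result:

```python
def decrease_by_percent(original, percent):
    # Method 1: find the amount to subtract, then subtract it.
    amount = original * (percent / 100)
    return original - amount

def decrease_by_multiplier(original, percent):
    # Method 2: apply the single multiplier (1 - percentage decimal).
    return original * (1 - percent / 100)

print(decrease_by_percent(80, 25))     # 60.0
print(decrease_by_multiplier(80, 25))  # 60.0
```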
Have a look at this video if you need to review how to decrease a number by a percent using both methods.
Click on the following worksheet to get a printable pdf document.
Scroll down the page for more Percentage Worksheets.
More Percentage Worksheets
(Answers on the second page.)
Percent Worksheet #1 (easy percentages)
Percent Worksheet #2 (whole number percentages)
Percent Worksheet #3 (decimal number percentages)
Online or Interactive
Percentages (What is X% of Y)
Percentages (X is what % of Y)
Percentages (X is Y% of what)
Percent of a Number
Finding Percent
Finding the Base
Percent Word Problems (percent or rate)
Percent Word Problems (percent of or part)
Percent Word Problems (increase or decrease)
Percent Word Problems (profit or loss)
Percentage Word Problems
Percentage of a Number Lesson
Percent Word Problems
Try the free Mathway calculator and problem solver below to practice various math topics. Try the given examples, or type in your own problem and check your answer with the step-by-step explanations.
We welcome your feedback, comments and questions about this site or page. Please submit your feedback or enquiries via our Feedback page.
|
{"url":"https://www.onlinemathlearning.com/decrease-number-percentage.html","timestamp":"2024-11-04T09:26:43Z","content_type":"text/html","content_length":"44163","record_id":"<urn:uuid:93e8f3ec-4a6e-4291-b0fe-4a0e24ddc384>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00194.warc.gz"}
|
Revenue To Valuation Ratio
Equity price based multiples:
• P/E ratio: determined by dividing a company's current share price by its earnings per share. For example, if a company is currently trading at $25 a share and earns $2 per share, its P/E is 12.5. It measures a company's share price against its earnings, indicating whether a stock is relatively cheap or expensive. Note that the P/E multiple equals the ratio of equity value to net income, in which the numerator and denominator are both divided by the number of fully diluted shares.
• PEG ratio: prospective P/E ratio divided by prospective average earnings growth. Most suitable when valuing high-growth companies.
• Price-to-sales (P/S) ratio, also known as the revenue multiple: compares a company's stock price to its revenues, helping investors, analysts, and companies evaluate performance and find undervalued stocks that make good investments.
In essence, these are ratios between share price and an underlying measure of the company's performance, such as earnings, sales, or book value.
Enterprise value multiples:
Enterprise value (EV) is a measure of a company's total value or selling price: market capitalization plus net debt (debt minus cash).
• EV/EBITDA: the ratio of enterprise value to Earnings Before Interest, Taxes, Depreciation, and Amortization is the most used metric for valuing a business. For some sectors an EBITDA multiple is not the most commonly utilised metric; Financial Services, for instance, tends to trade on Price/Earnings (P/E) ratios.
• EV/Revenue: compares enterprise value with the company's total revenue, indicating how much it costs investors per unit of revenue. Also called the enterprise value-to-revenue ratio or value-to-revenue ratio, it is one measure of a company's financial performance, especially relative to other companies in the same industry.
• EV/TTM Revenue (sometimes referred to as EV/TTM Sales): the ratio between the enterprise value of a company and its trailing annual revenues (sales); a lower ratio suggests a cheaper valuation.
• EV/Gross Profit: a profitability ratio that compares the enterprise value of a company to its gross profit, demonstrating how much investors pay per unit of gross profit.
Revenue-based valuation:
• The times-revenue method determines the maximum value of a company as a multiple of its actual revenue for a set period. You can calculate revenue valuation multiples by dividing sold companies' selling prices by their revenue, usually measured over the most recent twelve months.
• ARR multiples are the ratio between Annual Recurring Revenue (ARR) and company valuation; the multiple is found by dividing the valuation by ARR. It reflects the company's subscription-based revenue stream and can be used to estimate future revenue and growth potential. Strong ARR is a key indicator that a company is ready to scale; in this phase, the key valuation metrics include annual recurring revenue and the ARR growth rate.
• SaaS businesses usually trade at around 10x revenues, though it could be more depending on growth, stage, and gross margin; e-commerce and marketplace businesses likewise trade on revenue or EBITDA multiples. The average SaaS business sold by FE over the past decade had a characteristic ratio of MRR to ARR (annual recurring revenue), an ideal mix to aim for to maximize value.
• Gross margin is calculated as (Revenue - Cost of Goods Sold) / Revenue. So, if revenues were $7m and costs were $1m, we have (7 - 1) / 7 ≈ 86% gross margin.
• It is desirable that EBITDA/revenue be at least 8%, with the value of the enterprise moving upward above 8%; companies with an EBITDA/revenue ratio above 15% are rare.
• Pre-revenue valuation measures a startup's worth, an important activity for both investors and the business owner. Valuation multiples are commonly used for valuing startups, especially pre-revenue ones. Revenue-based or Annual Recurring Revenue (ARR) valuation focuses on potential growth, whereas profit- and earnings-based approaches look mainly at profits and earnings.
Other notes:
• Common valuation methods include entry valuation, discounted cash flow, asset valuation, the times-revenue method, the price-to-earnings ratio, comparable analysis, industry best practice, and precedent transactions. In the comparable company analysis (CCA) method, valuation multiples such as the P/E ratio, EV/Revenue ratio, and EV/EBITDA ratio provide benchmarks for estimating value by comparing financials.
• Cash flow and earnings multiples for small businesses often represent Seller's Discretionary Earnings (SDE) as reported by the business owners or business brokers closing the sale listing.
• We provide enterprise value multiples based on trailing Revenue, EBITDA, EBIT, Total Assets, and Tangible Assets data, as reported.
• Ratio studies are primarily formulated from information reported on the "declaration of value" that must accompany most deeds that convey fee ownership of real property.
At their core, valuation ratios are metrics that examine the relationship between a firm's market value (or its equity) and fundamental financial metrics such as earnings, book value, and sales.
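As a worked illustration, the two most common multiples can be computed directly; all of the figures below are invented:

```python
# Price-to-earnings: share price divided by earnings per share.
share_price = 25.0
eps = 2.0
pe_ratio = share_price / eps

# EV/Revenue: enterprise value (market cap plus net debt) over revenue.
market_cap = 500.0   # $m (hypothetical)
net_debt = 100.0     # $m, debt minus cash (hypothetical)
revenue = 120.0      # $m, trailing twelve months (hypothetical)
enterprise_value = market_cap + net_debt
ev_to_revenue = enterprise_value / revenue

print(pe_ratio)       # 12.5
print(ev_to_revenue)  # 5.0
```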
|
{"url":"https://koi88.site/learn/revenue-to-valuation-ratio.php","timestamp":"2024-11-05T15:19:02Z","content_type":"text/html","content_length":"17313","record_id":"<urn:uuid:fb305ab3-5060-4055-8c0f-c499d27b5333>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00038.warc.gz"}
|
Velocity Time Graph Worksheet Part 1 Answer Key
Some of the worksheets displayed are: velocity time graph problems; velocity and acceleration; motion graphs; describing motion with velocity time graphs; scanned documents; physics unit 1d motion; and topic 3 kinematics (displacement, velocity, acceleration). Describing a graph: one skill you will need to learn is describing a velocity time graph.
Velocity time graphs worksheet pdf.
Velocity time graph worksheet part 1 answer key. Some of the worksheets for this concept are: speed, velocity and acceleration calculations; displacement, velocity and acceleration; distance, time and speed practice problems; and speed and velocity. For example, if you were to find the acceleration of the object, you should find the first derivative. Split the graph up into distinct sections; these can be seen in the image as A, B, C and D.
A worksheet that requires the pupils to construct their own graphs of motion and answer questions about them. The speed-time graph shows a 50-second car journey: describe the 50-second journey. In detail, describe each part of the journey, ensuring to use numerical values throughout.
Velocity time graph worksheet and answers. Speed and velocity with answer key: displaying top 8 worksheets found for this concept. A harder question at the end stretches the higher-attaining students.
Finally, in order to answer your questions, you should look at your equations and think about the correct way to solve them. Showing top 8 worksheets in the category: velocity time graph answer key.
Speed, velocity and acceleration worksheet answer key, or worksheet calculating speed, time, distance and acceleration.
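The two graph skills involved, reading acceleration as the slope of a velocity-time graph and distance as the area under it, can be illustrated numerically. The sample journey below is invented for illustration and is not the worksheet's actual graph:

```python
# A made-up 50-second journey: speed up, cruise, slow to a stop.
times =      [0, 10, 30, 50]   # s
velocities = [0, 20, 20,  0]   # m/s

# Acceleration on each segment = change in velocity / change in time (slope).
accels = [(velocities[i+1] - velocities[i]) / (times[i+1] - times[i])
          for i in range(len(times) - 1)]

# Distance = area under the graph (trapezium rule on each straight segment).
distance = sum((velocities[i] + velocities[i+1]) / 2 * (times[i+1] - times[i])
               for i in range(len(times) - 1))

print(accels)    # [2.0, 0.0, -1.0]
print(distance)  # 700.0
```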
Base 13 Conversion
Capital One Python Interview Question
Given an integer, return its string representation in base 13.
In case you don’t use base 13 that much (who does, right?), here’s a quick rundown: just like base 10 uses digits from 0 to 9, base 13 uses 0 to 9 plus the letters A, B, and C for 10, 11, and 12.
For example:
• 9 in base 13 is still 9
• 10 in base 13 is A
• 11 in base 13 is B
• 12 in base 13 is C
• 13 in base 13 is 10
• 14 in base 13 is 11
• 49 in base 13 is 3A (since $3*13 + 10 = 49$)
• 69 in base 13 is 54 (since $5*13 + 4 = 69$)
Iterative Approach
If you’re not familiar with how base conversion from decimal (base 10) to another base works, here's the process:
1. Divide the number by the base and store the remainder.
2. Update the number using integer division (only the whole number part).
3. Repeat until the number becomes 0.
4. Reverse the collected remainders to get the result.
Let’s convert 49 to base 13:
• $49 \mod 13 = 10 \rightarrow \text{store "A"}$
• $49 \div 13 = 3$(integer division, whole number only).
• $3 \mod 13 = 3 \rightarrow \text{store "3"}$
• $3 \div 13 = 0 \ \text{(stop)}$
So, the remainders are $[A, 3]$. After reversing them, the result is "3A" in base 13.
Here’s the code for the iterative approach where we reverse the result at the end:
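The original code isn’t preserved here; a minimal sketch of the concatenation-based version might look like this:

```python
DIGITS = "0123456789ABC"

def to_base13(num):
    """Convert an integer to its base 13 string, building the result by
    string concatenation and reversing it at the end."""
    if num == 0:
        return "0"
    sign = "-" if num < 0 else ""
    num = abs(num)
    result = ""
    while num > 0:
        result += DIGITS[num % 13]  # each += allocates a brand-new string
        num //= 13
    return sign + result[::-1]

print(to_base13(49))  # 3A
```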
In the above code, we are concatenating strings using +, and every time we append to a string, Python creates a new string by allocating memory for both the current string and the new character. This
operation takes $O(n + m)$, where $n$ is the length of the first string and $m$ is the length of the second string.
In the worst case, string concatenation over multiple iterations becomes $O(n^2)$. This isn’t a big issue for small strings like in this problem, but for longer strings, we can improve the efficiency
by using a list and joining it at the end.
Here’s the code using a list for collecting digits, avoiding repeated string concatenations:
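Again, a sketch of what the list-based version might look like (the original code isn’t reproduced here):

```python
DIGITS = "0123456789ABC"

def to_base13(num):
    """Convert an integer to base 13, collecting digit characters in a
    list and joining once at the end to avoid repeated concatenation."""
    if num == 0:
        return "0"
    sign = "-" if num < 0 else ""
    num = abs(num)
    digits = []
    while num > 0:
        digits.append(DIGITS[num % 13])
        num //= 13
    return sign + "".join(reversed(digits))

print(to_base13(49))  # 3A
```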
This version avoids string concatenation overhead by building the result in a list and reversing it at the end, which operates in $O(n)$ time.
Recursive Approach
The recursive approach follows the same logic as the iterative one but works by breaking the number down step-by-step using recursion. Each recursive call handles a smaller part of the number and
builds the result as it "returns" from the recursive calls.
Base Case:
• If the number is less than 13, directly return its base 13 equivalent.
Recursive Case:
• Divide the number by 13, recursively call the function with the quotient, and then append the remainder at the end.
This method handles negative numbers by converting the absolute value to base 13 and then prepending a minus sign.
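A sketch of the recursive version described above, including the negative-number handling:

```python
DIGITS = "0123456789ABC"

def to_base13(num):
    """Recursive base 13 conversion: negatives get a minus sign prepended,
    numbers below 13 are the base case, and larger numbers recurse on the
    quotient before appending the remainder's digit."""
    if num < 0:
        return "-" + to_base13(-num)
    if num < 13:
        return DIGITS[num]
    return to_base13(num // 13) + DIGITS[num % 13]

print(to_base13(69))  # 54
```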
Sourced from
Capital One
Code::Blocks with C/C++ Compiler
Qt 5.11.0 has been built for more than a week now (without WebEngine, which doesn't want to build because libc is too old).
I was struggling with QtWebKit... The build doesn't want to finish and gets killed (with 4GB swap) on Document.cpp (and yes, I have disabled the monolithic build). I have a last try running; not sure
what I will try if it still fails (probably cutting the file, but that complicates things).
Not sure I want to build a new Qt for a minor bump in the version, unless there is something uber important in the ChangeLog.
Jun 1, 2004
I was struggling with QtWebKit...
I think QtWebKit is not worth it anymore, as i only see 3 programs still using it on my Manjaro Linux desktop repos (KdenLive, K3B and Marble).
Shame about glibc being old, because of these fixes:
- Fix build with GCC 8.1
[QTBUG-68752] Fix compilation with opengl es2
Aug 10, 2007
Hi all
I'm trying to build an OCR program (tesseract) on the Pandora with your latest Code::Blocks release (GCC 8.1), but it fails with the following error:
In file included from boxword.h:24,
from blamer.h:26,
from blamer.cpp:21:
rect.h: In member function 'double TBOX::x_overlap_fraction(const TBOX&) const':
rect.h:463:65: error: no matching function for call to 'max(float, double)'
return std::max(0.0, static_cast<double>(high - low) / width);
In file included from /mnt/utmp/codeblocks/usr/include/c++/8.1.0/algorithm:61,
from ../../src/ccutil/genericvector.h:23,
from boxword.h:23,
from blamer.h:26,
from blamer.cpp:21:
/mnt/utmp/codeblocks/usr/include/c++/8.1.0/bits/stl_algobase.h:219:5: note: candidate: 'template<class _Tp> const _Tp& std::max(const _Tp&, const _Tp&)'
max(const _Tp& __a, const _Tp& __b)
/mnt/utmp/codeblocks/usr/include/c++/8.1.0/bits/stl_algobase.h:219:5: note: template argument deduction/substitution failed:
In file included from boxword.h:24,
from blamer.h:26,
from blamer.cpp:21:
rect.h:463:65: note: deduced conflicting types for parameter 'const _Tp' ('float' and 'double')
return std::max(0.0, static_cast<double>(high - low) / width);
In file included from /mnt/utmp/codeblocks/usr/include/c++/8.1.0/algorithm:61,
from ../../src/ccutil/genericvector.h:23,
from boxword.h:23,
from blamer.h:26,
from blamer.cpp:21:
/mnt/utmp/codeblocks/usr/include/c++/8.1.0/bits/stl_algobase.h:265:5: note: candidate: 'template<class _Tp, class _Compare> const _Tp& std::max(const _Tp&, const _Tp&, _Compare)'
max(const _Tp& __a, const _Tp& __b, _Compare __comp)
/mnt/utmp/codeblocks/usr/include/c++/8.1.0/bits/stl_algobase.h:265:5: note: template argument deduction/substitution failed:
In file included from boxword.h:24,
from blamer.h:26,
from blamer.cpp:21:
rect.h:463:65: note: deduced conflicting types for parameter 'const _Tp' ('float' and 'double')
return std::max(0.0, static_cast<double>(high - low) / width);
In file included from /mnt/utmp/codeblocks/usr/include/c++/8.1.0/algorithm:62,
from ../../src/ccutil/genericvector.h:23,
from boxword.h:23,
from blamer.h:26,
from blamer.cpp:21:
/mnt/utmp/codeblocks/usr/include/c++/8.1.0/bits/stl_algo.h:3462:5: note: candidate: 'template<class _Tp> _Tp std::max(std::initializer_list<_Tp>)'
max(initializer_list<_Tp> __l)
/mnt/utmp/codeblocks/usr/include/c++/8.1.0/bits/stl_algo.h:3462:5: note: template argument deduction/substitution failed:
In file included from boxword.h:24,
from blamer.h:26,
from blamer.cpp:21:
rect.h:463:65: note: mismatched types 'std::initializer_list<_Tp>' and 'float'
return std::max(0.0, static_cast<double>(high - low) / width);
In file included from /mnt/utmp/codeblocks/usr/include/c++/8.1.0/algorithm:62,
from ../../src/ccutil/genericvector.h:23,
from boxword.h:23,
from blamer.h:26,
from blamer.cpp:21:
/mnt/utmp/codeblocks/usr/include/c++/8.1.0/bits/stl_algo.h:3468:5: note: candidate: 'template<class _Tp, class _Compare> _Tp std::max(std::initializer_list<_Tp>, _Compare)'
max(initializer_list<_Tp> __l, _Compare __comp)
/mnt/utmp/codeblocks/usr/include/c++/8.1.0/bits/stl_algo.h:3468:5: note: template argument deduction/substitution failed:
In file included from boxword.h:24,
from blamer.h:26,
from blamer.cpp:21:
rect.h:463:65: note: mismatched types 'std::initializer_list<_Tp>' and 'float'
return std::max(0.0, static_cast<double>(high - low) / width);
rect.h: In member function 'double TBOX::y_overlap_fraction(const TBOX&) const':
rect.h:485:66: error: no matching function for call to 'max(float, double)'
return std::max(0.0, static_cast<double>(high - low) / height);
In file included from /mnt/utmp/codeblocks/usr/include/c++/8.1.0/algorithm:61,
from ../../src/ccutil/genericvector.h:23,
from boxword.h:23,
from blamer.h:26,
from blamer.cpp:21:
/mnt/utmp/codeblocks/usr/include/c++/8.1.0/bits/stl_algobase.h:219:5: note: candidate: 'template<class _Tp> const _Tp& std::max(const _Tp&, const _Tp&)'
max(const _Tp& __a, const _Tp& __b)
/mnt/utmp/codeblocks/usr/include/c++/8.1.0/bits/stl_algobase.h:219:5: note: template argument deduction/substitution failed:
In file included from boxword.h:24,
from blamer.h:26,
from blamer.cpp:21:
rect.h:485:66: note: deduced conflicting types for parameter 'const _Tp' ('float' and 'double')
return std::max(0.0, static_cast<double>(high - low) / height);
In file included from /mnt/utmp/codeblocks/usr/include/c++/8.1.0/algorithm:61,
from ../../src/ccutil/genericvector.h:23,
from boxword.h:23,
from blamer.h:26,
from blamer.cpp:21:
/mnt/utmp/codeblocks/usr/include/c++/8.1.0/bits/stl_algobase.h:265:5: note: candidate: 'template<class _Tp, class _Compare> const _Tp& std::max(const _Tp&, const _Tp&, _Compare)'
max(const _Tp& __a, const _Tp& __b, _Compare __comp)
/mnt/utmp/codeblocks/usr/include/c++/8.1.0/bits/stl_algobase.h:265:5: note: template argument deduction/substitution failed:
In file included from boxword.h:24,
from blamer.h:26,
from blamer.cpp:21:
rect.h:485:66: note: deduced conflicting types for parameter 'const _Tp' ('float' and 'double')
return std::max(0.0, static_cast<double>(high - low) / height);
In file included from /mnt/utmp/codeblocks/usr/include/c++/8.1.0/algorithm:62,
from ../../src/ccutil/genericvector.h:23,
from boxword.h:23,
from blamer.h:26,
from blamer.cpp:21:
/mnt/utmp/codeblocks/usr/include/c++/8.1.0/bits/stl_algo.h:3462:5: note: candidate: 'template<class _Tp> _Tp std::max(std::initializer_list<_Tp>)'
max(initializer_list<_Tp> __l)
/mnt/utmp/codeblocks/usr/include/c++/8.1.0/bits/stl_algo.h:3462:5: note: template argument deduction/substitution failed:
In file included from boxword.h:24,
from blamer.h:26,
from blamer.cpp:21:
rect.h:485:66: note: mismatched types 'std::initializer_list<_Tp>' and 'float'
return std::max(0.0, static_cast<double>(high - low) / height);
In file included from /mnt/utmp/codeblocks/usr/include/c++/8.1.0/algorithm:62,
from ../../src/ccutil/genericvector.h:23,
from boxword.h:23,
from blamer.h:26,
from blamer.cpp:21:
/mnt/utmp/codeblocks/usr/include/c++/8.1.0/bits/stl_algo.h:3468:5: note: candidate: 'template<class _Tp, class _Compare> _Tp std::max(std::initializer_list<_Tp>, _Compare)'
max(initializer_list<_Tp> __l, _Compare __comp)
/mnt/utmp/codeblocks/usr/include/c++/8.1.0/bits/stl_algo.h:3468:5: note: template argument deduction/substitution failed:
In file included from boxword.h:24,
from blamer.h:26,
from blamer.cpp:21:
rect.h:485:66: note: mismatched types 'std::initializer_list<_Tp>' and 'float'
return std::max(0.0, static_cast<double>(high - low) / height);
make[2]: *** [Makefile:504: blamer.lo] Error 1
Do you understand what I'm doing wrong ?
Cheers, Magic Sam
Yep, you are using -fsingle-precision-constant (it's on by default in CFLAGS and CXXFLAGS because it's faster on the Pandora), and this messes up the template resolution... Either force the template argument, or
remove the flag from your CFLAGS (or add -fno-single-precision-constant to the CFLAGS). Because it's OCR and may need the full double precision, I suggest the second option.
hey, just a quick question so I'll put it in here:
The gigahertz Pandora's GPU clocks about twice as fast, but what is the frame rate difference in practice on average? I only have a good old CC Pandora myself and I was finally gonna release
something for it. The CC one doesn't quite cut it, with 40 - 45ish fps in the most demanding scene of the game, so I would default to frameskipping on. If a gigahertz one manages just fine I'd like
to autodetect the model and default to full framerates for a better experience, though.
I'd also like to avoid auto-frameskip in order to maintain a consistent framerate during the game, so there's that.
I could of course build a PND with just that scene and upload it for someone to test, but if there's past experience that points towards a good enough improvement I could skip this step.
Oct 6, 2008
Maybe ptitseb could hazard a guess, but it's a very difficult question to nail down positively. Firstly, where's your code spending most of its time in a profiler? If it's missing 60fps mostly
because of the render step or game logic (especially any of the fiddly bit inside that) it varies. There's also the RAM speed to consider on the different units, because of the way the graphics unit
writes the framebuffer back to RAM on ARM systems.
Well, the Gigahertz is faster, but not 2x faster overall (more like 50% in some cases, compared to my CC @800MHz).
But as
mentioned, it will depend on where your bottleneck is. Using an ssh window, try to have a look at "sudo perf top", and see if it's capping at 100% CPU (then it's probably CPU limited) or if the CPU idles a
bit (then it's probably GPU limited). The CPU on the Gigahertz is also faster than the CC's (more cache), but you won't double the speed...
I'm already pretty sure I'm bottlenecked by the GPU, but I re-checked everything.
According to the engine's internal performance counters it's spending about 50µs in the game logic code, and around 20000µs per frame in the rendering code. With frameskip on it's taking just ~3000µs
- ~4000µs there, possibly since it's not running behind anywhere.
The scene in question uses multiple somewhat large textures with alpha transparency. I'm aware how taxing this is on that thing, but resizing or using compressed ones isn't an option for me here.
Didn't copy it off (had some issues getting the cli tools to run over ssh, eh), but perf top looked like this:
25.50% libmikmod.so
7.86% libGLES_CM.so
5.40% kernel
2.00% ProjectTWC
Last thing is the game's executable.
I'll think about replacing the tracker module audio with streamed ones for this since I did that for another platform already anyways.
Overall just running top the total cpu usage of the process maxed out around 37% us. My Pandora runs at 969 MHz.
So yeah, I think this is on the GPU.
Guess I'll prepare something if someone here is willing to run it real quick. Not today though.
If you have a lot of alpha transparency, try to use GL_BLEND instead of GL_ALPHA_TEST if possible.
Oct 6, 2008
Wasn't there a slowness issue with tracker music in some of your ports? I may be misremembering, and I can't remember what you did to fix those cases anyway to be honest.
Sorry for the delay, couldn't quite find the time to actually package it up properly.
If you have a lot of alpha transparency, try to use GL_BLEND instead of GL_ALPHA_TEST if possible.
Already doing that.
So if anyone has a bit of time and a 1 GHz Pandora, could they try running this for me? This is a limited build of the game, starting right in the problematic area.
Menus are disabled, debug overlay with fps and frametimes is enabled. Pressing Start will exit the game.
There's nothing really that has to be done as far as controlling goes. You can move around, but interactions and such were removed for this.
I'm only really interested if it reaches 60 fps just fine. The frametimes are hard to read as they are displayed right now, I know.
May 14, 2006
FPS is between 58 and 59
BGStep: most 0us, sometimes more
Obj Steps: 30us
Render: 2900-3000us
Cleanup: 0us
Total: not stable, can not clearly read, seems to be around 5000us
Used: 30-33%
Game objects are really small, had to go very close to the screen to recognize something.
May 30, 2006
which TI SGX driver are you using?
are you using vsync?
are you using SDL/SDL2?
if no are you using EGL directly?
which TI SGX driver are you using?
Been a while since I messed with the driver. The choice on CC isn't that great. I suppose it was the default one as I just went through the newer ones and the only one that even started the game
froze it after a couple of seconds of worse performance. Huh.
VSync is requested via SDL_GL_SetSwapInterval(1). I don't see any tearing on my unit, despite GLShim telling me to set an environment variable to enable it for real (guess I'll still do that), so I
suppose it's somewhat working? I'm not using the powervr.ini trick.
SDL 2, I'm using Code::Blocks and the libraries of this PND to build it. I think it's set up to go through GLShim by default, though my code is only using GLES 1.1 functions.
FPS is between 58 and 59
BGStep: most 0us, sometimes more
Obj Steps: 30us
Render: 2900-3000us
Cleanup: 0us
Total: not stable, can not clearly read, seems to be around 5000us
Used: 30-33%
Game objects are really small, had to go very close to the screen to recognize something.
These render times look much better than on my unit. Thanks! The 2 frames missing here and there may be from VSync not working as intended, as there's a lot of room in the frame timing.
The screen size is really not ideal here, yeah. The game's resolution was designed to scale nicely for modern 16:9 resolutions. Pandora's 800x480 is a nasty in-between. Uneven scaling factors are a
sin on pixel art and adapting the game to have a low enough resolution to do 2x on it doesn't quite work out in other parts of the game
(I know it seems like I'd be just fine in what I've uploaded)
So it looks like I can disable frameskipping on the 1 GHz Pandora.
...probably should've asked earlier, but what would be the easiest way to reliably detect that?
Edit: Going with checking if "/etc/powervr-esrev" is 5, which should be suitable for what I want to check.
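A hypothetical sketch of that check; the path and the value 5 come from the post above, and treating the file as a single bare number is an assumption:

```python
def is_gigahertz_pandora(path="/etc/powervr-esrev"):
    """Guess whether this is a 1 GHz Pandora by reading the SGX ES
    revision file; a missing or unreadable file is treated as a CC model."""
    try:
        with open(path) as f:
            return f.read().strip() == "5"
    except OSError:
        return False
```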
VSync set with SDL_GL_SetSwapInterval(1) doesn't really work. You need to do the "powervr.ini" trick to have a flicker-free swapbuffer.
SDL2 is built for both GL and GLES; because it finds a libGL, GLES is ignored unless you use special environment variables. To get a true ES1.1 context, use:
SDL_VIDEO_GLES2=1 SDL_VIDEO_GL_DRIVER=libGLES_CM.so
(yep, it's SDL_VIDEO_GLES2, even for GLES1.1)
Hi, as soon as GLES 1.1 is forced, creation of the window fails.
Optional settings aside, the executed context creation code in this case boils down to this:
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 5);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 6);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 5);
SDL_GL_SetAttribute(SDL_GL_BUFFER_SIZE, 16);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 0);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 1);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 1);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_ES);
SDL_CreateWindow(title, SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 800, 480, SDL_WINDOW_OPENGL | SDL_WINDOW_FULLSCREEN);
No error string from SDL, but it prints "SDL: Forcing GLES2" before window creation.
This works on other devices (e.g. Android or even Steam Link) and doesn't look much different from what's recommended on the wiki either. Am I missing something?
Edit: It's working after using the latest beta of the Code::Blocks PND. Whew.
Hello again,
I'd like to detect when the lid is being closed and opened. I've found "op_lidstate", but polling that is far less than ideal.
Any good options? I loosely remember reading before that it also acts as a key, but can't find any reference for that, nor does it actually generate a key event in SDL.
Nothing? Okay, not super important, so let's drop that then.
More importantly, I'm running into issues when running my game from a PND. It's running fine for the most part, except that all mouse input goes past the window to the desktop or whatever is behind
the fullscreen application and the game leaves black dirty rectangles on the screen after exiting. Any ideas? The exact same directory, but not inside a PND, runs without these issues.
Hope nobody spent too much time trying to help and then gave up or something... the fault lies with me, obviously. The main difference between running as a PND and directly was the different config
directory, and as it turned out, the game didn't run in fullscreen mode when running from the PND. Some default config shenanigans were wrong for the Pandora, and the fullscreen/windowed toggle is
removed from the settings in the game as it does not work anyway, so I didn't notice it from there.
The behavior in the broken windowed mode is still very weird to be honest, but oh well.
One step closer to release. I swear this isn't quite worth it, considering what I'm putting out.
Jan 5, 2008
Do you have a preferred method of launching the Dev command line from your PND while ssh'ing into the Pandora?
I have a local (slightly modified) copy of codeblocks's cbpnd_cli.sh that I launch using ". cbpnd_cli.sh"
Mixed superrationality does not beat pure in prisoner's dilemma
The prisoner's dilemma is probably one of the most famous toy games of game theorists. Two criminals, caught by the police and interrogated separately, are offered the following deal: If both
remain silent ("cooperate" with each other), both go to prison for $S$ ('short') years for small crimes that the police can prove. But if one prisoner admits the big crime ("defects"), he goes free and
the other spends $L$ ('long') years in prison. And if both admit the crime, they both face an $M$ ('middle') year sentence. To be a dilemma the sentences should obey $0<S<M<L$, and by picking an
appropriate normalisation of the unit of time, we can set $S=1$.
The standard (economist) analysis of the game goes as follows: I assume that the other prisoner has already made his decision. Then, no matter what he decided, I am better off by defecting: If he
cooperates, my choice is between going free and $S$ years, while if he is defecting I can choose between $M$ and $L$. So I defect, and he comes to the same conclusion, so we end up spending $M$ years
in prison. Both defecting is in fact a Nash equilibrium.
That's not too exciting, as we could do better by both cooperating and serving only $S$ years, which is Pareto optimal but unstable, because there is the temptation for each player to defect and then
go free. So much for the classic analysis of this game (not iterated), which is a model for many decision problems where one has to decide between a personal advantage and the global optimum.
I first learned about this game many many years ago, when still attending high school, from a Douglas Hofstadter column in the Scientific American. He makes the following observation: When defecting,
I am counting on the fact that the other prisoner is not as clever as me. It only pays if the situation is asymmetric. But since the other prisoner is faced with the same problem, he will come up
with the same solution so the asymmetric case of one player cooperating and the other defecting will not occur. Thus the only real possibilities are both cooperating (yielding $S$ years) and both
defecting (yielding $M$ years) of which the obvious better choice is to cooperate. Hofstadter calls this argument "superrational". It is the realization that in the analysis of the Nash equilibrium
the idea that my decision is independent of the other prisoner's decision might be wrong.
Then Hofstadter points out another version of this game: You receive a letter from a very rich person stating that she is studying human intelligence and she figured that you are one of the top ten
intelligent people in the world. She offers you (and also the other nine top-brainers) the following game: On the bottom of the letter is a coupon. You can either ignore the letter (in which case
nothing more will happen) or you write your name on the coupon and send it back. If out of the ten possible coupons she receives exactly one she gives the person who returned the coupon 100 Million
dollars. If any other number of coupons arrive until the end of this year nobody will receive any money. And as a warning: You are watched over by a number of private investigators. If they notice
you trying to find out who the other nine people are the whole thing is called off and again nobody will get any money. So don't even think about it.
This does not look very promising: Obviously, if you don't send in the coupon you won't get any money. So you have to send the coupon, but so will the other nine, and again you will receive nil. Too bad.
Well, unless you widen your strategy space and besides 'pure', deterministic strategies you also allow for 'mixed', i.e. probabilistic strategies. You could for example come up with the following
strategy: You roll dice and then send the coupon only with probability $p$. Let's see which $p$ optimizes your expectation, assuming the other nine players follow the same strategy: You only get the
money if you send the letter (probability $p$) and all nine others don't (probability $(1-p)^9$), so the expectation is $E=p(1-p)^9$. Setting to zero the $p$ derivative of $E$ gives $0=(1-p)^9-9p(1-p)^8
=(1-p)^8(1-10p)$, thus $p=1/10$. So you could prepare ten envelopes but only one with the coupon, and mail a random one of these to optimize your expectation.
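A quick numerical sanity check of this optimization (a throwaway sketch, not from the original post): maximizing $E(p)=p(1-p)^9$ over a fine grid recovers $p=1/10$.

```python
# E(p) = p * (1 - p)^9: the probability that I send the coupon while the
# other nine superrational players, using the same p, all do not.
ps = [i / 1000 for i in range(1001)]
best_p = max(ps, key=lambda p: p * (1 - p) ** 9)
print(best_p)  # 0.1
```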
But with this idea of taking into account also mixed strategies, we can go back to the prisoner's dilemma and see what happens when both players defect with probability $p$ (this is the new part of
the story I came up with this morning in the shower; of course, I do not claim any originality here). Then the expected number of years I spend in prison is $p^2M+Lp(1-p)+(1-p)^2$. A quick check: for
$p$ being 0 or 1 I get back the two deterministic values. So can I do better? Obviously, this is a quadratic function of $p$ going through $(0,1)$ and $(1,M)$. So it has its minimum in the interior of
the range $p\in[0,1]$ only if the slope at $p=0$ is negative (remember $M>S=1$). But the slope is $2(M-L+1)p+L-2$, which at $p=0$ equals $L-2$ and is thus positive as long as $L>2$, so the minimum sits at a
boundary, and since $E(0)=1<M=E(1)$ it sits at $p=0$. And $L>2$ is really the interesting parameter range for the game, since for $L<2$ it is better for both players to always alternate between
cooperate-defect and defect-cooperate, as the average sentence in the asymmetric case is then shorter than the one-year sentence of both cooperating. So, unless that is the case, always cooperating is
still a better symmetric strategy for superrational players than the probabilistic ones.
32.03 feet per second to kilometers per minute
This conversion of 32.03 feet per second to kilometers per minute has been calculated by multiplying 32.03 feet per second by 0.018288 (the number of kilometers per minute in one foot per second), and the result is approximately 0.5857 kilometers per minute.
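The conversion factor follows from 1 ft = 0.3048 m and 1 min = 60 s; a one-line check (the quoted 0.5857 truncates the last digit):

```python
# ft/s -> km/min: 0.3048 m per ft, times 60 s per min, divided by 1000 m per km
FT_PER_S_TO_KM_PER_MIN = 0.3048 * 60 / 1000  # = 0.018288
print(32.03 * FT_PER_S_TO_KM_PER_MIN)
```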
230-0203/02 – Mathematics III (BcM III)
Guarantor department: Department of Mathematics
Credits: 5
Subject guarantor: RNDr. Radomír Paláček, Ph.D.
Subject version guarantor: RNDr. Radomír Paláček, Ph.D.
Study level: undergraduate or graduate
Requirement: Compulsory
Year: 2
Semester: winter
Study language: Czech
Year of introduction: 2018/2019
Year of cancellation:
Intended for the faculties: FAST
Intended for study types: Bachelor
Teachers:
PAL39 RNDr. Radomír Paláček, Ph.D.
POS220 Ing. Lukáš Pospíšil, Ph.D.
Part-time: Credit and Examination, 16+0
Subject aims expressed by acquired skills and competences
The aim of the course is to provide a theoretical and practical foundation for understanding the basic terms of probability, to introduce the student to statistical thinking as a way of
understanding the processes and events around us, to acquaint him with the basic methods of gathering and analyzing statistical data, and to show how to use these general procedures in other
subjects of study and in practice. Graduates of this course should be able to:
• understand and use the basic terms of combinatorics and probability theory;
• formulate questions that can be answered by data, and learn the principles of data collection, processing and presentation;
• select and use appropriate statistical methods for data analysis;
• propose and evaluate conclusions (inferences) and predictions using the data.
Teaching methods
Individual consultations
Other activities
Combinatorics and probability. Random events, operations with them, sample space. Definitions of the probability of events: classical, geometrical, statistical. Conditional probability. Total
probability and independent events. Random variable and its characteristics. Basic types of probability distributions of discrete random variables. Basic types of probability distributions of
continuous random variables. Random vector, probability distribution, numerical characteristics. Statistical file with one factor. Grouped frequency distribution. Statistical file with two factors.
Regression and correlation. Random sample, point and interval estimations of parameters. Hypothesis testing. Functions of several variables: two-dimensional integrals, three-dimensional integrals,
line integrals of the first and the second kind. Probabilities of random events: axioms of probability, conditional probability, independence. Random variables: discrete random variables, continuous
random variables, expected values. Important practical distributions of discrete and continuous random variables.
Compulsory literature:
Kučera, Radek: Mathematics III, VŠB – TUO, Ostrava 2005, ISBN 80-248-0802-1 http://mdg.vsb.cz/portal/en/Statistics1.pdf
Recommended literature:
Kučera, Radek: Mathematics III, VŠB – TUO, Ostrava 2005,
ISBN 80-248-0802-1
Way of continuous check of knowledge in the course of semester
Passing the course, requirements
Course credit:
- participation in tutorials is obligatory
- elaborated programs
Point classification: 5-20 points.
Exam:
The practical part of the exam is classified by 0-60 points and is successful if the student obtains at least 25 points. The theoretical part is classified by 0-20 points and is successful if the
student obtains at least 5 points.
Point quantification:
100-86 points: excellent (1)
85-66 points: very good (2)
65-51 points: satisfactory (3)
50-0 points: failed (4)
List of theoretical questions:
1. Combinatorics.
2. Random events.
3. Probabilities of random events: classical, geometrical, statistical.
4. Conditional probability.
5. Composite probability.
6. Bernoulli sequence of independent random trials.
7. Bayes formula.
8. Discrete random variable.
9. Continuous random variable.
10. Probability mass and density function. Probability distribution function.
11. Characteristics of random variables.
12. Basic types of probability distributions of discrete random variables.
13. Basic types of probability distributions of continuous random variables.
14. Random vectors, their probability distributions and characteristics.
15. Processing of the statistical sample.
16. Random selection.
17. Point estimates.
18. Interval estimates.
19. Testing of hypotheses, parametric tests.
20. Testing of hypotheses, nonparametric tests.
21. Linear regression.
22. Least squares method.
http://www.studopory.vsb.cz
http://mdg.vsb.cz (in Czech language)
Other requirements
At least 70% attendance at the exercises. Absence, up to a maximum of 30%, must be excused and the apology must be accepted by the teacher (the teacher decides whether to recognize the reason for the absence).
Subject has no prerequisites.
Subject has no co-requisites.
Subject syllabus:
Syllabus of lecture: Combinatorics. Random events and their operations. Probabilities of random events - classical, geometrical, statistical. Conditional probability. Composite probability. Bernoulli sequence of independent random trials. Bayes formula. Discrete and continuous random variable. Probability mass and density function. Probability distribution function. Characteristics of random variables. Basic types of probability distributions of discrete and continuous random variables. Random vectors, their probability distributions and characteristics. Processing of the statistical sample. Random selection, point and interval estimates. Testing of hypothesis - parametrical and nonparametrical tests. Linear regression. Least square method.
Conditions for subject completion
Conditions for completion are defined only for particular subject version and form of study
Occurrence in study plans
2021/2022 (B3607) Civil Engineering K Czech Ostrava 2 Compulsory study plan
2020/2021 (B3607) Civil Engineering K Czech Ostrava 2 Compulsory study plan
2019/2020 (B3607) Civil Engineering K Czech Ostrava 2 Compulsory study plan
2018/2019 (B3607) Civil Engineering K Czech Ostrava 2 Compulsory study plan
Occurrence in special blocks
Assessment of instruction
|
{"url":"https://edison.sso.vsb.cz/cz.vsb.edison.edu.study.prepare.web/SubjectVersion.faces?version=230-0203/02&subjectBlockAssignmentId=405141&studyFormId=2&studyPlanId=22784&locale=en&back=true","timestamp":"2024-11-08T12:23:14Z","content_type":"application/xhtml+xml","content_length":"171160","record_id":"<urn:uuid:d62af8e0-8249-4f7e-82ce-939d5d82ea54>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00300.warc.gz"}
|
AVO Modeling in Seismic Processing and Interpretation Part III. Applications
AVO modeling plays an active role in three areas: new technology development, QC data processing, and assisting data interpretation. This paper attempts to discuss these issues, with emphasis on the
applications of AVO modeling in data processing and interpretation. Data modeling is introduced for its theoretical background and its applications in isotropic and anisotropic situations. In the
data processing side, we will focus on calibrations. Finally, some discussion is given on the applications of AVO modeling in interpretation with additional case studies.
AVO Data Modeling
In the pre-stack processing stage, the cmp gathers that are considered to have appropriate amplitude recovery, or to have gone through true amplitude processing, are modeled using AVO equations to solve for AVO attributes. This can be called AVO data modeling. Using the AVO equations introduced in Part I of this article, data modeling is implemented on the amplitudes at a given two-way time of a cmp gather. This is often implemented using angles of incidence for a linear fitting, or a surface fitting for cases where two variables, offset and azimuth for HTI media, are involved. AVO data
modeling is usually conducted using the least squares method (L2 norm) or other robust methods such as the L1 norm. The difference between the L2 and L1 norms is that L2 minimizes the squared deviation and L1 minimizes the absolute deviation of the data from a model. The L2 norm and L1 norm have the form

\(E_2 = \sum_{i=1}^{n} r_i^2\) and \(E_1 = \sum_{i=1}^{n} |r_i|\),

where \(r_i = d(x_i) - m(x_i)\) is the single data point residual, \(d(x_i)\) is the data, and \(m(x_i)\) represents an AVO equation fitted at a given offset or angle of incidence \(x_i\). Since the number of data points (offsets) n is always greater than the number of variables (AVO attributes) to be solved, this is an over-determined problem. Minimization of the norms results in the solution. For the L2 norm, the matrix form of the minimization is \(a = (A^T A)^{-1} A^T d\), where a is the AVO attribute vector, A is the angle-dependent coefficient matrix formed by an AVO equation corresponding to the angles of incidence, and d is the vector of data (amplitudes). For the L1 norm, the median of the coefficients of an AVO equation may be used in the minimization (Press et al., 1989). To stabilize the solution, constraints from rock physical relationships may be brought in. The solution often consists of two AVO attributes. Shuey’s equation, for example, yields intercept and gradient, and Fatti’s equation yields P- and S-reflectivities.
In amplitude fitting, the L2 norm is particularly appropriate when the data contain random noise. The L1 norm is considered robust when a small number of data points have deviant amplitudes, such as a
multiple cutting across a primary reflection. To demonstrate L1 norm and L2 norm in AVO attribute extraction, a synthetic data set with strong coherent interfering noise was used and is shown in
Figure 1. Fatti’s equation was used for amplitude fitting in this case. We can see that the L1 norm operation results in a better solution while the L2 norm solution is compromised by the spurious
data points.
Figure 1. Curve fitting using L1 and L2 norm.
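To make the two norms concrete, here is a minimal sketch (not the authors' code) that fits a two-term Shuey model R(θ) = A + B sin²θ to amplitudes containing one deviant point. The L2 solution uses the normal equations a = (AᵀA)⁻¹Aᵀd, and the L1 solution is approximated by iteratively reweighted least squares (IRLS); all numerical values are hypothetical.

```python
import numpy as np

# Two-term Shuey model: R(theta) = A + B * sin^2(theta).
theta = np.radians(np.arange(5, 41, 5))            # angles of incidence
G = np.column_stack([np.ones_like(theta), np.sin(theta) ** 2])
d = G @ np.array([0.10, -0.25])                    # intercept A, gradient B
d[3] += 0.15                                       # one deviant amplitude, e.g. a multiple

# L2 norm: ordinary least squares, a = (A^T A)^{-1} A^T d
a_l2, *_ = np.linalg.lstsq(G, d, rcond=None)

# L1 norm via IRLS: reweight each point by 1/|r_i| and re-solve
a_l1 = a_l2.copy()
for _ in range(100):
    r = d - G @ a_l1
    w = 1.0 / np.maximum(np.abs(r), 1e-8)          # clamp to avoid division by zero
    a_l1 = np.linalg.solve(G.T @ (w[:, None] * G), G.T @ (w * d))

print(np.round(a_l2, 3))   # pulled toward the spurious amplitude
print(np.round(a_l1, 3))   # close to [0.10, -0.25]
```

With seven clean points and one outlier, the L1 (IRLS) estimates essentially recover the true intercept and gradient, while the L2 estimates are biased by the spurious point, mirroring the behavior described above.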
To further demonstrate this, a synthetic gather with primary reflections and multiples was processed through L1 norm solution of Fatti’s equation. Figure 2a to 2d are primary-only gather, input cmp
gather, reconstructed gather from the extracted P- and S-reflectivities, and difference between Figure 2b and Figure 2c. As expected, the L1 norm solution does a good job in rejecting the large moveout multiples (the large moveout multiple is essentially gone from the reconstructed gather, Figure 2c). Some energy from the small moveout multiples labeled 1 and 2 is still present on the reconstructed gather. Ideally, the reconstructed gather should contain only primary signal (Figure 2c should look like 2a). Hence, multiple attenuation may be required before AVO attribute extraction.
Figure 2. Data modeling using L1 norm, a) primary-only gather, b) input gather for AVO extraction; c) reconstructed gather using P- and S-reflectivity; and d) difference between b) and c).
A real data example of AVO data modeling is shown in Figure 3. We can see that the input cmp gather (Figure 3a) contains random noise, linear coherent noise, and multiples. The input cmp gather
(Figure 3a) is modeled by Fatti’s equation with L1 norm operation. The P- and S-reflectivities were solved and used in constructing Figure 3b. As indicated, the Class III AVO is successfully modeled.
The random noise, linear coherent noise, and multiples are rejected and shown in Figure 3c. In practice, the difference gather is used to examine whether any reflections have been rejected due to poor
NMO or inappropriate processing.
Figure 3. Data modeling using Fatti’s equation and L1 norm on a cmp gather: a) input gather; b) reconstructed gather using P- and S - reflectivity; and c) difference between a) and b).
A second example is from a 3-D data set for studying AVO in a fractured reservoir. The fractured reservoir is considered as horizontal transverse isotropic (HTI) medium (Figure 4). For this case, L2
norm data modeling was performed based on Rüger’s equation (1996). To enhance the resolution and increase the stability of the data modeling, a surface fitting approach was taken to include all
traces in the calculation. By examining Figure 4c, we see that the reflection events have been successfully modeled since little primary energy leakage can be seen. Figure 4d is a calculated
theoretical amplitude surface that illustrates the amplitude variation with offset and azimuth. The resulting attributes in this AVOZ analysis are zero-offset reflectivity, fracture orientations, and
gradients parallel and perpendicular to the fracture orientations. The fracture density is calculated based on the gradients. Figure 4e gives the estimates of fracture orientation and fracture
density at a carbonate formation.
Figure 4. Data modeling using Rüger’s equation and L2 norm for a fractured reservoir: a) input cmp gather; b) reconstructed cmp gather using intercepts and gradients; c) difference between a) and b);
d) a theoretical amplitude surface; and e) fracture orientations and fracture density. Note that these gathers are sorted by offset, not by azimuth. Vertical ‘discontinuities’ of the amplitudes in b)
occur where there is a ‘jump’ in azimuth within the gather.
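As an illustration of how such a surface fit can work (a sketch in the spirit of the azimuthal analysis above, not Rüger's full equation), an azimuthal term of the form B_ani cos²(φ − φ_s) sin²θ can be linearized with cos²x = (1 + cos 2x)/2, so the intercept, gradients, and fracture orientation follow from a single least-squares solve over all offsets and azimuths. All parameter values below are hypothetical.

```python
import numpy as np

# Simplified azimuthal AVO: R(theta, phi) = A + [B_iso + B_ani*cos^2(phi - phi_s)]*sin^2(theta)
theta = np.radians(np.repeat([10, 20, 30], 12))         # three offset rings
phi = np.radians(np.tile(np.arange(0, 360, 30), 3))     # twelve azimuths each
A, B_iso, B_ani, phi_s = 0.10, -0.20, 0.08, np.radians(30)
R = A + (B_iso + B_ani * np.cos(phi - phi_s) ** 2) * np.sin(theta) ** 2

# Linearized design matrix: 1, sin^2, sin^2*cos(2*phi), sin^2*sin(2*phi)
s2 = np.sin(theta) ** 2
G = np.column_stack([np.ones_like(s2), s2, s2 * np.cos(2 * phi), s2 * np.sin(2 * phi)])
c, *_ = np.linalg.lstsq(G, R, rcond=None)

phi_s_est = 0.5 * np.arctan2(c[3], c[2])    # fracture orientation
B_ani_est = 2.0 * np.hypot(c[2], c[3])      # anisotropic gradient magnitude
print(np.degrees(phi_s_est), B_ani_est)
```

The recovered orientation and anisotropic gradient match the inputs exactly here because the synthetic data are noise-free; with real amplitudes the same solve returns least-squares estimates.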
Data Calibrations
Calibration on cmp gathers and output AVO attributes can be conducted to optimize AVO processing. It helps to answer questions such as: whether the cmp gathers are properly processed with an
amplitude friendly processing flow and parameters; whether phase, tuning, signal-to-noise ratio and other factors are influencing the solutions; and whether the correct impedance background is used
in elastic rock property inversion. Calibration is often implemented by using well logs, synthetic gathers, walkway VSP, and/or known relationships between AVO attributes or between rock properties.
Calibration can be performed locally at a cmp location, or globally on a data set.
Using synthetic cmp gather(s) to tie seismic often gives quick insight into the data. The AVO type and its variation are often determined in this stage. Further, since AVO modeling links seismic responses directly to rock properties, it helps to confirm or define reservoir conditions. One may perturb the well logs to represent possible reservoir conditions. For example, gas substitution may be performed on a wet well or vice versa. Other parameters that are often changed are porosity, reservoir thickness and lithology. Figure 5 shows an example in which a synthetic cmp gather ties to a recorded cmp gather. In the zone of interest, the AVO expression has similar character. We may therefore consider that the data has appropriate amplitude recovery.
Figure 5. Seismic cmp gather (left) ties to synthetic cmp gather (right).
Calibration at specific reflections may be performed. The amplitudes for a given event from both the actual seismic and the synthetic can be extracted and compared. Figure 6 shows an example in which
the seismic amplitudes from the top of a gas reservoir are compared with the synthetic gather. The Class I AVO anomaly with polarity reversal at the far offsets is confirmed as the AVO expression for
this reservoir.
Figure 6. Comparison of amplitudes from seismic and synthetic cmp gathers.
Further, calibration can be conducted on AVO attributes or inverted rock properties. P- and S-reflectivity synthetic may tie to stacked P- and S-reflectivity sections. It also can be conducted in
cross-plotting spaces such as P-reflectivity against S-reflectivity, and λρ against μρ. Figure 7 shows an example using well logs to calibrate inverted elastic rock properties for a gas charged
dolomite reservoir. In Figures 7a and 7b, the inverted elastic rock properties of the reservoir are highlighted in black squares. The overlain empirical relations are shale (solid black), water
saturated sand (solid blue), limestone (dashed black), dolomite (dashed green), and gas charged clean sand (red). We can see that the data points from the gas-charged reservoir are shifted towards
low λρ values and low λ/μ ratio. Figures 7c and 7d show the cross-plots of dipole sonic logs. The data from a gas charged dolomite reservoir are highlighted with red squares, and brine-saturated
porous dolomite with green squares. This comparison leads to an interpretation of a gas-charged dolomite based on the seismically-derived elastic rock properties (Figures 7a and 7b). In addition, the
porosity of the reservoir is similar to the porous dolomite that is highlighted by the green squares.
Figure 7. a) and b): elastic rock properties from inverted seismic; and c) and d): from well logs.
Global calibration implies a way to QC the entire data set. For example, the amplitude variation with offset within a time window can be calculated and compared to that from synthetic gathers.
Consequently, offset-variant scaling corrections may be applied to the data set. Calibration may also be conducted based on relationships of AVO attributes such as P- and S-reflectivities. Background
constraints that are used in data modeling and elastic rock property inversion may also be calibrated through this method.
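A minimal sketch of such an offset-variant correction, with hypothetical RMS amplitude picks standing in for windowed measurements from the data and the synthetic:

```python
import numpy as np

# RMS amplitude per offset bin, measured in a time window (hypothetical values)
synth_rms = np.array([1.00, 0.95, 0.88, 0.80, 0.70])          # modeled decay with offset
data_rms = synth_rms * np.array([1.4, 1.1, 1.0, 0.8, 0.6])    # mis-scaled real data

# One scalar per offset bin pulls the data toward the synthetic trend
scalars = synth_rms / data_rms
print(np.round(data_rms * scalars, 2))   # now matches the synthetic trend
```

In practice each scalar would be applied to every sample of the traces in its offset bin, and the picks would come from a window where the synthetic tie is trusted.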
AVO modeling assists interpretation on cmp gathers, AVO attributes, and inverted elastic rock properties. It helps in validating AVO responses and linking seismic expression to known reservoir
conditions. AVO modeling can increase confidence in interpretation and reduce risk in reservoir characterization as it provides independent information. We can use synthetics to identify AVO anomalies and determine AVO types on a cmp gather. Also, we can use synthetic pre-stack data to determine the S-wave information that is often ignored in conventional data processing. For instance, a strong
S-impedance contrast may exist for a reservoir even though P-impedance contrast is small. The derived AVO attributes such as fluid factor, and inverted elastic rock properties such as Vp/Vs ratio,
Poisson’s ratio, λρ and λ/μ ratio can be used to infer the fluid type in a reservoir.
Tuning may induce or mask an AVO anomaly. Synthetics with varied bandwidth or reservoir thickness may answer tuning questions. Special lithologies or lithological contrasts may generate AVO anomalies, and this can be verified by AVO modeling. Lithological complexity may also bring difficulties into interpretation. Tight streaks may manifest themselves as AVO anomalies and brighten in a stacked section. Coal, carbonate, and lithologies that do not follow the water-saturated trend of rock properties for clastics may complicate fluid stack anomalies. A lack of understanding of some seismic rock properties may prevent one from effectively exploring those types of reservoirs. Further, high clay content in sand may result in low gas saturation. This type of partial gas saturation may still have a relatively high Vp/Vs ratio, which contradicts traditional theories of partial gas saturation. Therefore, AVO modeling may provide opportunities for distinguishing partially gas saturated reservoirs based on the lithological effect on rock properties.
Figure 8 shows an ideal workflow in using AVO modeling to assist data processing and interpretation. We can see that AVO modeling workflow is the same as that of data processing. Therefore, seismic
data and synthetics can be compared in the stages of cmp gathers, AVO attributes and inverted elastic rock properties. In interpretation, the information from all three branches can be integrated.
The risk in reservoir characterization may thus be reduced since the interpretation is broadly based, involving understanding of seismic, rock physical properties and geology.
Figure 8. Workflow using AVO modeling in assisting data processing and interpretation.
Several AVO modeling examples for AVO interpretation have been given in Part 1 and Part 2 of this article (Li et al., 2003; Li et al. 2004). Figure 9 shows an example using AVO modeling to understand
interference of multiples and converted energy at the Wabamun dolomite porosity. The modeling was conducted using both 1) Zoeppritz modeling with ray-tracing, and 2) full wave elastic wave equation.
For the elastic modeling, two cases were modeled: a reservoir case (with porosity) and a non-reservoir case (without porosity). The observations that can be made from this study include: a) a Class III AVO anomaly (a trough brightening with offset) at the top of the reservoir is present in the Zoeppritz modeling but does not show in the elastic modeling; b) the elastic modeling shows that multiples and converted energy (which are not accounted for in a Zoeppritz model) interfere with the reflections from both the top and the base of the reservoir; and c) the difference between the reservoir case (Figure 9b) and the non-reservoir case (Figure 9c) indicates that it could be sufficient for differentiating the porosity case from the non-porosity case. Further, we may notice the
inter-bedded multiples and converted energy generated by the reservoir (Figure 9d). This study provides the information of wave propagation and interference. We may use it as a guide for attenuating
the coherent noise and performing amplitude recovery.
Figure 9. Zoeppritz modeling and elastic modeling for a reservoir in the Wabamun Formation in the WCSB: a) Zoeppritz modeling; b) elastic modeling with reservoir; c) elastic modeling with no
reservoir; and d) difference between reservoir and non-reservoir cases.
The second example is from a study on carbonate reservoirs (Li et al., 2003). In Figure 10a, AVO modeling shows that a gas charged dolomite reservoir produces a Class III AVO anomaly. This is
consistent with the AVO response in the cmp gather at the location of the producing well. We can see that at the tight well locations, a completely different AVO response is present. The information
provided by AVO modeling validates the information from seismic.
Figure 10. AVO modeling and interpretation for a gas charged dolomite reservoir, a) synthetic cmp gather, and b) stack section and cmp gathers at well locations.
This paper, the third part of AVO modeling in seismic processing and interpretation, demonstrates some applications of AVO modeling involving data processing and interpretation. The discussion of
data modeling provides an insight in AVO attribute extraction. Calibration using AVO modeling in data processing sheds some light on how to optimize AVO solution. Combined with rock physical property
analysis, petrophysical analysis, and geological information, AVO modeling provides useful information in interpretation and thus increases certainty in reservoir characterization.
The authors thank Core Laboratories Reservoir Technologies Division for supporting this work.
Li, Y., Downton, J., and Goodway, B., 2003, Recent application of AVO to carbonate reservoirs in the Western Canadian Sedimentary Basin, The Leading Edge, 22, 671-674.
Li, Y., Downton, J., and Xu, Y., 2003, AVO modeling in seismic processing and interpretation, Part 1: fundamentals, Recorder, 28 December, 43-52.
Li, Y., Downton, J., and Xu, Y., 2004, AVO modeling in seismic processing and interpretation, Part 2: methodologies, Recorder, 29 January, 36-42.
Press, W.H., Flannery, B.P., Teukolsky, S.A., and Vetterling, W.T., 1989, Numerical Recipes, The Art of Scientific Computing, Cambridge University Press.
Rüger, A., 1996, Reflection Coefficients and Azimuthal AVO Analysis in Anisotropic Media, Ph.D. Thesis, Colorado School of Mines.
|
{"url":"https://csegrecorder.com/articles/view/avo-modeling-in-seismic-processing-and-interpretation-part-iii","timestamp":"2024-11-02T04:49:20Z","content_type":"text/html","content_length":"39765","record_id":"<urn:uuid:caeb671e-041a-4b52-af37-4a1da7b2afbf>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00433.warc.gz"}
|
The Variational Ising Classifier (VIC) Algorithm for Coherently Contaminated Data
Part of Advances in Neural Information Processing Systems 17 (NIPS 2004)
Oliver Williams, Andrew Blake, Roberto Cipolla
There has been substantial progress in the past decade in the development of object classifiers for images, for example of faces, humans and vehicles. Here we address the problem of contaminations (e.g. occlusion, shadows) in test images which have not explicitly been encountered in training data. The Variational Ising Classifier (VIC) algorithm models contamination as a mask (a field of binary variables) with a strong spatial coherence prior. Variational inference is used to marginalize over contamination and obtain robust classification. In this way the VIC approach can turn a kernel classifier for clean data into one that can tolerate contamination, without any specific training on contaminated positives.
1 Introduction
Recent progress in discriminative object detection, especially for faces, has yielded good performance and efficiency [1, 2, 3, 4]. Such systems are capable of classifying those positives that can be
generalized from positive training data. This is restrictive in practice in that test data may contain distortions that take it outside the strict ambit of the training positives. One example would
be lighting changes (to a face) but this can be addressed reasonably effectively by a normalizing transformation applied to training and test images; doing so is common practice in face
classification. Other sorts of disruption are not so easily factored out. A prime example is partial occlusion.
The aim of this paper is to extend a classifier trained on clean positives to accept also partially occluded positives, without further training. The approach is to capture some of the regularity
inherent in a typical pattern of contamination, namely its spatial coherence. This can be thought of as extending the generalizing capability of a classifier to tolerate the sorts of image distortion
that occur as a result of contamination.
As done previously in one dimension, for image contours [5], the Variational Ising Classifier (VIC) models contamination explicitly as switches with a strong coherence prior in the form of an Ising model, but here over the full two-dimensional image array. In addition, the Ising model is loaded with a bias towards non-contamination. The aim is to incorporate these hidden contamination variables into a kernel classifier such as [1, 3]. In fact the Relevance Vector Machine (RVM) is particularly suitable [6] as it is explicitly probabilistic, so that contamination variables can be incorporated as a hidden layer of random variables.
Figure 1: The 2D Ising model is applied over a graph with edges e between neighbouring pixels (connected 4-wise).
Classification is done by marginalization over all possible configurations of the hidden variable array, and this is made tractable by variational (mean field) inference. The inference scheme makes use of "hallucination" to fill in parts of the object that are unobserved due to occlusion.
Results of VIC are given for face detection. First we show that the classifier performance is not significantly damaged by the inclusion of contamination variables. Then a contaminated test set is generated using real test images and computer-generated contaminations. Over this test data the VIC algorithm does indeed perform significantly better than a conventional classifier (similar to [4]). The hidden variable layer is shown to operate effectively, successfully inferring areas of contamination. Finally, inference of contamination is shown working on real images with real occlusions.
2 Bayesian modelling of contamination
Classification requires \(P(F \mid I)\), the posterior for the proposition F that an object is present given the image data intensity array I. This can be computed in terms of likelihoods

\(P(F \mid I) = \dfrac{P(I \mid F)\,P(F)}{P(I \mid F)\,P(F) + P(I \mid \bar F)\,P(\bar F)}\) (1)

so then the test \(P(F \mid I) > \tfrac{1}{2}\) becomes

\(\log P(I \mid F) - \log P(I \mid \bar F) > t\) (2)
where t is a prior-dependent threshold that controls the tradeoff between positive and negative classification errors. Suppose we are given a likelihood \(P(I \mid \alpha, F)\) for the presence of a face given contamination \(\alpha\), an array of binary "observation" variables corresponding to each pixel \(I_j\) of I, such that \(\alpha_j = 0\) indicates contamination at that pixel, whereas \(\alpha_j = 1\) indicates a successfully observed pixel. Then, in principle,

\(P(I \mid F) = \sum_{\alpha} P(I \mid \alpha, F)\,P(\alpha),\) (3)

(making the reasonable assumption \(P(\alpha \mid F) = P(\alpha)\), that the pattern of contamination is object independent) and similarly for \(\log P(I \mid \bar F)\). The marginalization itself is intractable, requiring a summation over all \(2^N\) possible configurations of \(\alpha\), for images with N pixels. Approximating that marginalization is dealt with in the next section. In the meantime, there are two other problems to deal with: specifying the prior \(P(\alpha)\); and specifying the likelihood under contamination \(P(I \mid \alpha, F)\) given only training data for the unoccluded object.
2.1 Prior over contaminations
The prior contains two terms: the first expresses the belief that contamination will occur in coherent regions of a subimage. This takes the form of an Ising model [7] with energy \(U_I(\alpha)\) that penalizes adjacent pixels which differ in their labelling (see Figure 1); the second term \(U_C\) biases generally against contamination a priori, and its balance with the first term is mediated by a constant \(\nu\). The total prior energy is then

\(U(\alpha) = U_I(\alpha) + U_C(\alpha) = \sum_e \left[1 - \delta(\alpha_{e_1} - \alpha_{e_2})\right] + \nu \sum_j \delta(\alpha_j),\) (4)

where \(\delta(x) = 1\) if \(x = 0\) and 0 otherwise, and \(e_1, e_2\) are the indices of the pixels at either end of edge e (Figure 1). The prior energy determines a probability via a temperature constant \(1/T_0\) [7]:

\(P(\alpha) \propto e^{-U(\alpha)/T_0} = e^{-U_I(\alpha)/T_0}\, e^{-U_C(\alpha)/T_0}\) (5)
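On a mask small enough to enumerate, this prior can be evaluated by brute force. The sketch below writes the mask as an array alpha (a zero entry meaning a contaminated pixel) and the bias constant as nu; both names and the numerical values of nu and T0 are assumptions for illustration, not values from the paper.

```python
import numpy as np
from itertools import product

def prior_energy(alpha, nu):
    # Ising smoothness term: count 4-connected neighbour pairs with differing labels
    mism = np.sum(alpha[:, 1:] != alpha[:, :-1]) + np.sum(alpha[1:, :] != alpha[:-1, :])
    # bias term: penalize each contaminated pixel (entry == 0)
    return mism + nu * np.sum(alpha == 0)

nu, T0 = 0.5, 1.0
# enumerate all 2^4 masks on a 2x2 "image" to normalize the Boltzmann distribution
masks = [np.array(bits).reshape(2, 2) for bits in product([0, 1], repeat=4)]
Z = sum(np.exp(-prior_energy(m, nu) / T0) for m in masks)

clean = np.ones((2, 2), dtype=int)       # fully observed: smooth and uncontaminated
print(prior_energy(clean, nu))           # 0: the minimum-energy configuration
print(np.exp(-prior_energy(clean, nu) / T0) / Z)   # hence the most probable mask
```

The same energy function scales to realistic image sizes; only the explicit normalization over all masks becomes intractable, which is exactly why the paper resorts to variational inference.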
2.2 Relevance vector machine
An unoccluded classifier \(P(F \mid I, \alpha = \mathbf{1})\) can be learned from training data using a Relevance Vector Machine (RVM) [6], trained on a database of frontal face and non-face images [8] (see Section 4 for details). The probabilistic properties of the RVM make it a good choice when (later) it comes to marginalising over \(\alpha\). For now we consider how to construct the likelihood itself. First the conventional, unoccluded case is considered, for which the posterior \(P(F \mid I)\) is learned from positive and negative examples. Kernel functions [9] are computed between a candidate image I and a subset of relevance vectors \(\{x_k\}\), retained from the training set. Gaussian kernels are used here to compute

\(y(I) = \sum_k w_k \exp\Big(-\sum_j (I_j - x_{kj})^2\Big),\) (6)

where \(w_k\) are learned weights, and \(x_{kj}\) is the jth pixel of the kth relevance vector. Then the posterior is computed via the logistic sigmoid function as

\(P(F \mid I, \alpha = \mathbf{1}) = \sigma(y(I)) = \dfrac{1}{1 + e^{-y(I)}},\) (7)

and finally the unoccluded data-likelihood would be

\(P(I \mid F, \alpha = \mathbf{1}) \propto \sigma(y(I))/P(F).\) (8)
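A toy evaluation of (6) and (7) on a four-pixel "image"; the relevance vectors, the weights, and the unit kernel width are all hypothetical stand-ins for quantities the RVM would learn from training data.

```python
import numpy as np

def rvm_posterior(I, relevance_vectors, weights):
    # Gaussian kernel (unit width) between image I and each relevance vector x_k
    k = np.exp(-np.sum((I[None, :] - relevance_vectors) ** 2, axis=1))
    y = weights @ k                      # y(I) = sum_k w_k K(I, x_k), eq (6)
    return 1.0 / (1.0 + np.exp(-y))      # logistic sigmoid, eq (7)

X = np.array([[0.9, 0.8, 0.7, 0.9],      # a "face-like" relevance vector
              [0.1, 0.2, 0.1, 0.0]])     # a "non-face" relevance vector
w = np.array([2.0, -2.0])                # learned weights (hypothetical)
print(rvm_posterior(np.array([0.9, 0.8, 0.7, 0.9]), X, w))   # high: face-like input
print(rvm_posterior(np.array([0.1, 0.2, 0.1, 0.0]), X, w))   # low: non-face input
```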
2.3 Hallucinating appearance
The aim now is to derive the occluded likelihood from the unoccluded case, where the contamination mask is known, without any further training. To do this, (8) must be extended to give \(P(I \mid F, \alpha)\) for arbitrary masks \(\alpha\), despite the fact that the pixels \(I_j\) from the object are not observed wherever \(\alpha_j = 0\). In principle one should take into account all possible (or at least probable) values for the occluded pixels. Here, for simplicity, a single fixed hallucination is substituted for occluded pixels, then we proceed as if those values had actually been observed. This gives

\(P(I \mid F, \alpha) \propto \sigma(\tilde y(I, \alpha))/P(F)\) (9)
|
{"url":"https://proceedings.nips.cc/paper_files/paper/2004/hash/a3d06db1f8c85b2837b4603a51834425-Abstract.html","timestamp":"2024-11-10T05:37:01Z","content_type":"text/html","content_length":"16823","record_id":"<urn:uuid:96d07751-d33a-4b4b-a449-9f7309becbbf>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00731.warc.gz"}
|
James Payor comments on Some constructions for proof-based cooperation without Löb
Something I’m now realizing, having written all these down: the core mechanism really does echo Löb’s theorem! Gah, maybe these are more like Löb than I thought.
(My whole hope was to generalize to things that Löb’s theorem doesn’t! And maybe these ideas still do, but my story for why has broken, and I’m now confused.)
As something to ponder on, let me show you how we can prove Löb’s theorem following the method of ideas #3 and #5:
• is assumed
• We consider the loop-cutter
• We verify that if activates then must be true:
• Then, can satisfy by finding the same proof.
• So activates, and is true.
In English:
• We have who is blocked on
• We introduce to the loop cutter , who will activate if activation provably leads to being true
• encounters the argument “if activates then is true, and this causes to activate”
• This satisfies ’s requirement for some , so becomes true.
|
{"url":"https://www.greaterwrong.com/posts/SBahPHStddcFJnyft/some-constructions-for-proof-based-cooperation-without-loeb/comment/LZy9E4YW9vPsRiAFz","timestamp":"2024-11-05T09:08:59Z","content_type":"text/html","content_length":"39351","record_id":"<urn:uuid:1ef363b5-ce67-4b0a-8ffc-ff1d687b8cbd>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00742.warc.gz"}
|
Math Problem Statement
what is the typical statement for ivt to prove it. because x is between blah blah
Math Problem Analysis
Mathematical Concepts
Intermediate Value Theorem
Continuity of Functions
f(c) = N
f(a) < N < f(b) or f(b) < N < f(a)
Intermediate Value Theorem
Suitable Grade Level
Grades 11-12, College level
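For reference, the typical full statement of the theorem, in standard textbook form assembled from the fragments listed above:

```latex
\textbf{Intermediate Value Theorem.}
Let $f$ be continuous on the closed interval $[a,b]$, and let $N$ be any number
strictly between $f(a)$ and $f(b)$, i.e.\ $f(a) < N < f(b)$ or $f(b) < N < f(a)$.
Then there exists a number $c \in (a,b)$ such that $f(c) = N$.
```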
|
{"url":"https://math.bot/q/intermediate-value-theorem-proof-FMOGXNzT","timestamp":"2024-11-05T23:49:15Z","content_type":"text/html","content_length":"85690","record_id":"<urn:uuid:5f2b8822-986b-471f-877e-786ff0a5e197>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00809.warc.gz"}
|
MathSciDoc: An Archive for Mathematician
We carry out the SYZ program for the local Calabi--Yau manifolds of type $\widetilde{A}$ by developing an equivariant SYZ theory for the toric Calabi--Yau manifolds of infinite-type. Mirror geometry
is shown to be expressed in terms of the Riemann theta functions and generating functions of open Gromov--Witten invariants, whose modular properties are found and studied in this article. Our work
also provides a mathematical justification for a mirror symmetry assertion of the physicists Hollowood--Iqbal--Vafa.
|
{"url":"https://archive.ymsc.tsinghua.edu.cn/pacm_category/0134?show=time&size=3&from=7&target=searchall","timestamp":"2024-11-12T00:21:49Z","content_type":"text/html","content_length":"60474","record_id":"<urn:uuid:3484f88a-6685-4a89-a6fa-dc2affc708ef>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00664.warc.gz"}
|
Overlapping confidence intervals or standard error intervals: What do they mean in terms of statistical significance?
1 October 2003 Overlapping confidence intervals or standard error intervals: What do they mean in terms of statistical significance?
Mark E. Payton, Matthew H. Greenstone, Nathaniel Schenker
We investigate the procedure of checking for overlap between confidence intervals or standard error intervals to draw conclusions regarding hypotheses about differences between population parameters.
Mathematical expressions and algebraic manipulations are given, and computer simulations are performed to assess the usefulness of confidence and standard error intervals in this manner. We make
recommendations for their use in situations in which standard tests of hypotheses do not exist. An example is given that tests this methodology for comparing effective dose levels in independent
probit regressions, an application that is also pertinent to derivations of LC[50]s for insect pathogens and of detectability half-lives for prey proteins or DNA sequences in predator gut analysis.
Scientists often express the results of experiments and observations by the use of means along with a measure of variability. For example, an insect ecologist or physiologist might have an experiment
involving a number of treatments, and a systematist might have a sample of morphological measurements of the same character from a series of species. The results of each treatment or set of
measurements are represented by a mean ± standard deviation or estimated standard error. Some refer to the interval using the estimated standard error, or standard deviation divided by the square root of the sample size, as a standard error interval. This approach is very useful in that it provides the reader with information regarding the measure of central tendency (mean) along with some idea of the
variability (standard error) realized in the experiment. However, researchers sometimes fall into a trap by trying to use such results as a substitute for a hypothesis test. When the mean ± the
estimated standard error for one treatment doesn't overlap with the corresponding interval for another treatment, the researcher might be tempted to conclude that the treatment means are different.
This is a dangerous practice since the error rate associated with this comparison is quite large, with differences between equal treatment means declared significant more often than desired (Payton
et al., 2000). Some will counter this problem by computing 95% confidence intervals and checking for overlap. However, this practice goes to the other extreme and creates extremely conservative
comparisons, making it difficult to detect significant differences in means.
Occasionally a situation arises in which a test for the equality of two population parameters is needed but none exists, or at least not one that is easily applied. An example of this is testing the
difference between coefficients of variation of random samples from two populations. This poses a unique testing problem since the technique for estimating the standard error associated with the
coefficient of variation is not widely known, and thus a measure of variability is often not available for performing a test. Tests for coefficients of variation do exist (e.g., Gupta and Ma, 1996;
Wilson and Payton, 2002), but they are somewhat complex and require specialized computer code that is not readily available. An approach one might take in this situation would be to calculate a
confidence interval for the coefficient of variation from each sample, then declare them significantly different if the intervals do not overlap (relatively straight-forward methods for calculating
confidence intervals for coefficients of variation are discussed in Vangel (1996) and Payton (1996)). The primary question becomes: What size of confidence interval should one set in this scenario to
assure that the resulting test is at an acceptable error rate, say, 5%?
Previous work on the topic of hypothesis testing includes Payton et al. (2000) and Schenker and Gentleman (2001), both of which explore the error rates observed when checking for overlap of standard
error bars or confidence intervals in a testing situation. Browne (1979) explored such use of these intervals in what he called “visual tests” and how they related to tests of means. Goldstein and
Healy (1995) proposed methodology that adjusted comparisons based on graphical representations of confidence intervals to attain a desired average type I error rate. We build on these articles and
explore further the examination of overlap between confidence intervals or standard error intervals in comparing two population parameters. We discuss adjustments to be made in the event such a
procedure needs to be used. We also extend this work to comparing lethal dose estimates and analogous response estimates from two independent probit regressions with the use of adjusted fiducial
limits, which has applications to insect pathology and arthropod predation studies.
Confidence intervals and corresponding adjustments for testing hypotheses
Let's consider the situation of having random samples from two normally distributed populations. Let \(\bar{Y}_1\) and \(\bar{Y}_2\) be the sample means and let \(S_1\) and \(S_2\) be the sample standard deviations calculated from these random samples of size \(n_1\) and \(n_2\). What we wish to do in this scenario is demonstrate the consequences of checking for overlap between unadjusted confidence intervals or standard error intervals to test hypotheses about the difference between two population means.

To calculate \((1-\alpha)100\%\) confidence intervals for the mean, the formula is

\(\bar{Y}_i \pm t_{\alpha/2,\, n_i-1}\,\frac{S_i}{\sqrt{n_i}}\)   (1)

This formula is calculated for the samples from both populations (i.e., for \(i = 1\) and \(2\)). We can calculate the probability that the two intervals will overlap. This involves creating a probability expression for the situation in which the upper confidence limit from either sample is contained within the confidence limits of the other sample. If you allow the variable \(O\) to denote the event that these intervals overlap, this expression is given by

\(P(O) = P\left(\lvert\bar{Y}_1-\bar{Y}_2\rvert \le t_{\alpha/2,\, n_1-1}\,\frac{S_1}{\sqrt{n_1}} + t_{\alpha/2,\, n_2-1}\,\frac{S_2}{\sqrt{n_2}}\right)\)   (2)

If \(n_1 = n_2 = n\), formula (2) simplifies to

\(P(O) = P\left(\frac{(\bar{Y}_1-\bar{Y}_2)^2}{(S_1^2+S_2^2)/n} \le t^2_{\alpha/2,\, n-1}\,\frac{(S_1+S_2)^2}{S_1^2+S_2^2}\right)\)   (3)

The details of the algebraic manipulation leading to the above formula are given in Payton et al. (2000). One should note that the \(F\) value arises by squaring the \(t\) value in the original formula. If the two populations being sampled are identical normal populations (i.e., same means and variances), the quantity \((\bar{Y}_1-\bar{Y}_2)^2/\left[(S_1^2+S_2^2)/n\right]\) can be modeled with the \(F\) distribution with 1 and \(n-1\) degrees of freedom. Therefore, the probability that the two intervals overlap can be denoted by

\(P(O) = P\left(F_{1,\, n-1} \le t^2_{\alpha/2,\, n-1}\,\frac{(S_1+S_2)^2}{S_1^2+S_2^2}\right)\)   (4)

A large-sample version of the above statement can be derived (again if one assumes that the two populations are the same):

\(P(O) \approx P\left(\lvert Z\rvert \le \sqrt{2}\, z_{\alpha/2}\right)\)   (5)

where \(z_{\alpha/2}\) is the upper \(100\alpha/2\) percentile of a standard normal variate (\(Z\)). The normal variate \(Z\) is used as the large-sample approximation for the square root of an \(F\)-distributed variate, and the parenthetical expression in (4) is replaced by the value 2 under the assumption of equality of population standard deviations. This result will illustrate the problem associated with checking for overlap between 95% confidence intervals as a testing device. If you set \(\alpha = 0.05\) and generate 95% confidence intervals, then the approximate probability of overlap can be calculated from expression (5) as

\(P(O) \approx P\left(\lvert Z\rvert \le \sqrt{2}\times 1.96\right) = P\left(\lvert Z\rvert \le 2.77\right) \approx 0.994\)

In other words, the 95% confidence intervals will overlap over 99% of the time. The consequences of using 95% confidence intervals should be evident. If you compare these intervals with the expectation of mimicking an \(\alpha = 0.05\) test, what you actually would be doing is performing a test with a much too conservative type I error rate. In other words, the 95% intervals are too wide, resulting in a procedure that declares differences at a proportion much less than the desired \(\alpha = 0.05\) rate.
We can make similar calculations regarding the use of standard error intervals, or intervals calculated by adding and subtracting the estimated standard error from the mean. Often researchers report
their results in this fashion, and many times they will place standard error bars on graphs or figures. The easy trap to fall into, however, is thinking that because standard error bars associated
with two means don't overlap, these means must be significantly different.
The large-sample probability of standard error intervals overlapping when the two populations are identical can be easily found by using expression (5) and replacing \(z_{\alpha/2}\) with 1. Therefore

\(P(O) \approx P\left(\lvert Z\rvert \le \sqrt{2}\right)\)   (6)

This probability is equal to 0.843. Thus, examining overlap between standard error intervals to test hypotheses regarding equality of means would be akin to performing a test with a type I error rate of about 15% or 16%.
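These two large-sample overlap probabilities are easy to verify numerically. The sketch below uses only the Python standard library to evaluate expression (5), once with the multiplier \(z_{\alpha/2}\) (95% confidence intervals) and once with the multiplier 1 (standard error intervals); the constant 1.959964 for \(z_{0.025}\) is supplied by hand to avoid external dependencies:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p_overlap(multiplier):
    """Large-sample P(|Z| <= sqrt(2) * multiplier), i.e., the probability
    that the two intervals overlap when the standard errors are equal."""
    return 2.0 * phi(sqrt(2.0) * multiplier) - 1.0

z_025 = 1.959964  # upper 2.5% point of the standard normal
print(round(p_overlap(z_025), 3))  # 95% confidence intervals: 0.994
print(round(p_overlap(1.0), 3))    # standard error intervals: 0.843
```

The two printed values reproduce the overlap probabilities quoted in the text.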
Schenker and Gentleman (2001) showed that for general estimation problems, the interval overlap method tends to be the most conservative when the (true) standard errors are equal. They found that for large samples, the Type I error rate when comparing the overlap of \(100(1-\gamma)\%\) confidence intervals is

\(2\left[1-\Phi\!\left(z_{\gamma/2}\,\frac{1+k}{\sqrt{1+k^2}}\right)\right]\)   (7)

where \(k\) is the ratio of standard errors and \(\Phi\) is the standard normal distribution function. An analogous expression for the case of estimating means was given in Goldstein and Healy (1995). Replacing \(k\) with the value of 1 (i.e., assuming the standard errors are equal) will yield a multiplier of \(\sqrt{2}\) for the \(z_{\gamma/2}\) value in the probability statement of (7), which corresponds to the value given in expression (5).
Tables 1 and 2, based on expression (7), illustrate the relationship of standard error ratios to the likelihood of confidence intervals or standard error intervals overlapping. The data illustrate
that the probabilities of overlap decrease as the standard errors become less homogeneous.
We can use equation (7) to guide us in adjusting the confidence limits for the intervals to achieve a more desirable error rate. For a given ratio of standard errors, k, setting equation (7) equal to
a desired error rate of α=0.05 and solving for γ yields the correct large-sample confidence level that should be used for the individual intervals. For example, assuming equal standard errors (k = 1)
yields γ = 0.166. In other words, if you wish to use confidence intervals to test equality of two parameters when the standard errors are approximately equal, you would want to use approximately 83%
or 84% confidence intervals. A similar suggestion for the case of estimating means was made in Goldstein and Healy (1995). The sizes of the individual confidence intervals necessary to perform a 0.05
test grow as the standard errors become less homogeneous, as illustrated in Table 3.
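The required confidence level for a given standard-error ratio can be computed directly by setting expression (7) equal to \(\alpha\) and solving for \(\gamma\). A sketch using only the standard library (the bisection inverse of \(\Phi\) is a convenience of this sketch, not part of the published method):

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def phi_inv(p, lo=-10.0, hi=10.0):
    """Invert the standard normal CDF by bisection."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def gamma_for_test(alpha=0.05, k=1.0):
    """Find gamma so that checking overlap of 100(1-gamma)% intervals
    gives a type I error rate of alpha, for standard-error ratio k
    (expression (7) set equal to alpha and solved for gamma)."""
    multiplier = (1.0 + k) / sqrt(1.0 + k * k)
    z_gamma_half = phi_inv(1.0 - alpha / 2.0) / multiplier
    return 2.0 * (1.0 - phi(z_gamma_half))

print(round(gamma_for_test(0.05, k=1.0), 3))  # 0.166 -> use ~83-84% intervals
```

With \(k = 1\) this recovers \(\gamma = 0.166\); as \(k\) moves away from 1, \(\gamma\) shrinks and the individual intervals must be made wider, matching Table 3.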
A researcher will rarely know the true ratio of standard errors. One might estimate it with sample values. Of course, the method of comparing intervals is most useful for cases in which estimates for standard errors are not available. A possible approximation to the ratio of standard errors could be the ratio of the square roots of the two sample sizes, since the standard error of an estimate tends to be inversely proportional to the square root of the sample size.
We performed a simulation study to illustrate the calculations given above and to see how well the large-sample results apply to situations with small to moderate samples. Ten thousand pairs of
independent random samples were generated from a standard normal distribution using PC SAS (SAS Inst., Cary, NC, 1996) Version 8.2. We varied the sample sizes from n = 5 to n = 50. Three intervals
were constructed for each random sample: mean ± estimated standard error, 95% and 84% confidence intervals for the mean.
Results of the computer simulation are given in Table 4. The columns of the table record the proportion of times that the intervals for the pairs of random samples overlap. For instance, in the case
where the sample size was 10, the proportion of the 10,000 iterations in which the two intervals constructed by the 95% confidence intervals overlapped was 0.995. The proportion of the 10,000 trials
in which the two 84% confidence intervals overlapped for the n = 10 case was 0.949.
These simulation results validate much of the work done in the previous section. In particular, we have demonstrated that examining the overlap of 95% confidence intervals to test hypotheses is much
too conservative. Likewise, using standard error intervals will produce the opposite effect. Another important outcome concerns the results of using the 84% confidence interval methodology (when the true
standard errors are equal). The adjusted intervals seem to work well for all sample sizes.
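A version of this simulation is easy to reproduce. The sketch below (plain Python, not SAS) draws pairs of samples from the same standard normal population and checks whether the two intervals overlap; for simplicity it uses normal-quantile multipliers (1.96 for the 95% intervals and roughly 1.39 for the adjusted 83-84% intervals) rather than exact t quantiles, so small-sample results will differ slightly from Table 4:

```python
import random
import statistics
from math import sqrt

def overlap_proportion(n, half_width_mult, trials=10000, seed=1):
    """Proportion of sample pairs (both drawn from N(0,1)) whose intervals
    mean +/- half_width_mult * (estimated standard error) overlap."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(0, 1) for _ in range(n)]
        gap = abs(statistics.fmean(a) - statistics.fmean(b))
        half_widths = half_width_mult * (statistics.stdev(a) + statistics.stdev(b)) / sqrt(n)
        if gap <= half_widths:
            hits += 1
    return hits / trials

# Multipliers: 1.0 ~ standard error bars; ~1.39 ~ adjusted 84% CI; 1.96 ~ 95% CI
for mult, label in [(1.0, "SE bars"), (1.39, "84% CI"), (1.96, "95% CI")]:
    print(label, overlap_proportion(n=30, half_width_mult=mult))
```

The three proportions should fall near 0.84, 0.95, and 0.99, mirroring the pattern in Table 4.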
Comparing effective dosages from independent probit regressions
Binary regression is useful in experiments in which the relationship of a response variable with two levels to a continuous explanatory variable is of interest. These are often referred to as
dose-response models. Sometimes researchers are interested in estimating the dose that is needed to produce a given probability. For example, what insecticide dose is needed to provide an estimated
probability of 0.95 for killing an insect? An estimate of this dose is important because using more than is needed could be unnecessarily harmful to the environment or to humans, livestock and
wildlife in the proximity of the application (Dailey et al., 1998; Flickinger et al., 1991). Using less than is needed won't accomplish the control that was desired and might result in the evolution
of resistance to the insecticide (Shufran et al., 1996; Rider et al., 1998), and insecticides may reduce natural enemy populations, thereby exacerbating problems of control (Basedow et al., 1985;
Matacham and Hawkes, 1985; Croft, 1990). Generally, that dose is referred to as an effective dose-95 or ED[95]. Two other analogous applications for such an analysis are the derivations of ED[50]s
for insect pathogens (e.g., Kariuki and McIntosh, 1999) and of detectability half-lives for prey proteins or DNA sequences in predator gut analysis (Greenstone and Hunt, 1993; Chen et al., 2000).
Confidence intervals, often referred to as fiducial limits or inverse confidence limits, can be calculated on effective dosages.
For insecticide trials, the ED is often called the lethal dose (LD). The probability of killing an insect given a specific dose is often estimated with probit regression (Ahmad et al. 2003; Smirle et
al. 2003). If there are two or more independent groups of insects, it may be of interest to estimate, say, the LD[90] for each with probit regression for the purpose of deciding which are the same.
One way to do this was provided by Robertson and Preisler (1992) which involved calculating a confidence interval for the ratio of LDs. The resulting confidence interval can then be used to test the
equality of the two LDs (i.e., if the value 1 is contained in the interval for the ratio, then the LDs are not significantly different). This procedure, though not difficult to perform, is not
available in standardized statistical software packages such as SAS. Thus researchers might be tempted to check the overlap of fiducial limits as a substitute for the procedure outlined in Robertson
and Preisler. The problem exists in this situation as it does in the case to test two means from a normal distribution. If the researcher uses 95% fiducial limits, then checking whether they overlap
will result in a very conservative test. What we wish to investigate here is whether fiducial limits for each population's LD[90] can be calculated in a way that will allow us to determine whether
the values are significantly different by whether or not the intervals resulting from these fiducial limits overlap.
Ironically, Robertson and Preisler (1992) suggest this very idea. They write “Many investigators have used a crude method to address this question. They compare lethal doses by examining their 95%
confidence limits. If the limits overlap, then the lethal doses do not differ significantly except under unusual circumstances.” They continue with an example using a fictitious scientist named Dr.
Maven. Dr. Maven wanted to compare the LD[90] for a parent generation to that of a second laboratory generation. Robertson and Preisler continue: “The 95% confidence limits of these LD[90]s do not
overlap, and Dr. Maven concludes that they probably differ significantly. However, the exact significance level for this procedure is not clear: it is not 5%.”
The fiducial limits that can be calculated on each effective dose can be used to perform the desired test. Suppose fiducial limits of some predetermined size (say (1−α)100%) were calculated for each
population. If the fiducial limits overlapped, then the two effective dosages would be declared not significantly different. If the limits did not overlap, then the effective dosages would be
declared significantly different. The primary issue at hand is to determine what α is needed for these limits to assure that the desired error levels for testing the LDs are attained. Up to now, SAS
was unable to provide anything other than 95% inverse confidence limits. Beginning with SAS Version 8.2, an option was made available for the MODEL statement in PROC PROBIT that allows the user to
calculate any fiducial limit he or she desires. This can be achieved by placing /ALPHA=value after the MODEL statement, where "value" is the decimal alpha level desired for the fiducial limit.
In order to assess the effectiveness of this proposed procedure, we performed a simulation study in PC SAS Version 8.2. The first objective is to find an appropriate level to set the fiducial limits
so that they give a 0.05 test. This was accomplished by generating 5000 pairs of independent sets of binary data, with equal sample sizes of 40, from the same population (probit intercept = 0 and
slope = 1). For each set of data, effective doses were calculated for the 50^th, 75^th, 90^th, and 99^th levels of probability. Fiducial limits were calculated using alpha values ranging from 0.05 to
0.20, and the number of pairs whose limits overlapped was noted. Robertson and Preisler's method was also performed for each pair to investigate how it performed. Table 5 presents the simulation
results for the proposed method. Note that a 0.95 probability of overlap occurs generally around α = 0.15−0.17, depending upon which effective dose is being tested. This is consistent with the
findings in the first section of this paper in which 83% or 84% confidence intervals were found to work well in the comparison of normal means. Table 6 presents the results of Robertson and
Preisler's ratio method for comparing LDs. One should note that, at least from this simulation, their method tends to reject too frequently when comparing LD[50]s, but seems to work well at the other
LDs exhibited.
An analysis of the powers of the proposed method using an adjusted fiducial alpha of 0.17 as compared to the ratio method presented in Robertson and Preisler is presented in Table 7. Different ratios
of slopes of two models were generated, and the probability of rejecting the hypothesis that the LDs were the same was calculated for each method. This was done for tests for LD[50], LD[90] and LD[99].
As can be seen in Table 7, the method of comparing fiducial limits is not as powerful as the ratio method. As the differences in slopes of the two probit regressions get larger (and hence, the
differences in LDs), the ratio method becomes more likely to detect these differences relative to the method of comparing fiducial limits.
Caution should be exercised when the results of an experiment are displayed with confidence or standard error intervals. Whether or not these intervals overlap does not imply the statistical
significance of the parameters of interest. If the researcher wishes to use confidence intervals to test hypotheses, it appears that when the standard errors are approximately equal, using 83% or 84%
size for the intervals will give an approximate α = 0.05 test. Theoretical results for large samples as well as simulation results for a variety of sample sizes show that using 95% confidence
intervals will give very conservative results, while using standard error intervals will give a test with high type I error rates. When applying this idea to test lethal doses or effective doses for
two independent probit regressions, with the two populations being the same under the null hypothesis and the sample sizes being equal, using 83% level for fiducial limits will approximate a 0.05
test. However, the ratio test provided in Robertson and Preisler (1992) should be used to test effective doses since it has been demonstrated to be a more powerful method of comparison.
We thank Kris Giles (Oklahoma State University) and Jim Throne (USDA-ARS) for reviews of the manuscript.
M. Ahmad, M. I. Arif, and I. Denholm. 2003. High resistance of field populations of the cotton aphid Aphis gossypii Glover (Homoptera: Aphididae) to pyrethroid insecticides in Pakistan. Journal of Economic Entomology 96:875–878.

T. H. Basedow, H. Rzehak, and K. Voss. 1985. Studies on the effect of deltamethrin on the numbers of epigeal predatory arthropods. Pesticide Science 16:325–332.

R. H. Browne. 1979. On visual assessment of the significance of a mean difference. Biometrics 35:657–665.

Y. Chen, K. L. Giles, M. E. Payton, and M. H. Greenstone. 2000. Identifying key cereal aphid predators by molecular gut analysis. Molecular Ecology 9:1887–1898.

B. A. Croft. 1990. Arthropod Biological Control Agents and Pesticides. John Wiley and Sons.

G. Dailey, P. Dasgupta, B. Bolin, P. Crosson, J. du Guerney, P. Ehrlich, C. Folke, A. M. Jansson, N. Kautsky, A. Kinzig, S. Levin, K-G. Mäler, P. Pinstrup-Anderson, D. Siniscalco, and B. Walker. 1998. Food production, population growth, and the environment. Science 281:1291–1292.

E. L. Flickinger, G. Juenger, T. J. Roffe, M. R. Smith, and R. J. Irwin. 1991. Poisoning of Canada geese in Texas by parathion sprayed for control of Russian wheat aphid. Journal of Wildlife Disease.

H. Goldstein and M. J. R. Healy. 1995. The graphical presentation of a collection of means. Journal of the Royal Statistical Society A 158:175–177.

M. H. Greenstone and J. H. Hunt. 1993. Determination of prey antigen half-life in Polistes metricus using a monoclonal antibody-based immunodot assay. Entomologia Experimentalis et Applicata 68:1–7.

R. C. Gupta and S. Ma. 1996. Testing the equality of the coefficient of variation in k normal populations. Communications in Statistics 25:115–132.

C. Kariuki and A. H. McIntosh. 1999. Infectivity studies of a new baculovirus isolate for the control of diamondback moth (Lepidoptera: Plutellidae). Journal of Economic Entomology 92:1093–1098.

E. J. Matacham and C. Hawkes. 1985. Field assessment of the effects of deltamethrin on polyphagous predators in winter wheat. Pesticide Science 16:317–320.

M. E. Payton. 1996. Confidence intervals for the coefficient of variation. Proceedings of the Kansas State University Conference on Applied Statistics in Agriculture 8:82–87.

M. E. Payton, A. E. Miller, and W. R. Raun. 2000. Testing statistical hypotheses using standard error bars and confidence intervals. Communications in Soil Science and Plant Analysis 31:547–552.

S. D. Rider, S. M. Dobesh-Beckman, and G. E. Wilde. 1998. Genetics of esterase mediated insecticide resistance in the aphid Schizaphis graminum. Heredity 81:14–19.

J. L. Robertson and H. K. Preisler. 1992. Pesticide Bioassays with Arthropods. CRC Press.

SAS Institute Inc. 1999. SAS/STAT User's Guide, Version 8, 4th Edition. SAS Institute.

N. Schenker and J. F. Gentleman. 2001. On judging the significance of differences by examining overlap between confidence intervals. The American Statistician 55:182–186.

R. A. Shufran, G. E. Wilde, and P. E. Sloderbeck. 1996. Description of three isozyme polymorphisms associated with insecticide resistance in greenbug (Homoptera: Aphididae) populations. Journal of Economic Entomology 89:46–50.

M. J. Smirle, D. T. Lowery, and C. L. Zurowski. 2003. Susceptibility of leafrollers (Lepidoptera: Tortricidae) from organic and conventional orchards to azinphosmethyl, Spinosa, and Bacillus thuringiensis. Journal of Economic Entomology 96:879–874.

M. G. Vangel. 1996. Confidence intervals for a normal coefficient of variation. The American Statistician 50:21–26.

C. A. Wilson and M. E. Payton. 2002. Modelling the coefficient of variation in factorial experiments. Communications in Statistics-Theory and Methods 31:463–476.
Table 1.
Large-sample probability of overlap of 95% confidence intervals under the null hypothesis
Table 2.
Large-sample probability of overlap of standard error intervals under the null hypothesis
Table 3.
Large-sample confidence levels of individual intervals that yield a probability of overlap of 0.95
Table 4.
Simulation results using two confidence intervals for the mean from the same normal population.
Table 5.
Simulation results using two inverse confidence intervals from probit regressions performed on the same population.
Table 6.
Simulation results using the ratio method to test LDs (Robertson & Preisler, 1992).
Table 7.
Simulation results comparing powers of ratio test to use of fiducial limits to test differences in LD 50s, LD 90s and LD 99s in probit regressions.
Mark E. Payton, Matthew H. Greenstone, and Nathaniel Schenker "Overlapping confidence intervals or standard error intervals: What do they mean in terms of statistical significance?," Journal of
Insect Science 3(34), 1-6, (1 October 2003). https://doi.org/10.1673/031.003.3401
Received: 20 June 2003; Accepted: 1 October 2003; Published: 1 October 2003
What is disciplined convex programming?
Convex programming unifies and generalizes least squares (LS), linear programming (LP), and quadratic programming (QP). It has received considerable attention recently for a number of reasons: its
attractive theoretical properties; the development of efficient, reliable numerical algorithms; and the discovery of a wide variety of applications in both scientific and non-scientific
fields. Courses devoted entirely to convex programming are available at Stanford and elsewhere.
For these reasons, convex programming has the potential to become a ubiquitous numerical technology alongside LS, LP, and QP. Nevertheless, there remains a significant impediment to its more
widespread adoption: the high level of expertise in both convex analysis and numerical algorithms required to use it. For potential users whose focus is the application, this prerequisite poses
a formidable barrier, especially if it is not yet certain that the outcome will be better than with other methods. We have developed a modeling methodology called disciplined convex programming with
the goal of lowering this barrier.
As its name suggests, disciplined convex programming imposes a set of conventions to follow when constructing problems. Compliant problems are called, appropriately, disciplined convex programs, or
DCPs. The conventions are simple and teachable, taken from basic principles of convex analysis, and inspired by the practices of experts who regularly study and apply convex optimization today. The
conventions do not limit generality; but they do allow the steps required to solve DCPs to be automated and enhanced. For instance, determining if an arbitrary mathematical program is convex is an
intractable task, but determining if that same problem is a DCP is straightforward. A number of common numerical methods for optimization can be adapted to solve DCPs. The conversion of DCPs to
solvable form can be fully automated, and the natural problem structure in DCPs can be exploited to improve performance.
Disciplined convex programming also provides a framework for collaboration between users with different levels of expertise. In short, disciplined convex programming allows applications-oriented
users to focus on modeling, and—as they would with LS, LP, and QP—to leave the underlying mathematical details to experts.
The following publications describe disciplined convex programming; each is available as a downloadable PDF.
Graph Implementations for Nonsmooth Convex Programs by M. Grant and S. Boyd. Recent Advances in Learning and Control (tribute to M. Vidyasagar), V. Blondel, S. Boyd, and H. Kimura, editors, Springer,
2008, pp. 95-110.
This paper is to be preferred over the next two for two reasons: one, it is considerably shorter; and two, it makes use of the current, public version of CVX as opposed to an earlier prototype.
Disciplined Convex Programming by M. Grant, S. Boyd, and Y. Ye. Chapter in Global Optimization: From Theory to Implementation, L. Liberti and N. Maculan, eds., in the book series Nonconvex
Optimization and Applications, Springer, 2006, pp. 155-210.
This paper is the first public presentation of disciplined convex programming and how it can be supported in modeling software. The discussion refers heavily to a never-released prototype of CVX,
our modeling software. The current version is considerably different than this prototype.
Disciplined Convex Programming by M. Grant. Ph.D. dissertation, Stanford University, 2005.
My dissertation covers the same material presented in the above paper, but expands the introduction and covers a number of more tedious but important details such as the conversion to solvable
form and the recovery of equivalent dual information. Like the paper, it refers to a never-released prototype of CVX. A full reading is recommended only as a cure for insomnia; but the
introduction provides a good overview of convex optimization, the practical challenges to solving convex problems, and our proposed solution.
What is a multiplication chart?
A multiplication chart is a table that helps you learn and remember multiplication facts. It shows the products of numbers when multiplied together. For example, if you want to know what 3 times 4
is, you find the number 3 on one side of the chart and the number 4 on the other side. Where the row and column meet, you will see the answer, which is 12. It's a helpful tool for practicing and
understanding multiplication.
What is the history of the multiplication chart?
The history of the multiplication chart goes back a long time. People have been using multiplication for thousands of years. The ancient Egyptians and Babylonians used methods to multiply numbers. In
China, around 2,200 years ago, they created early multiplication tables. The multiplication chart as we know it today became popular in schools in the 19th century. It has been a useful way to teach
children multiplication and help them with math.
Who should use a multiplication chart?
A multiplication chart is good for anyone who wants to learn or practice multiplication. It is very helpful for kids who are learning how to multiply numbers. Kids can use a multiplication chart to
learn how to multiply numbers. It helps them see patterns and remember the answers. For example, they can quickly find out that 2 times 3 is 6 by looking at the chart.
At what age should kids start using a multiplication chart?
Kids can start using a multiplication chart around the age of 7 or 8, when they begin learning multiplication in school. However, younger kids can also benefit from seeing the patterns and practicing
with the chart.
How do you use a multiplication chart?
To use a multiplication chart, click on the product you want to find. The related factors (multiplicands and multipliers) will change color along with the product, making it easy to see the
relationship between the numbers. Additionally, the multiplication formula will be displayed below.
Can a multiplication chart help with division?
Yes, a multiplication chart can help with division. By knowing the multiplication facts, you can use the chart to see the relationships between numbers and solve division problems more easily.
How to generate a dynamic multiplication chart?
To generate a dynamic multiplication chart, you can append "/2-10" to your domain URL. This will create a multiplication chart for the range of numbers from 2 to 10. Similarly, adding "/3-13" to the
domain will generate a multiplication chart for numbers 3 to 13. Users can modify the URL to dynamically generate charts for any range between 1 and 100. For example, "/5-20" will create a
multiplication chart for numbers 5 to 20. This feature allows users to explore multiplication tables for any desired range within the specified limits.
A Parallel Sorting Algorithm
What do we mean by a parallel sorting algorithm in a distributed-memory environment? What would its “input” be and what would its “output” be? The answers depend on where the keys are stored. We can start or finish with the keys distributed among the processes or assigned to a single process. In this section we’ll look at an algorithm that starts and finishes with the keys distributed among the processes. In Programming Assignment 3.8 we’ll look at an algorithm that finishes with the keys assigned to a single process.
If we have a total of n keys and p = comm_sz processes, our algorithm will start and finish with n/p keys assigned to each process. (As usual, we’ll assume n is evenly divisible by p.) At the start, there are no restrictions on which keys are assigned to which processes. However, when the algorithm terminates, the keys assigned to each process should be sorted in (say) increasing order, and if 0 ≤ q < r < p, then each key assigned to process q should be less than or equal to every key assigned to process r.
So if we lined up the keys according to process rank—keys from process 0 first, then keys from process 1, and so on—then the keys would be sorted in increasing order. For the sake of explicitness,
we’ll assume our keys are ordinary ints.
1. Some simple serial sorting algorithms
Before starting, let’s look at a couple of simple serial sorting algorithms. Perhaps the best known serial sorting algorithm is bubble sort (see Program 3.14). The array a stores the unsorted keys
when the function is called, and the sorted keys when the function returns. The number of keys in a is n. The algorithm proceeds by comparing the elements of the list a pairwise: a[0] is compared to
a[1], a[1] is compared to a[2], and so on. Whenever a pair is out of order, the entries are swapped, so in the first pass through the outer loop, when list_length = n, the largest value in the list
will be moved into a[n-1]. The next pass will ignore this last element and it will move the next-to-the-largest element into a[n-2]. Thus, as list_length decreases, successively more elements get
assigned to their final positions in the sorted list.
Program 3.14: Serial bubble sort
void Bubble_sort(
      int  a[]  /* in/out */,
      int  n    /* in     */) {
   int list_length, i, temp;

   for (list_length = n; list_length >= 2; list_length--)
      for (i = 0; i < list_length-1; i++)
         if (a[i] > a[i+1]) {
            temp = a[i];
            a[i] = a[i+1];
            a[i+1] = temp;
         }
}  /* Bubble_sort */
There isn’t much point in trying to parallelize this algorithm because of the inherently sequential ordering of the comparisons. To see this, suppose that a[i-1] = 9, a[i] = 5, and a[i+1] = 7. The algorithm will first compare 9 and 5 and swap them; it will then compare 9 and 7 and swap them, and we’ll have the sequence 5, 7, 9. If we try to do the comparisons out of order, that is, if we compare the 5 and 7 first and then compare the 9 and 5, we’ll wind up with the sequence 5, 9, 7. Therefore, the order in which the “compare-swaps” take place is essential to the correctness of the algorithm.
A variant of bubble sort known as odd-even transposition sort has considerably more opportunities for parallelism. The key idea is to “decouple” the compare-swaps. The algorithm consists of a
sequence of phases, of two different types. During even phases, compare-swaps are executed on the pairs
(a[0], a[1]), (a[2], a[3]), (a[4], a[5]), ...,
and during odd phases, compare-swaps are executed on the pairs
(a[1], a[2]), (a[3], a[4]), (a[5], a[6]), ...
Here’s a small example:
Start: 5, 9, 4, 3
Even phase: Compare-swap (5, 9) and (4, 3), getting the list 5, 9, 3, 4.
Odd phase: Compare-swap (9, 3), getting the list 5, 3, 9, 4.
Even phase: Compare-swap (5, 3) and (9, 4), getting the list 3, 5, 4, 9.
Odd phase: Compare-swap (5, 4), getting the list 3, 4, 5, 9.
This example required four phases to sort a four-element list. In general, it may require fewer phases, but the following theorem guarantees that we can sort a list of n elements in at most n phases:
Theorem. Suppose A is a list with n keys, and A is the input to the odd-even transposition sort algorithm. Then, after n phases A will be sorted.
Program 3.15 shows code for a serial odd-even transposition sort function.
Program 3.15: Serial odd-even transposition sort
void Odd_even_sort(
      int  a[]  /* in/out */,
      int  n    /* in     */) {
   int phase, i, temp;

   for (phase = 0; phase < n; phase++)
      if (phase % 2 == 0) { /* Even phase */
         for (i = 1; i < n; i += 2)
            if (a[i-1] > a[i]) {
               temp = a[i];
               a[i] = a[i-1];
               a[i-1] = temp;
            }
      } else { /* Odd phase */
         for (i = 1; i < n-1; i += 2)
            if (a[i] > a[i+1]) {
               temp = a[i];
               a[i] = a[i+1];
               a[i+1] = temp;
            }
      }
}  /* Odd_even_sort */
2. Parallel odd-even transposition sort
It should be clear that odd-even transposition sort has considerably more opportunities for parallelism than bubble sort, because all of the compare-swaps in a single phase can happen simultaneously. Let’s try to exploit this.
There are a number of possible ways to apply Foster’s methodology. Here’s one:
- Tasks: Determine the value of a[i] at the end of phase j.
- Communications: The task that’s determining the value of a[i] needs to communicate with either the task determining the value of a[i-1] or the task determining the value of a[i+1]. Also, the value of a[i] at the end of phase j needs to be available for determining the value of a[i] at the end of phase j + 1.
This is illustrated in Figure 3.12, where we’ve labeled the tasks determining the value of a[i] with a[i].
Now recall that when our sorting algorithm starts and finishes execution, each process is assigned n/p keys. In this case our aggregation and mapping are at least partially specified by the
description of the problem. Let’s look at two cases.
When n = p, Figure 3.12 makes it fairly clear how the algorithm should proceed. Depending on the phase, process i can send its current value, a[i], either to process i-1 or process i + 1. At the same
time, it should receive the value stored on process i-1 or process i + 1, respectively, and then decide which of the two values it should store as a[i] for the next phase.
However, it’s unlikely that we’ll actually want to apply the algorithm when n = p, since we’re unlikely to have more than a few hundred or a few thousand processors at our disposal, and sorting a few thousand values is usually a fairly trivial matter for a single processor. Furthermore, even if we do have access to thousands or even millions of processors, the added cost of sending and receiving a message for each compare-exchange will slow the program down so much that it will be useless. Remember that the cost of communication is usually much greater than the cost of “local” computation—for example, a compare-swap.
How should this be modified when each process is storing n/p > 1 elements? (Recall that we’re assuming that n is evenly divisible by p.) Let’s look at an example. Suppose we have p = 4 processes and n = 16 keys assigned, as shown in Table 3.8. In the first place, we can apply a fast serial sorting algorithm to the keys assigned to each process. For example, we can use the C library function qsort on each process to sort the local keys. Now if we had one element per process, 0 and 1 would exchange elements, and 2 and 3 would exchange. So let’s try this: Let’s have 0 and 1 exchange all their elements and 2 and 3 exchange all of theirs. Then it would seem natural for 0 to keep the four smaller elements and 1 to keep the larger. Similarly, 2 should keep the smaller and 3 the larger. This gives us the situation shown in the third row of the table. Once again, looking at the one element per process case, in phase 1, processes 1 and 2 exchange their elements and processes 0 and 3 are idle. If process 1 keeps the smaller and 2 the larger elements, we get the distribution shown in the fourth row. Continuing this process for two more phases results in a sorted list. That is, each process’ keys are stored in increasing order, and if q < r, then the keys assigned to process q are less than or equal to the keys assigned to process r.
In fact, our example illustrates the worst-case performance of this algorithm:
Theorem. If parallel odd-even transposition sort is run with p processes, then after p phases, the input list will be sorted.
The parallel algorithm is now clear:

Sort local keys;
for (phase = 0; phase < comm_sz; phase++) {
   partner = Compute_partner(phase, my_rank);
   if (I'm not idle) {
      Send my keys to partner;
      Receive keys from partner;
      if (my_rank < partner)
         Keep smaller keys;
      else
         Keep larger keys;
   }
}
However, there are some details that we need to clear up before we can convert the algorithm into an MPI program.
First, how do we compute the partner rank? And what is the partner rank when a process is idle? If the phase is even, then each odd-ranked process exchanges with my_rank - 1 and each even-ranked process exchanges with my_rank + 1. In odd phases, the calculations are reversed. However, these calculations can return some invalid ranks: if my_rank = 0 or my_rank = comm_sz - 1, the partner rank can be -1 or comm_sz. But when either partner = -1 or partner = comm_sz, the process should be idle. We can use the rank computed by Compute_partner to determine whether a process is idle:
if (phase % 2 == 0)            /* Even phase */
   if (my_rank % 2 != 0)       /* Odd rank  */
      partner = my_rank - 1;
   else                        /* Even rank */
      partner = my_rank + 1;
else                           /* Odd phase */
   if (my_rank % 2 != 0)       /* Odd rank  */
      partner = my_rank + 1;
   else                        /* Even rank */
      partner = my_rank - 1;
if (partner == -1 || partner == comm_sz)
   partner = MPI_PROC_NULL;
MPI_PROC_NULL is a constant defined by MPI. When it’s used as the source or destination rank in a point-to-point communication, no communication will take place and the call to the communication will simply return.
3. Safety in MPI programs
If a process is not idle, we might try to implement the communication with a call to
MPI_Send and a call to MPI_Recv:
MPI_Send(my_keys, n/comm_sz, MPI_INT, partner, 0, comm);
MPI_Recv(temp_keys, n/comm_sz, MPI_INT, partner, 0, comm,
      MPI_STATUS_IGNORE);
This, however, might result in the program’s hanging or crashing. Recall that the MPI standard allows MPI_Send to behave in two different ways: it can simply copy the message into an MPI-managed buffer and return, or it can block until the matching call to MPI_Recv starts. Furthermore, many implementations of MPI set a threshold at which the system switches from buffering to blocking. That is, messages that are relatively small will be buffered by MPI_Send, but for larger messages, it will block. If the MPI_Send executed by each process blocks, no process will be able to start executing a call to MPI_Recv, and the program will hang or deadlock; that is, each process is blocked waiting for an event that will never happen.

A program that relies on MPI-provided buffering is said to be unsafe. Such a program may run without problems for various sets of input, but it may hang or crash with other sets. If we use MPI_Send and MPI_Recv in this way, our program will be unsafe, and it’s likely that for small values of n the program will run without problems, while for larger values of n, it’s likely that it will hang or crash.
There are a couple of questions that arise here:
In general, how can we tell if a program is safe?
How can we modify the communication in the parallel odd-even sort program so that it is safe?
To answer the first question, we can use an alternative to MPI_Send defined by the MPI standard. It’s called MPI_Ssend. The extra “s” stands for synchronous, and MPI_Ssend is guaranteed to block until the matching receive starts. So, we can check whether a program is safe by replacing the calls to MPI_Send with calls to MPI_Ssend. If the program doesn’t hang or crash when it’s run with appropriate input and comm_sz, then the original program was safe. The arguments to MPI_Ssend are the same as the arguments to MPI_Send:
int MPI_Ssend(
      void*        msg_buf_p     /* in */,
      int          msg_size      /* in */,
      MPI_Datatype msg_type      /* in */,
      int          dest          /* in */,
      int          tag           /* in */,
      MPI_Comm     communicator  /* in */);
The answer to the second question is that the communication must be restructured. The most common cause of an unsafe program is multiple processes simultaneously first sending to each other and then receiving. Our exchanges with partners are one example. Another example is a “ring pass,” in which each process q sends to the process with rank q + 1, except that process comm_sz - 1 sends to 0:
MPI_Send(msg, size, MPI_INT, (my_rank+1) % comm_sz, 0, comm);
MPI_Recv(new_msg, size, MPI_INT, (my_rank+comm_sz-1) % comm_sz,
      0, comm, MPI_STATUS_IGNORE);
In both settings, we need to restructure the communications so that some of the processes receive before sending. For example, the preceding communications could be restructured as follows:
if (my_rank % 2 == 0) {
   MPI_Send(msg, size, MPI_INT, (my_rank+1) % comm_sz, 0, comm);
   MPI_Recv(new_msg, size, MPI_INT, (my_rank+comm_sz-1) % comm_sz,
         0, comm, MPI_STATUS_IGNORE);
} else {
   MPI_Recv(new_msg, size, MPI_INT, (my_rank+comm_sz-1) % comm_sz,
         0, comm, MPI_STATUS_IGNORE);
   MPI_Send(msg, size, MPI_INT, (my_rank+1) % comm_sz, 0, comm);
}
It’s fairly clear that this will work if comm_sz is even. If, say, comm_sz = 4, then processes 0 and 2 will first send to 1 and 3, respectively, while processes 1 and 3 will receive from 0 and 2, respectively. The roles are reversed for the next send-receive pairs: processes 1 and 3 will send to 2 and 0, respectively, while 2 and 0 will receive from 1 and 3.
However, it may not be clear that this scheme is also safe if comm_sz is odd (and greater than 1). Suppose, for example, that comm_sz = 5. Then, Figure 3.13 shows a possible sequence of events. The
solid arrows show a completed communication, and the dashed arrows show a communication waiting to complete.
MPI provides an alternative to scheduling the communications ourselves—we can call the function MPI_Sendrecv:
int MPI_Sendrecv(
      void*        send_buf_p     /* in  */,
      int          send_buf_size  /* in  */,
      MPI_Datatype send_buf_type  /* in  */,
      int          dest           /* in  */,
      int          send_tag       /* in  */,
      void*        recv_buf_p     /* out */,
      int          recv_buf_size  /* in  */,
      MPI_Datatype recv_buf_type  /* in  */,
      int          source         /* in  */,
      int          recv_tag       /* in  */,
      MPI_Comm     communicator   /* in  */,
      MPI_Status*  status_p       /* out */);

[Figure 3.13 caption: Safe communication with five processes]
This function carries out a blocking send and a receive in a single call. The dest and the source can be the same or different. What makes it especially useful is that the MPI implementation
schedules the communications so that the program won’t hang or crash. The complex code we used earlier—the code that checks whether the process rank is odd or even—can be replaced with a single call
to MPI_Sendrecv. If it happens that the send and the receive buffers should be the same, MPI provides the alternative:
int MPI_Sendrecv_replace(
      void*        buf_p         /* in/out */,
      int          buf_size      /* in     */,
      MPI_Datatype buf_type      /* in     */,
      int          dest          /* in     */,
      int          send_tag      /* in     */,
      int          source        /* in     */,
      int          recv_tag      /* in     */,
      MPI_Comm     communicator  /* in     */,
      MPI_Status*  status_p      /* out    */);
4. Final details of parallel odd-even sort
Recall that we had developed the following parallel odd-even transposition sort algorithm:
Sort local keys;
for (phase = 0; phase < comm_sz; phase++) {
   partner = Compute_partner(phase, my_rank);
   if (I'm not idle) {
      Send my keys to partner;
      Receive keys from partner;
      if (my_rank < partner)
         Keep smaller keys;
      else
         Keep larger keys;
   }
}
In light of our discussion of safety in MPI, it probably makes sense to implement the send and the receive with a single call to MPI_Sendrecv:

MPI_Sendrecv(my_keys, n/comm_sz, MPI_INT, partner, 0, recv_keys,
      n/comm_sz, MPI_INT, partner, 0, comm, MPI_STATUS_IGNORE);
It only remains to identify which keys we keep. Suppose for the moment that we want to keep the smaller keys. Then we want to keep the smallest n/p keys in a collection of 2n/p keys. An obvious
approach to doing this is to sort (using a serial sorting algorithm) the list of 2n/p keys and keep the first half of the list. However, sorting is a relatively expensive operation, and we can
exploit the fact that we already have two sorted lists of n/p keys to reduce the cost by merging the two lists into a single list. In fact, we can do even better, because we don’t need a fully
general merge: once we’ve found the smallest n/p keys, we can quit. See Program 3.16.
To get the largest n/p keys, we simply reverse the order of the merge, that is, start with local_n - 1 and work backwards through the arrays. A final improvement avoids copying the arrays and simply swaps pointers (see Exercise 3.28).
Run-times for the version of parallel odd-even sort with the “final improvement” are shown in Table 3.9. Note that if parallel odd-even sort is run on a single processor, it will use whatever serial
sorting algorithm we use to sort the local keys, so the times for a single process use serial quicksort, not serial odd-even sort, which would be much slower. We’ll take a closer look at these times
in Exercise 3.27.
Program 3.16: The Merge_low function in parallel odd-even transposition sort

void Merge_low(
      int  my_keys[],    /* in/out    */
      int  recv_keys[],  /* in        */
      int  temp_keys[],  /* scratch   */
      int  local_n       /* = n/p, in */) {
   int m_i, r_i, t_i;

   m_i = r_i = t_i = 0;
   while (t_i < local_n) {
      if (my_keys[m_i] <= recv_keys[r_i]) {
         temp_keys[t_i] = my_keys[m_i];
         t_i++; m_i++;
      } else {
         temp_keys[t_i] = recv_keys[r_i];
         t_i++; r_i++;
      }
   }

   for (m_i = 0; m_i < local_n; m_i++)
      my_keys[m_i] = temp_keys[m_i];
}  /* Merge_low */
ARR — Meaning and details
What is ARR?
Annual Recurring Revenue (ARR) is a metric used primarily by SaaS or subscription businesses that have term subscription agreements, representing the value of the contracted recurring revenue
components of your term subscriptions normalized to a one-year period. ARR is a subset of MRR (Monthly Recurring Revenue) and is used to measure the predictability and health of the revenue stream.
How is ARR calculated?
ARR is typically calculated by taking the monthly recurring revenue (MRR) and multiplying it by 12. For instance, if a company's MRR is $10,000, its ARR would be $10,000 x 12, or $120,000.
Practice Finding the Average, Median, Mode, and Range with Children
Finding the average (also called mean), median, mode and range can be fun! Here’s an activity I loved doing with my fourth graders year after year. We used the heights of players of an NBA team to
practice finding the mean, median, mode, and range.
Quick Math Review:
Average/Mean: When finding the mean (which is also called the average), you will add up a list of numbers and divide by the total number of data entries. Perhaps you want to figure out the average
height of an NBA team.
Median: the middle number (when you order from least to greatest or greatest to least). If there are an even number, you will have two middle numbers. Add them up and divide by 2 (a.k.a. find the
average). : )
Mode: the most common number in the list
Range: the difference between the highest and lowest number in the list
Outlier: a number that is much higher or much lower than all the other numbers in the list (an outlier can skew data, so it is good to know if there is one present)
Math Activity:
Basketball seems like a great sport to choose for this math activity during March Madness, but I'm choosing an NBA team (not a college one) to focus on. You can easily modify this for a team or sport
your child, or class, prefers. I’ll be using the example of the Golden State Warriors. If this math activity using a NBA team roster isn’t something your child would enjoy, select a list of data that
is of interest to him or her.
2. Find each player’s roster page on the NBA website (https://www.nba.com/) or find a team roster (https://www.basketball-reference.com/teams/GSW/2020.html or https://www.nba.com/warriors/roster for the Golden State Warriors). It will save time to use a team roster, but children love research, so you may want them to have the pleasure of looking up each player's height from their roster (or providing a print out) with your supervision (we know it is easy to get distracted).
3. Record the height of each player. Tip: Convert their heights from feet to inches only (I have found that it is much easier to work with inches only, but you can modify this activity to fit your
needs). There are 12 inches in a foot so multiply 6 and 12 to find that there are 72 inches in six feet (yes, NBA players are often at least six feet tall). Here’s the list of players for the Golden
State Warriors (as of March 9, 2020, using information gathered on March 9, 2020 at https://www.basketball-reference.com/teams/GSW/2020.html).
4. Find the average (also called the mean) by adding up the height of all players and dividing by the number of players.
84+73+81+75+78+77+81+78+76+74+82+78+78+79 = 1,094 inches

Now divide by the number of players, 14. 1,094/14 ≈ 78.1429 inches
5. Find the median by ordering the list of heights from least to greatest or greatest to least, then locate the middle number.
Example (in inches): 73, 74, 75, 76, 77, 78, 78, 78, 78, 79, 81, 81, 82, 84
Since there are 14 players, we have two middle numbers (78 and 78). Normally we would add the two numbers up and divide by two in order to find the average between them and call it the “middle
number.” In this case the two middle numbers are the same, so 78 is the middle number. You can still find the average of the two if you would like to practice: 78+78 = 156.
Now divide 156 by 2 = 78! Yay!
The median of the heights for the Golden State Warriors is 78 inches (a.k.a. 6 ft 6 in).
6. Find the mode by seeing which number occurs the most in the list. This is not difficult when you've already ordered the numbers in the set from least to greatest or greatest to least.
Example (in inches): 73, 74, 75, 76, 77, 78, 78, 78, 78, 79, 81, 81, 82, 84
Looking at our list of heights, we see that there are four players who are 78 inches tall and two who are 81 inches tall. The most frequent number is 78.
The mode of the heights for the Golden State Warriors is 78 inches.
7. To find the range of the numbers, look at the greatest and least numbers in the list. Subtract the least from the greatest.
84 inches – 73 inches = 11 inches
The range of the heights for the players is 11 inches.
I find this to be the most interesting piece of information when doing this project. The tallest and shortest player are less than a foot apart in height! They are all very tall.
8. To find an outlier look at the string of numbers you have been working with. Are there any that are much higher or lower than the others?
Example (in inches): 73, 74, 75, 76, 77, 78, 78, 78, 78, 79, 81, 81, 82, 84
All of the heights for the team are very close together. There is no outlier.
Math concepts should be engaging. When we present math with a NBA team roster to a basketball fan, hopefully review and practice are more exciting. There are countless ways we can help practice
finding the average, median, mode, and range with children.
Was incorporating basketball with practice in finding the mean, median, mode, and range helpful?
What else could your children find the average (mean) of?
How does working with a list of numbers children are interested in help make the concepts more enjoyable?
Fun Facts about the Warriors:
Yes, the Golden State Warriors were previously the Philadelphia Warriors. It seems fitting to focus on this team since they’ve been making history for a long time. March 2, 1962 is the day that Warrior Wilt Chamberlain scored the NBA-record 100 points in a legendary game (a 169-147 win against the New York Knicks; photo from https://sportsecyclopedia.com/nba/pwar/phlwarriorsshots.html#images-27)
More about Mean, Median, Mode, and Range
Let ⟨G, +⟩ be an abelian group. A ring R with additive group ⟨R, +⟩ isomorphic to ⟨G, +⟩ is a ring on G. G is nil (respectively, radical) if and only if R² = (0) (respectively, R is nilpotent) for all rings R on G. It is shown that G is a mixed radical group if and only if T is divisible and G∕T is radical, where T is the maximal torsion subgroup of G. Thus, the study of radical groups is reduced to the torsion free case. A
torsion free group G is of field type if and only if there exists a ring R on G such that Q ⊗ R is a field. It is shown that a torsion free group of finite rank is radical if and only if it has no
strongly indecomposable component of field type. It follows that finite direct sums of finite rank radical groups are radical. If G is torsion free, an element x ∈ G is of nil type if and only if the height vector h(x) = ⟨m_i⟩ is such that 0 < m_i < ∞ for infinitely many i. Multiplications on torsion free groups all of whose nonzero elements are of nil type are discussed under the assumption
of three chain conditions on the partially ordered set of types. Two special classes of rank two torsion free radical groups are characterized. An example is given of a torsion free radical group
homogeneous of non-nil type, and a simple condition is given for such a homogeneous group to be nonradical.
Mathematical Subject Classification 2000
Primary: 20K15
Received: 21 December 1970
Revised: 19 July 1971
Published: 1 January 1972
Introduction to casebase sampling
Analysis of the veteran dataset
The first example we discuss uses the well-known veteran dataset, which is part of the survival package. As we can see below, there is almost no censoring, and therefore we can get a good visual
representation of the survival function:
evtimes <- veteran$time[veteran$status == 1]
hist(evtimes, nclass = 30, main = '', xlab = 'Survival time (days)',
col = 'gray90', probability = TRUE)
tgrid <- seq(0, 1000, by = 10)
lines(tgrid, dexp(tgrid, rate = 1.0/mean(evtimes)),
lwd = 2, lty = 2, col = 'red')
As we can see, the empirical survival function resembles an exponential distribution.
We will first try to estimate the hazard function parametrically using some well-known regression routines. But first, we will reformat the data slightly.
veteran$prior <- factor(veteran$prior, levels = c(0, 10), labels = c("no","yes"))
veteran$celltype <- factor(veteran$celltype,
levels = c('large', 'squamous', 'smallcell', 'adeno'))
veteran$trt <- factor(veteran$trt, levels = c(1, 2), labels = c("standard", "test"))
Using the eha package, we can fit a Weibull form, with different values of the shape parameter. For shape = 1, we get an exponential distribution:
y <- with(veteran, Surv(time, status))
model1 <- weibreg(y ~ karno + diagtime + age + prior + celltype + trt,
data = veteran, shape = 1)
If we take shape = 0, the shape parameter is estimated along with the regression coefficients:
model2 <- weibreg(y ~ karno + diagtime + age + prior + celltype + trt,
data = veteran, shape = 0)
Finally, we can also fit a Cox proportional hazard:
model3 <- coxph(y ~ karno + diagtime + age + prior + celltype + trt,
data = veteran)
As we can see, all three models are significant, and they give similar information: karno and celltype are significant predictors, while treatment is not.
The method available in this package makes use of case-base sampling. That is, person-moments are randomly sampled across the entire follow-up time, with some moments corresponding to cases and others to controls. By sampling person-moments instead of individuals, we can then use logistic regression to fit smooth-in-time parametric hazard functions. See the previous section for more details.
# create popTime object
pt_veteran <- popTime(data = veteran)
# plot method for objects of class 'popTime'
Population-time plots are a useful way of visualizing the total follow-up experience, where individuals appear on the y-axis, and follow-up time on the x-axis; each individual’s follow-up time is
represented by a gray line segment. For convenience, we have ordered the patients according to their time-to-event, and each event is represented by a red dot. The censored observations (of which
there is only a few) correspond to the grey lines which do not end with a red dot.
Next, we use case-base sampling to fit a parametric hazard function via logistic regression. First, we will include time as a linear term; as noted above, this corresponds to a Gompertz hazard.
model4 <- fitSmoothHazard(status ~ time + karno + diagtime + age + prior +
celltype + trt, data = veteran, ratio = 100)
Since the output object from fitSmoothHazard inherits from the glm class, we see a familiar result when using the function summary. We can quickly visualize the conditional association between each predictor and the hazard function using the plot method for objects that are fit with fitSmoothHazard. Specifically, if \(x\) is the predictor of interest, \(h\) is the hazard function, and \(\mathbf{x_{-j}}\) the other predictors in the model, the conditional association plot represents the relationship \(f(x) = \mathbb{E}(h|x, \mathbf{x_{-j}})\). By default, the other terms in the model (\(\mathbf{x_{-j}}\)) are set to their median if the term is numeric or the most common category if the term is a factor. Further details of customizing these plots are given in the Plot Hazards and Hazard Ratios vignette.
The main purpose of fitting smooth hazard functions is that it is then relatively easy to compute absolute risks. For example, we can use the function absoluteRisk to compute the mean absolute risk
at 90 days, which can then be compared to the empirical measure.
absRisk4 <- absoluteRisk(object = model4, time = 90)
ftime <- veteran$time
mean(ftime <= 90)
We can also fit a Weibull hazard by using a logarithmic term for time:
model5 <- fitSmoothHazard(status ~ log(time) + karno + diagtime + age + prior +
celltype + trt, data = veteran, ratio = 100)
With case-base sampling, it is straightforward to fit a semi-parametric hazard function using splines, which can then be used to estimate the mean absolute risk.
# Fit a spline for time
model6 <- fitSmoothHazard(status ~ bs(time) + karno + diagtime + age + prior +
celltype + trt, data = veteran, ratio = 100)
str(absoluteRisk(object = model6, time = 90))
As we can see from the summary, there is little evidence that splines actually improve the fit. Moreover, we can see that estimated individual absolute risks are essentially the same when using
either a linear term or splines:
linearRisk <- absoluteRisk(object = model4, time = 90, newdata = veteran)
splineRisk <- absoluteRisk(object = model6, time = 90, newdata = veteran)
plot.default(linearRisk, splineRisk,
xlab = "Linear", ylab = "Splines", pch = 19)
abline(a = 0, b = 1, lty = 2, lwd = 2, col = 'red')
These last three models give similar information to the first three, i.e. the main predictors for the hazard are karno and celltype, with treatment being non-significant. Moreover, by explicitly including the time variable in the formula, we see that it is not significant; this is evidence that the true hazard is exponential.
Finally, we can look at the estimates of the coefficients for the Cox model, as well as the last three models (CB stands for “case-base”):
Cumulative Incidence Curves
Here we show how to calculate the cumulative incidence curves for a specific risk profile using the following equation:
\[ CI(x, t) = 1 - \exp\left[ - \int_0^t h(x, u) \,\textrm{d}u \right] \] where \( h(x, t) \) is the hazard function, \( t \) denotes the numerical value (number of units) of a point in prognostic/prospective time and \( x \) is the realization of the vector \( X \) of variates based on the patient’s profile and intervention (if any).
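The equation above can be evaluated numerically for any hazard function. Below is a minimal Python sketch (independent of the casebase package) that approximates the integral with the trapezoidal rule; with a constant hazard λ it should match the closed form 1 − exp(−λt):

```python
import math

def cumulative_incidence(hazard, t, steps=10_000):
    """Approximate CI(t) = 1 - exp(-integral_0^t h(u) du) with the trapezoidal rule."""
    du = t / steps
    values = [hazard(i * du) for i in range(steps + 1)]
    integral = du * (sum(values) - 0.5 * (values[0] + values[-1]))
    return 1.0 - math.exp(-integral)

# Constant hazard of 0.01 events/day: CI(90) = 1 - exp(-0.9) ≈ 0.5934
ci = cumulative_incidence(lambda u: 0.01, 90)
print(round(ci, 4))  # 0.5934
```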
We compare the cumulative incidence functions from the fully-parametric fit using case base sampling, with those from the Cox model:
# define a specific covariate profile
new_data <- data.frame(trt = "test",
celltype = "adeno",
karno = median(veteran$karno),
diagtime = median(veteran$diagtime),
age = median(veteran$age),
prior = "no")
# calculate cumulative incidence using casebase model
smooth_risk <- absoluteRisk(object = model4,
time = seq(0,300, 1),
newdata = new_data)
cols <- c("#8E063B","#023FA5")
# cumulative incidence function for the Cox model
plot(survfit(model3, newdata = new_data),
     xlab = "Days", ylab = "Cumulative Incidence (%)", fun = "event",
     xlim = c(0, 300), conf.int = F, col = cols[1],
     main = sprintf("Estimated Cumulative Incidence (risk) of Lung Cancer\ntrt = test, celltype = adeno, karno = %g,\ndiagtime = %g, age = %g, prior = no",
                    median(veteran$karno), median(veteran$diagtime), median(veteran$age)))
# add casebase curve with legend
plot(smooth_risk, add = TRUE, col = cols[2], gg = FALSE)
legend("bottomright",  # legend position assumed; adjust as needed
       legend = c("semi-parametric (Cox)", "parametric (casebase)"),
       col = cols,
       lty = c(1, 1),
       bg = "gray90")
Note that by default, absoluteRisk calculates the cumulative incidence. Alternatively, you can calculate the survival curve by specifying type = 'survival' in the call to absoluteRisk:
smooth_risk <- absoluteRisk(object = model4,
time = seq(0,300, 1),
newdata = new_data,
type = "survival")
plot(survfit(model3, newdata = new_data),
     xlab = "Days", ylab = "Survival Probability (%)",
     xlim = c(0, 300), conf.int = F, col = cols[1],
     main = sprintf("Estimated Survival Probability of Lung Cancer\ntrt = test, celltype = adeno, karno = %g,\ndiagtime = %g, age = %g, prior = no",
                    median(veteran$karno), median(veteran$diagtime), median(veteran$age)))
# add casebase curve with legend
plot(smooth_risk, add = TRUE, col = cols[2], gg = FALSE)
legend("bottomright",  # legend position assumed; adjust as needed
       legend = c("semi-parametric (Cox)", "parametric (casebase)"),
       col = cols,
       lty = c(1, 1),
       bg = "gray90")
Decomposition-integral: Unifying Choquet and the concave integrals
This paper introduces a novel approach to integrals with respect to capacities. Any random variable is decomposed as a combination of indicators. A prespecified set of collections of events indicates
which decompositions are allowed and which are not. Each allowable decomposition has a value determined by the capacity. The decomposition-integral of a random variable is defined as the highest of
these values. Thus, different sets of collections induce different decomposition-integrals. It turns out that this decomposition approach unifies well-known integrals, such as Choquet, the concave
and Riemann integral. Decomposition-integrals are investigated with respect to a few essential properties that emerge in economic contexts, such as concavity (uncertainty-aversion), monotonicity with
respect to stochastic dominance and translation-covariance. The paper characterizes the sets of collections that induce decomposition-integrals, which respect each of these properties.
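For reference, the decomposition-integral described here is commonly written as follows (notation assumed; \(\mathcal{D}\) is the prespecified set of allowable collections, \(\nu\) the capacity, and \(\mathbf{1}_{A}\) the indicator of event \(A\)):

```latex
I_{\mathcal{D}}(X) = \sup\Big\{ \sum_{i=1}^{k} a_i \,\nu(A_i) \;:\; \sum_{i=1}^{k} a_i \mathbf{1}_{A_i} \le X,\ a_i \ge 0,\ \{A_1,\dots,A_k\} \in \mathcal{D} \Big\}
```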
Funders and grant numbers:
• Google
• Inter-university center for Electronic Markets and Auctions
• Israel Science Foundation (grant 538/11)
• Capacity
• Choquet integral
• Concave integral
• Decision making
• Decomposition-integral
• Non-additive probability
50+ Important Data Structure MCQ for Class 12 CBSE Python
1. ___________________ is a way to represent data in memory.
a) Data Handling
b) Data Structure
c) Data Dumping
d) Data Collection
2. Python built-in data structures are
a) integer,float,string
b) list,tuple,dictionary,sets
c) math,pyplot
d) All of the above
3. Data structures can be of two types, namely ___________
a) Simple and Compound
b) Simple and Nested
c) Sequential and random
d) All of the above
4. Array or linear list comes under the category of______
a) Simple Data Structure
b) Compound Data Structure
c) random
d) None of these
5. Compound Data structure can be ______ & _______
a) Sequential and random
b) Simple & Nested
c) Linear & Non Linear
d) Simple and Linear
6. The examples of Linear Data Structures are
a) Stacks, Queues, Linked list
b) int, float, complex
c) Operators, tokens, punctuators
d) All of the above
7. Stacks follow ____________ order
a) FIFO (First In First Out )
b) LIFO (Last In First Out)
c) Random
d) All of the above
8. Queue follows____________ order
a) FIFO (First In First Out )
b) LIFO (Last In First Out)
c) Random
d) None of the above
9. Main Operations in Stacks are called
a) Insertion and deletion
b) append and insertion
c) Push and Pop
d) append and deletion
10. Main Operations in Queue are called
a) Insertion and deletion
b) append and insertion
c) Push and Pop
d) append and deletion
11. In Stack Insertion and deletion of an element is done at single end called ________
a) Start
b) Last
c) Top
d) Bottom
12. In stack we cannot insert an element in between the elements that are already inserted.
a) True
b) False
13. The process of visiting each element in any Data structure is termed as ____________
a) Visiting
b) Searching
c) Traversing
d) Movement
14. While implementing Stack using list when we want to delete element we must use pop function as__________
a) list.pop(pos)
b) list.pop(0)
c) list.pop()
d) list.push()
15. Arranging elements of a data structure in increasing or decreasing order is known as_________
a) Searching
b) Arrangement
c) Sorting
d) Indexing
16. Searching of any element in a data structure can be done in 2 ways _________ and ________
a) Sequential and random
b) linear and non linear
c) linear and binary
d) sequential and binary
17. _________ is an example of nonlinear data structure
a) Stack
b) Queue
c) Sorting
d) Tree
18. In a stack, if a user tries to remove an element from empty stack it is called _________
a) Underflow
b) Empty
c) Overflow
d) Garbage Collection
19. What is the value of the postfix expression 6 3 2 4 + – *
a) 1
b) 40
c) 74
d) -18
20. If the elements “A”, “B”, “C” and “D” are placed in a stack and are deleted one at a time, in what order will they be removed?
a) ABCD
b) DCBA
c) DCAB
d) ABDC
21. Which of the following data structure is linear type?
a) Stack
b) Array
c) Queue
d) All of the above
22. The postfix form of the expression (A+ B)*(C*D- E)*F / G is?
a) AB + CDE * – * F *G /
b) AB+ CD*E – FG /**
c) AB + CD* E – F **G /
d) AB + CD* E – *F *G /
23. The postfix form of A*B+C/D is?
a) *AB/CD+
b) AB*CD/+
c) A*BC+/D
d) ABCD+/*
24. Which of the following statement(s) about stack data structure is/are NOT correct?
a) Stack data structure can be implemented using linked list
b) New node can only be added at the top of the stack
c) Stack is the FIFO data structure
d) The last node at the bottom of the stack has a NULL link
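Several of the stack questions above (e.g. 14, 19, 22 and 23) come down to push/pop mechanics. Here is a short illustrative Python sketch (not part of the syllabus) that evaluates a postfix expression using a list as a stack:

```python
def eval_postfix(expr):
    """Evaluate a space-separated postfix expression with a list used as a stack."""
    stack = []
    for tok in expr.split():
        if tok in ("+", "-", "*", "/"):
            right = stack.pop()            # pop: last in, first out (LIFO)
            left = stack.pop()
            if tok == "+":
                stack.append(left + right)
            elif tok == "-":
                stack.append(left - right)
            elif tok == "*":
                stack.append(left * right)
            else:
                stack.append(left / right)
        else:
            stack.append(float(tok))       # push operand on top
    return stack.pop()

print(eval_postfix("6 3 2 4 + - *"))       # -18.0, matching question 19
```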
QR3.5.3 The Quantum Lottery
What decides where a photon hits a screen when it arrives? In quantum theory, the quantum wave defines the probability it will hit at any point but where it actually hits is a random choice from
those probabilities. The probabilities are exact but the actual hit point varies with no known physical cause.
Quantum theory calculates the probability a photon will hit a screen point as follows:
1. The wave equation describes how the photon cloud spreads through both slits.
2. Given two paths to a screen point, positive and negative wave values add to a net result.
3. The net amplitude squared is the probability the photon will physically exist at that point.
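As a concrete illustration of these three steps, here is a small Python sketch (toy geometry and arbitrary units, not from the original text) that sums complex amplitudes from the two slits at each screen point, squares the net amplitude to get probabilities, and draws a random hit point:

```python
import cmath
import random

# Toy two-slit geometry (arbitrary units): screen points, slit separation,
# slit-to-screen distance, and the wavenumber k = 2*pi / wavelength
wavelength = 0.1
k = 2 * cmath.pi / wavelength
screen = [s * 0.1 for s in range(-50, 51)]   # screen[50] is the centre, x = 0
slit_sep, distance = 3.0, 100.0

def net_amplitude(x):
    # Exact path lengths from each slit to screen point x; amplitudes add
    d1 = ((x - slit_sep / 2) ** 2 + distance ** 2) ** 0.5
    d2 = ((x + slit_sep / 2) ** 2 + distance ** 2) ** 0.5
    return cmath.exp(1j * k * d1) + cmath.exp(1j * k * d2)

# Step 3: the net amplitude squared gives the (unnormalized) hit probability
probs = [abs(net_amplitude(x)) ** 2 for x in screen]
total = sum(probs)
probs = [p / total for p in probs]

# Where a single photon "hits" is a random draw from these probabilities
hit = random.choices(screen, weights=probs, k=1)[0]
```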
Quantum theory then explains Young’s experiment as follows:
The photon quantum wave spreads through both slits, then its positive and negative values add or cancel at the screen to give interference that affects the probability of where it hits.
All this quantum activity is seen as entirely imaginary so it doesn’t really happen, but in quantum realism, there really is a quantum wave that really does generate physical events. If a quantum wave is a processing wave and a physical event is a node overload that restarts the server, what decides that? Servers have many clients, so a quantum server response to a client node reboot request could be one of two things:
1. Access. The server restarts its processing at that node, which denies all other nodes access to it and collapses the quantum wave. This then is a physical event.
2. No access. The server doesn’t respond as it is busy elsewhere so the node drops the process and carries on. This then was a potential physical event that didn’t happen.
Quantum collapse is random to us because it is a winner takes all lottery run by a quantum server we can’t observe. When many nodes reboot, the first to initiate a server restart locks out the others
and wins the prize of being the photon, leaving other instances to wither on the grid. It follows that screen nodes with more server access are more likely to reboot successfully.
Quantum theory defines its probabilities based on the square of the quantum wave amplitude because a quantum wave is a sine wave and the power of a sine wave is its amplitude squared. This power
defines the processing demand that determines access to the photon server. That positive and negative quantum amplitudes cancel locally is an expected efficiency. Nodes that access the server more
often have a greater probability to successfully reboot and host a physical event.
When many screen nodes overload at once, where a photon actually hits depends on server activity that is to us random, as quantum theory says. But quantum theory can deduce the probability of where a
photon hits from the square of the quantum wave amplitude at each point because the power of the quantum wave at a node defines its server access. Quantum realism derives what quantum theory declared
based on known data, so it describes Young’s experiment in server access terms as follows:
a. The photon processing wave spreads instances through both slits.
b. If they reach the same node by different paths, positive/negative values cancel or add.
c. When many screen nodes overload and reboot, the net quantum amplitude squared defines the probability of server access that results in a physical event.
In Young’s experiment, the photon server supports client instances that pass through both slits then interfere as they leave, even for a single photon. This interference alters the server access that
decides the probability a node overload will succeed. The first screen node to overload and restart the server is where the photon “hits”. If detectors are in both slits, both fire equally because
both have equal server access. If a detector is in one slit, it only fires half the time because the server is attending to instances going through the other slit half the time. Table 3.1 below
interprets Feynman’s summary of quantum mechanics (Feynman et al., 1977) p37-10 as a calculation of server access.
This model now answers questions like:
a. Does the photon go through both slits at once? Yes, photon instances go through both slits.
b. Does it arrive at one screen point? Yes, photon processing restarts at one screen node (point).
c. Did it take a particular path? Yes, the instance that caused the node reboot took a specific path.
d. Did it also take all other possible paths? Yes, other instances, now disbanded, took every path.
If quantum theory is literally true, a photon really is a “wave” that goes through both Young’s slits but it arrives at a screen point because a physical event is a server restart triggered by one
node. A photon as server processing never dies because it can be born again from any of its legion of instances. Quantum realism explains what physical realism cannot: how one photon can go through
both Young’s slits at once, interfere with itself, but still arrive at a single point on a screen. It can explain a mystery of light that has baffled scientists for centuries.
Table 3.1. Quantum theory as server access

1. Existence (quantum theory): The probability a quantum entity exists is the absolute square of its complex quantum amplitude value at a point in space.
   Restart (server access): The probability a quantum entity restarts a server in a physical event depends on node access, which is the absolute quantum amplitude squared.
2. Interference (quantum theory): If a quantum event can occur in two alternate ways, the positive and negative amplitudes combine, so they interfere.
   Combination (server access): If quantum processing can arrive at a node by alternate network paths, the positive and negative values combine, so they interfere.
3. Observation (quantum theory): Observing one path lets the other occur without interference, so the outcome probability is the simple sum of the alternatives, so the interference is lost.
   Interaction (server access): Interacting with a quantum wave on one path lets the other occur without interference, so the probability of either path occurring is the simple sum of the alternatives, so the interference is lost.
MA3354 DISCRETE MATHEMATICS Anna University Syllabus | College Guide
MA3354 DISCRETE MATHEMATICS Anna University Syllabus
MA3354 DISCRETE MATHEMATICS L T P C 3 1 0 4
• To extend student’s logical and mathematical maturity and ability to deal with abstraction.
• To introduce most of the basic terminologies used in computer science courses and application of ideas to solve practical problems.
• To understand the basic concepts of combinatorics and graph theory.
• To familiarize the applications of algebraic structures.
• To understand the concepts and significance of lattices and boolean algebra which are widely used in computer science and engineering.
UNIT I LOGIC AND PROOFS 9+3
Propositional logic – Propositional equivalences – Predicates and quantifiers – Nested quantifiers – Rules of inference – Introduction to proofs – Proof methods and strategy.
UNIT II COMBINATORICS 9+3
Mathematical induction – Strong induction and well ordering – The basics of counting – The pigeonhole principle – Permutations and combinations – Recurrence relations – Solving linear recurrence relations – Generating functions – Inclusion and exclusion principle and its applications.
UNIT III GRAPHS 9+3
Graphs and graph models – Graph terminology and special types of graphs – Matrix representation of graphs and graph isomorphism – Connectivity – Euler and Hamilton paths.
UNIT IV ALGEBRAIC STRUCTURES 9+3
Algebraic systems – Semi groups and monoids – Groups – Subgroups – Homomorphism’s – Normal subgroup and cosets – Lagrange’s theorem – Definitions and examples of Rings and Fields.
UNIT V LATTICES AND BOOLEAN ALGEBRA 9+3
Partial ordering – Posets – Lattices as posets – Properties of lattices – Lattices as algebraic systems – Sub lattices – Direct product and homomorphism – Some special lattices – Boolean algebra –
Sub Boolean Algebra – Boolean Homomorphism.
TOTAL: 60 PERIODS
At the end of the course, students would:
CO1:Have knowledge of the concepts needed to test the logic of a program.
CO2:Have an understanding in identifying structures on many levels.
CO3:Be aware of a class of functions which transform a finite set into another finite set which relates to input and output functions in computer science.
CO4:Be aware of the counting principles.
CO5:Be exposed to concepts and properties of algebraic structures such as groups, rings and fields.
1. Rosen. K.H., “Discrete Mathematics and its Applications”, 7th Edition, Tata McGraw Hill Pub. Co. Ltd., New Delhi, Special Indian Edition, 2017.
2. Tremblay. J.P. and Manohar. R, “Discrete Mathematical Structures with Applications to Computer Science”, Tata McGraw Hill Pub. Co. Ltd, New Delhi, 30th Reprint, 2011.
1. Grimaldi. R.P. “Discrete and Combinatorial Mathematics: An Applied Introduction”, 5thEdition, Pearson Education Asia, Delhi, 2013.
2. Koshy. T. “Discrete Mathematics with Applications”, Elsevier Publications, 2006.
3. Lipschutz. S. and Mark Lipson., “Discrete Mathematics”, Schaum’s Outlines, Tata McGraw Hill Pub. Co. Ltd., New Delhi, 3rd Edition, 2010.
Unscramble WAFFLER
How Many Words are in WAFFLER Unscramble?
By unscrambling letters waffler, our Word Unscrambler aka Scrabble Word Finder easily found 61 playable words in virtually every word scramble game!
Letter / Tile Values for WAFFLER
Below are the values for each of the letters/tiles in Scrabble. The letters in waffler combine for a total of 16 points (not including bonus squares)
• W [4]
• A [1]
• F [4]
• F [4]
• L [1]
• E [1]
• R [1]
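Using standard English Scrabble tile values (where R scores 1 point), the letter-by-letter sum can be checked with a couple of lines of Python:

```python
# Standard English Scrabble tile values for the letters in WAFFLER
tile_values = {"W": 4, "A": 1, "F": 4, "L": 1, "E": 1, "R": 1}

score = sum(tile_values[letter] for letter in "WAFFLER")
print(score)  # 16
```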
What do the Letters waffler Unscrambled Mean?
The unscrambled words with the most letters from WAFFLER word or letters are below along with the definitions.
• waffler () - Sorry, we do not have a definition for this word
On the deformation theory of discontinuous groups acting on solvable homogeneous spaces
Let $G$ be a Lie group, $H$ a closed subgroup of $G$ and $\Gamma$ a discontinuous group for the homogeneous space $\mathscr{X}=G/H$, which means that $\Gamma$ is a discrete subgroup of $G$ acting
properly discontinuously and fixed point freely on $\mathscr{X}$. The subject of the talk is to to deal with some questions related to the geometry of the parameter and the deformation spaces of the
action of $\Gamma$ on $\mathscr{X}$, when the group $G$ is solvable. The local rigidity conjecture in the nilpotent case and the analogue of the Selberg-Weil-Kobayashi rigidity Theorem in such
non-Riemannian setting is also discussed.
Submitted by Atanu Chaudhuri on Thu, 23/06/2016 - 19:15
The conventional approach to math problem solving relies heavily on manipulation of terms using low level mathematical constructs without using the problem solving abilities of the student. Following
only this approach to solving problems, students may tend to become used to mechanical and procedural thinking suppressing their inherent creative and innovative out-of-the-box thinking abilities...
Turning data from Python into SPSS data
I've shown how you can grab data from SPSS and use it in Python commands, and I figured a post about the opposite process (taking data in Python and turning it into an SPSS data file) would be useful. A few different motivating examples are:
• Creating a set of permutations using the itertools Python library (see 1 and 2 for examples)
• Identifying isolated neighborhoods in a set of network data and returning those to an SPSS data file
• Grabbing items from the Google Places API
So first as a simple illustration, lets make a set of simple data in Python as a list of lists.
BEGIN PROGRAM Python.
MyData = [(1,2,'A'),(4,5,'B'),(7,8,'C')]
END PROGRAM.
Now to export this data into SPSS you can start a data step with spss.StartDataStep(), create a dataset with spss.Dataset, append variables using the dataset's varlist.append method, and then add cases using cases.append (see the Python programming PDF that comes with SPSS in the help to peruse all of these functions plus the documentation). This particular code adds in 3 variables (two numeric and one string) and then loops through the MyData python object and adds those cases to the defined SPSS dataset.

BEGIN PROGRAM Python.
import spss
spss.StartDataStep() #start the data step
MyDatasetObj = spss.Dataset(name=None) #define the data object
MyDatasetObj.varlist.append('X1',0) #add in 3 variables
MyDatasetObj.varlist.append('X2',0) #second numeric variable (names X2, X3 assumed)
MyDatasetObj.varlist.append('X3',1) #string variable of length 1
for i in MyData: #add cases in a loop
    MyDatasetObj.cases.append(i)
spss.EndDataStep()
END PROGRAM.
Here this will create an SPSS dataset and give it a generic name of the form DataSet? where ? will be an incrementing number based on the session history of naming datasets. To specify the name beforehand you need to use the DATASET DECLARE command and then place the dataset name as the name option in the spss.Dataset function.
As linked above I have had to do this a few times from Python objects, so I decided to make a bit of a simpler SPSS function to take care of this work for me.
BEGIN PROGRAM Python.
#Export to SPSS dataset function
import spss
def SPSSData(data,vars,types,name=None):
    VarDict = zip(vars,types) #combining variables and formats into tuples
    spss.StartDataStep()
    datasetObj = spss.Dataset(name=name) #if you give a name, it needs to be declared beforehand
    for i in VarDict: #appending variables to dataset
        datasetObj.varlist.append(i[0],i[1])
    for j in data: #now the data
        datasetObj.cases.append(list(j))
    spss.EndDataStep()
END PROGRAM.

This code takes an arbitrary Python object (data), and two lists, one of the SPSS variable names and the other of the format for the SPSS variables (either 0 for numeric or an integer for the size of the strings). To transform the data to SPSS, it needs a list of the same dimension as the variables you have defined, so this works for any Python object that can be iterated over and that can be coerced to returning a list. Or more simply, if each item it yields can be coerced to a list of the same dimension as the variables you defined, you can pass the object to this function. This won't work for all situations, but will for quite a few.
So with the permutation examples I previously linked to, we can use the itertools library to create a set of all the different permutations of a string. Then I define a set of variables and formats as lists, and then we can use the SPSSData function I created to make a new dataset.

DATASET DECLARE Combo.
BEGIN PROGRAM Python.
import itertools
YourSet = 'ABC'
YourLen = 3
x = itertools.permutations(YourSet,YourLen)
v = ['X1','X2','X3']
t = [1,1,1]
SPSSData(data=x,vars=v,types=t,name='Combo')
END PROGRAM.
This work flow is not optimal if you are creating the data in a loop (such as in the Google Places API example I linked to earlier), but works well for static python objects, such as the object
returned by itertools.
Tue January 23, 2018 08:14 PM
You would either need to pipe the results from the console to a text file, or save the results via SPSS (such as exporting a chart to a PNG file, or saving the output spv file). If you can be more
specific about what you are doing than I can give better advice.
Mon January 22, 2018 02:03 AM
Excuse me, I want to ask a question. Recently, I write a python script to use SPSS, but how can I save the result? it is only outputed in the console. Thank you.
Differential geometry

Differential geometry is basically the study of Riemannian geometry. It has many applications in physics, especially in the theory of relativity. The central objects of study are Riemannian manifolds, geometrical objects such as surfaces which locally look like Euclidean space and therefore allow the definition of analytical concepts such as tangent vectors and tangent space, differentiability, and vector and tensor fields. The manifolds are equipped with a metric, which introduces geometry because it allows one to measure distances and angles locally and to define concepts such as geodesics, curvature and torsion.
[GAP Forum] Factorizing polynomials over GF(2^m)?
Jaco Versfeld Jaco.Versfeld at wits.ac.za
Fri Jun 5 15:53:42 BST 2015
Thank you very much.
From: Alexander Hulpke [hulpke at fastmail.fm]
Sent: 05 June 2015 03:28 PM
To: Jaco Versfeld
Cc: forum at gap-system.org
Subject: Re: [GAP Forum] Factorizing polynomials over GF(2^m)?
Dear GAP Forum,
> On Jun 5, 2015, at 7:19 AM, Jaco Versfeld <Jaco.Versfeld at wits.ac.za> wrote:
> I want to factor polynomials over GF(2^m). As a quick test, I did the following:
> R:=PolynomialRing(GF(8),["x"]);
> x:=Indeterminate(GF(8),"x");
> p := x^7 + 1;
> Factors(p);
> The result that I obtain is:
> [ x+Z(2)^0, x^3+x+Z(2)^0, x^3+x^2+Z(2)^0 ]
> This doesn't make sense, since I expected (x-\alpha^0), (x-\alpha^1) ... (x-\alpha^6) to have been the roots.
Polynomials do not carry the actual ring, but only the characteristic and get factored over their coefficient rings. To factor over GF(8), specify the polynomial ring, i.e.
gap> Factors(R,p);
[ x+Z(2)^0, x+Z(2^3), x+Z(2^3)^2, x+Z(2^3)^3, x+Z(2^3)^4, x+Z(2^3)^5, x+Z(2^3)^6 ]
Alexander Hulpke
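The factorization above can be sanity-checked outside GAP: in GF(2^3) every nonzero element z satisfies z^7 = 1, so in characteristic 2 each is a root of x^7 + 1, giving the seven linear factors x + Z(2^3)^i. A small self-contained Python model of GF(8), using the (assumed but standard) primitive polynomial x^3 + x + 1, verifies this:

```python
# GF(8) modeled as 3-bit polynomials over GF(2), reduced modulo the
# primitive polynomial x^3 + x + 1 (an assumed but standard choice).
MOD = 0b1011  # x^3 + x + 1

def gf8_mul(a, b):
    """Carry-less multiplication in GF(8) with polynomial reduction."""
    result = 0
    for i in range(3):                 # schoolbook multiply over GF(2)
        if (b >> i) & 1:
            result ^= a << i
    for shift in (2, 1, 0):            # reduce any term of degree >= 3
        if result & (1 << (3 + shift)):
            result ^= MOD << shift
    return result

def gf8_pow(a, n):
    acc = 1
    for _ in range(n):
        acc = gf8_mul(acc, a)
    return acc

alpha = 0b010                          # the class of x, a generator of GF(8)*
roots = [gf8_pow(alpha, i) for i in range(7)]

# The seven powers alpha^0..alpha^6 exhaust the nonzero elements, and each
# satisfies z^7 = 1, i.e. z^7 + 1 = 0 in characteristic 2.
assert sorted(roots) == list(range(1, 8))
assert all(gf8_pow(z, 7) == 1 for z in roots)
```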
[QSMS Symplectic geometry seminar 2023-10-24] Linear bounds on rho-invariants and simplicial complexity of manifolds
• Date: 2023-10-24 (Tue) 11:00 ~ 12:00
• Place: 27-116 (SNU)
• Speaker: Geunho Lim (Einstein Institute of Mathematics, Hebrew University of Jerusalem)
• Title: Linear bounds on rho-invariants and simplicial complexity of manifolds
• Abstract: Using L^2 cohomology, Cheeger and Gromov define the L^2 rho-invariant on manifolds with arbitrary fundamental groups, as a generalization of the Atiyah-Singer rho-invariant. There are
many interesting applications in geometry and topology. In this talk, we show linear bounds on the rho-invariants in terms of simplicial complexity of manifolds. First, we obtain linear bounds on
Cheeger-Gromov invariants, using hyperbolizations. Next, we give linear bounds on Atiyah-Singer invariants, employing a combinatorial concept of G-colored polyhedra. As applications, we give new
concrete examples in the complexity theory of high-dimensional (homotopy) lens spaces. This is a joint work with Shmuel Weinberger.
MATLAB Operators and Symbols: Types and Uses - Dataaspirant
MATLAB Operators and Symbols: Types and Uses
MATLAB - short form for “Matrix Laboratory” is a rich programming language. MATLAB programming finds its use in diverse applications, including numerical calculations, mathematical modeling, and
complex simulations. In simple terms, MATLAB operators are character symbols that perform certain actions on their operands.
MATLAB is not limited to matrix operations or array operations; in fact, MATLAB also works with scalars.
MATLAB Operators and Symbols: Types and Uses
Understanding Arithmetic Operators in MATLAB
MATLAB is very rich in arithmetic operators. It has arithmetic operators that work with scalars and some special arithmetic operators for matrix operations and array operations.
Arithmetic operators function according to the PEMDAS acronym, i.e., Parentheses, Exponent, Multiply, Divide, Add, and Subtract. In MATLAB programming, generally, arithmetic operators are evaluated
from left to right.
+ Addition operator – works similarly on scalars, arrays, and matrices (element-wise addition)
- Subtraction operator – works similarly on scalars, arrays, and matrices (element-wise subtraction)
* Multiplication operator – for arrays and matrices, the columns of the first operand must be equal to the rows of the second.
.* Element wise multiplication operator – especially for matrices and arrays. Works when the size of both operands is the same.
.^ Element-wise power operator – especially for matrices and arrays. Raises each element of the left operand to the power of the corresponding element of the right. Works when the size of both operands is the same.
.\ Element-wise array left division – especially for matrices and arrays. Divides the right operand by the left, element-wise. The size of operands must be the same.
\ Regular left division – solves the matrix system AX = B, i.e., X = A^(-1)B. Notice that the inverted matrix is on the left.
./ Element-wise array right division – especially for matrices and arrays. Divides the left operand by the right, element-wise. The size of operands must be the same.
/ Regular right division – solves the matrix system XA = B, i.e., X = BA^(-1). Notice that the inverted matrix is on the right.
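A minimal sketch of the difference between the matrix and element-wise forms (the matrices and values here are arbitrary illustrations):

```matlab
A = [1 2; 3 4];
B = [5 6; 7 8];

C1 = A .* B;   % element-wise product: [5 12; 21 32]
C2 = A * B;    % matrix product:       [19 22; 43 50]

% Left division solves A*x = b without forming inv(A) explicitly:
b = [1; 2];
x = A \ b;     % x = [0; 0.5], since A*[0; 0.5] = [1; 2]
```

Note that `A \ b` is both faster and numerically safer than `inv(A) * b`, which is why MATLAB documentation recommends the division operators for solving linear systems.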
The Role of Relational Operators in MATLAB
Relational operators are MATLAB operators used for logical decision-making based on the relation between operands. The result of relational operators is either a logical true (1) or false (0). Some
relational operator functionalities are mentioned below.
> Checks if left operand is greater than the right operand
>= Checks if left operand is greater than or equal to the right operand
< Checks if left operand is less than the right operand
<= Checks if left operand is less than or equal to the right operand
== Checks if left and right operands are equal
~= Checks if left and right operands are not equal
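Because relational operators return logical arrays, they combine naturally with indexing. A short sketch (values chosen arbitrarily):

```matlab
v = [3 7 2 9];

v > 4       % logical array: [0 1 0 1]
v == 2      % logical array: [0 0 1 0]
v(v > 4)    % logical indexing with the result: [7 9]
```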
Logical and Bitwise Operators: Enhancing MATLAB Operations
In MATLAB programming, logical operators perform logical operations, e.g., AND, OR, and NOT on scalars and vectors. Logical operators’ functionalities are listed below:
&& AND logical operator – Works with logical scalar values
& Element wise AND Operator – Works with logical arrays and matrices if the size of operands matches. Also serves as a bitwise operator.
|| OR logical operator – Works with logical scalar values
| Element wise OR Operator – Works with logical arrays and matrices if the size of operands matches. Also serves as a bitwise operator.
~ NOT logical operator – It operates on a single operand and negates its state
Let’s differentiate between the element-wise and short-circuit logical operators by an example. In Figure 1, it is clear that ‘&&’ does not work with a logical array because it is a short-circuit logical operator that requires scalar operands, unlike the element-wise ‘&’.
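The distinction can be sketched in a few lines (arbitrary illustrative values):

```matlab
a = [true false true];
b = [true true  false];

a & b                   % element-wise AND: [1 0 0]
% a && b                % error: && and || require scalar operands

x = 5;
(x > 0) && (10/x > 1)   % scalar short-circuit AND: logical 1 (true)
```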
Exploring Set Operators in MATLAB Programming
Set operators in MATLAB operate on vectors and arrays to form new sets. Assume A & B are two sets (vectors in MATLAB). Some MATLAB set operator functionalities are mentioned next:
union(A,B) Combines unique elements of two sets to form a new set
intersect(A,B) Retrieves common elements from two sets
setdiff(A,B) Equivalent to the set operation A-B. Retrieves elements of A that are not in B
ismember(A,B) Returns a logical array the size of A, with 1 wherever the corresponding element of A appears in B
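The set functions above can be tried directly in the command window (example vectors are arbitrary; note that the results come back sorted):

```matlab
A = [1 2 3 4];
B = [3 4 5];

union(A, B)      % [1 2 3 4 5]
intersect(A, B)  % [3 4]
setdiff(A, B)    % [1 2]
ismember(A, B)   % logical array over A: [0 0 1 1]
```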
Differences Between Matrix and Array Operators
To differentiate between matrix operations and array operations in MATLAB programming, it is useful to understand the difference between matrix and array. Matrix operations work according to algebra
rules, whereas array operations are element-wise operations.
A dot before the arithmetic operator differentiates array operators from matrix operators. Let’s understand this with the help of an example.
Figure 2 defines two matrices. The element-wise multiplication produces a valid result. However, the algebraic multiplication throws an error as the number of columns in A is unequal to the number of rows in B. Note the ‘.’ before * for the array operation.
Practical Applications of MATLAB Operators
MATLAB programming finds its applications in almost every discipline of research and computing. The simple MATLAB operators perform critical functions in high-level projects. Digital image processing
is an ideal application of MATLAB operators.
Digital images are actually matrices, and any modification to the image requires manipulating the matrix using MATLAB operators.
Quantum computing and control systems are two more prime applications of MATLAB programming. In both cases, matrices, arrays, or vectors save the states of the system. To manipulate the states,
MATLAB operators are very frequently used.
MATLAB Operators: Tips and Tricks for Efficient Coding
In the context of MATLAB operators, prefer matrix and vector operations over loops in MATLAB programming. Using a loop for indexing consumes more time and memory. The MATLAB programming language defines the concept of vectorization: performing an operation on all matrix elements at once.
Vectorization significantly increases code efficiency and readability. Similarly, instead of loops, using logical indexing for retrieving array or matrix elements saves time and memory.
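A minimal before-and-after sketch of vectorization (the function being computed is arbitrary):

```matlab
x = linspace(0, 2*pi, 1e5);

% Loop version: slower and more verbose
y1 = zeros(size(x));
for k = 1:numel(x)
    y1(k) = sin(x(k))^2 + cos(x(k))^2;
end

% Vectorized version: one line, operating on the whole array at once
y2 = sin(x).^2 + cos(x).^2;

max(abs(y1 - y2))   % 0, the two computations agree element by element
```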
In MATLAB programming, short-circuit evaluation is a mechanism whereby, in a logical expression, the second operand need not be evaluated if the result can be determined by the first operand alone.
This obviously saves time and increases speed. The MATLAB operators ‘&&’ and ‘||’ follow a short-circuit mechanism.
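A common use of short-circuiting is guarding an expression that would otherwise error (a minimal sketch):

```matlab
x = [];

% Without short-circuiting, evaluating x(1) on an empty array would error.
% && stops after the first operand when that operand already decides the result:
if ~isempty(x) && x(1) > 0
    disp('first element is positive')
else
    disp('empty or non-positive')
end
```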
Time and the 26 Generations
by Jeffrey Meiliken | May 30, 2024 | Kabbalah Secrets, Revelations
There is Nothing Random about Pi (π) Part 4:
The 26 Generations
26 generations as in the length of the YHVH (יהוה), starting with Adam and ending with Moses. We call that time history. We call that time and history linear as if one moment led to the next with an
inevitable outcome and nothing was going on parallel to it, as if there were no side stories, no alternative pathways. But is that really what the Torah was explaining to us, or what science
observes? Does the mathematics behind the Cosmos really bear that out? Or does the Torah, physics, math, and metaphysics tell a different story—a more coherent and logical one?
The Metaphysical Origin of 666
Cosmically numbers are symbols of the various specific regions of consciousness like vertex points in a matrix or lattice. When we zoom in deeper though, we see that those points each reflect a vast
network of other matrices and sub-matrices. It is no different than staring up at a twinkling star with our eyes versus using a powerful telescope. One method sees simple beauty and the other sees an
entire galaxy, yet in both instances what both are seeing is mostly, or 99.9% aether, the invisible connective tissue of the cosmos and our universe within them. Numbers in cosmology and physics can
mean totally different things. So can the mathematical operations that relate them. They can shift based on perspective as in the example above, but the one constant throughout is geometry, whether
simple or hyper-spatial.
2/3 or a chain of 6’s like .666, but it is only through geometry that a number must be split into the kabbalistic 1/3, 2/3 or .333 and .666 proportion. This refers to the most basic shape, the
triangle, the one to which the metaphysical, numerical, and the Alef-bet Triplets conform. It does not matter which type of triangle or what its dimensions are. If we draw lines from a triangle’s 3 vertices to the 3 midpoints of their opposite edges, these 3 medians will intersect at a point called the centroid, which splits each median in the .333 – .666 proportion. Moreover, the 6 internal triangles formed will also fold up into 3 identical triangles, no matter what proportions the original triangle had.
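The 2:1 median split appealed to here is the standard centroid property; in vector form, for a triangle with vertices A, B, C and M_A the midpoint of the edge opposite A:

```latex
G=\frac{A+B+C}{3},\qquad M_A=\frac{B+C}{2}
\quad\Longrightarrow\quad G-A=\tfrac{2}{3}\,(M_A-A),
```

so the centroid G sits .666… of the way along each median measured from the vertex, and .333… from the midpoint of the opposite edge, for any triangle whatsoever.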
Once again, metaphysical geometry shapes our physical universe, and reminds us that metaphysically anytime we see the number or concept 666 used in a physical measurement there exists in the
universe its .333 counterpart. Since triplets form the vertices of triangles, the value .666 takes us to the centroid or metaphysical center of each of the Essential Triplets. Conversely, .333 takes
us to the same central point when we start from the center of its edges. For every physical center of gravity there is a spiritual one in the same place. It is a matter of perspective which one we perceive.
When we start understanding that there is a logical reason behind everything and that everything, starting with its dimensions and quantitative description, is a portal to expansion, we can utilize
that expansion to elevate our consciousness.
Consider Binah, Ehyeh (אלף־הי־יוד־הי), 161: when seen as a median of a cosmic triangle, it splits into 107 and 54, as in (107 x 54) = 5778, the 54 cycles of 107 years that equals the Event Horizon radius.
And when we multiply the base centroid median split or (.333… x .666…) we get .2222, as in (5778 + 2222) = 8000, Binah. This gives metaphysical and geometric reference to how Rav Ashlag, of blessed
memory, calculated the Event Horizon as 5778 HC, independently from every other method explained in our papers. Starting with an equilateral triangle where each median was the prophetic 6000 years of
the 6 Days, he found the centroid split into 2000 and 4000 years. Then taking the longer section as another median of another equilateral triangle, he split 2000 into 666.666 and 1333.333 years. Then
he did the same geometric formation with a 3^rd iteration into 222.222 and 444.444, just like the sum of the 3 locations within Pi (π) for the 3 derivative locations of the exponential triplet
reductions of the 5778185778 sequence or (355707 + 3739 + 84998) = 444444, which is 5778/.013000 = 444444. Adding the 3 sequential longer median segments gave him (4000 +1333.333 + 444.444) = 5778,
which he made clear was 444.444 years after the death of the Arizal. When we get to the 3^rd and final iteration the resultant equilateral triangle that has a median of 444.444 has edge lengths of
496.903, as in the final Sefira Malchut (496) and the 42^nd Triangular Field (903)—another indication to stop at 3 iterations, especially since all positive integers are the sum of up to 3 Triangular
Fields, as proven by Gauss long ago.
The result is (6000 – 5778) = 222.222 or .037037 of 6000, bringing us back to the Core Essential Prime Number, 37.
What is amazing is that an equilateral triangle that has a median of 6000 has 3 sides of 6708, the value of the Sword of Moses, the 42-letter verse embedded in the Song of the Sea, which is (6666 +
42). Moreover, the product of letters in the first word or handle of that verse is (10 x 600) = 6000. The precise median length is 6708.203 that adds the .203, representing the first Essential
Triplet of Creation (ברא). There are no fractional Hebrew Letters so anything after the decimal place is bonus information.
6708 is unique in that its area is essentially equivalent to 3 times its edge length, which makes this triangle the ideal geometry to begin the 3-fold reduction process. Even more amazing is that it
is exactly 3000 times that precise edge length, or Sword of 42, which aligns it directly with the Creation Circle of 3000. To be clear, metaphysically Rav Ashlag began with 3 Swords of 42 arranged
into this unique equilateral triangle and used its 6000-year median to determine the length of the Event Horizon. Those 126 letters in the 3 Swords align with the 6 faces and 6 directions of the
Magic Essential Cube of Creation of Constant 42, and with the 12600 sum of the 26 generations whose 26 logarithms add up to 66.66.
The centroid point in this unique equilateral triangle is also the exact central point in the 6-pointed Magen David, and in the Magen David each of the 6 edges forms a natural median as it is always
divided into 3 segments of 1/3 or to .333 and .666. In this case each of those 6 edges would be the Sword of Moses. When the precise edge value 6708.203 serves as the median rather than an edge the
edges of the resultant equilateral triangle are exactly 7500 and its perimeter is exactly 150^2.
The ratio between the edges and the medians in all these equilateral triangles is (7500/6708.203) = (6708.203/6000) = 1.1180… as in the value of the 15 Essential Triplets of the Shema, 1118, and even
more directly aligned with (φ – .5) = 1.1180…, the hypotenuse of the right triangle with a height of (One) 1 and base of .5.
The 248 dimensions, when seen as a centroid median split, separate into two sections of 165 and 82.666, with 165 splitting further into two sections of 110 and 55, which are all multiples of the Primal Frequency of the universe (27.5 Hz) that permeates the aether. We further understand from this that the two longer sections equal (165 + 110) = 275 or (10 x 27.5) and that 110 splits for the 3
^rd centroid iteration into approximately 73 and 37, as in the (73 x 37) Core Essential Prime Number equation of the First Verse of Creation. Thus, we see the harmonic link between the Core Cosmic
Concept of 248, the Primal Frequency (27.5) and Creation.
This triangular geometric design is also like the concept of (4 x 42) or 168 that, when seen as a centroid median split, separates into two sections of 112 and 56, as in the 112 Essential Triplets and as
in the 4 main diagonal vectors through the heart of the Magic Essential Cube of Creation that split into the 112 sum of the 8 corners and the (4 x 14) = 56 at the central core.
This triangular geometric design is also like 186, as in the speed-of-light, that when seen as a centroid median split, separates into the two sections 124 and 62. Thus, illustrating that the speed
of light of 186,000 mps is (3 x 62,000) mps or 3 times Keter (620), as in the crown at the top of the 3 columns of the Tree-of-Life, and like all those similar elements of the Torah and our solar
system that are based on variations of 62, including the 620 letters in the 10 Commandments and the limits of Earth’s atmosphere at 62 miles and 6200 miles. Moreover, we understand that (4 x 62,000)
= 248,000, making the speed-of-light approximately 3/4 the speed of the Torah (248,000φ).
Why 37?
It comes down to 3, literally. The sum of the continual reductions of the simple fraction 1/3^rd or .333333… by 10^n is (.333333 + .0333333 + .00333333…) = .37037 as in 222.222/6000. Meanwhile the
endless number .333333… is the sum of the continual inflation of .3 by 10^n or (.3 + .03 + .003…). So, starting from .3 or 3 we go up and down by orders of magnitude and reach 37, the Core Essential Prime Number of Creation and our DNA, an element of the 6^th Day of Creation. Whether it is the 10^3 structure of Binah consciousness or the Tree-of-Life consciousness, the symmetry of the Cosmos is constant and universal, which is why the triangular geometry and triplet structure is so critical and significant whenever we encounter it.
The 26 Kingdoms and the Concept of 3
The ages of the 26 Generations of Adam are far from arbitrary overall or individually. Great care went into their design and layering, but that only tells the story from our perspective, not the
perspective of the cosmos or consciousness that brought together 26 layers or realms, kingdoms, of consciousness with a singular purpose. Their 26 ages or lifespans sum to exactly 12600 or (300 x 42)
or (3 x 4200), like the sum of the first 20 Tetra Prime Numbers that equal 4200 or (20 x 210); and like the approximate surface area (A = 4πr^2) of the Spherical Time bubble, 420000000; like the 3 times
that the second highest word-value 1400 in the Torah is found there; or 3 times the Spherical Time circumference/Great Precession ratio, 36304.24470/25,920 = 1.400.
The 26 logarithms of those lifespans sum to precisely 66.6612, matching the speed of the Earth in orbit around the Sun, 66612 mph as in the Centroid point (.666) of those vectors. So, in some abstract
hypothetical metaphysical giant triangle with an orbital speed vector as a median, the earthly portion is the .666 section to the centroid and in another of the median vectors, this one of mass, the
solar portion is the shorter segment to the centroid, as the Sun/Earth mass ratio is 333,000.
This also means that the sum of the logarithms of the 26 Generations of Adam or 66.6612 is equivalent to the centroid point on a median of 100. The sides of the equilateral triangle with a
median of 100, are each 111.80, as in the 15 Essential Triplets and first verse of the Shema Ysrael, 1118, and its perimeter is 335.410, as in (330 + 5.41) = (12 x 27.5) + 5.410 with 541 being the
value of Israel and 410 being the value of the Hebrew word for Holy, Kadosh (קדוש) and of Shema (שמע), the first Essential Triplet of the 15. The other 14 Essential Triplets in this set are comprised
of the concealed 42 Holy letters of the thrice expanded Name YHVH (יהוה) that equal 708, as we see in the Sword of 42 that totals 6708 or (6666 + 42), and in the year Israel became a nation, 5708 HC.
Nevertheless, the sum of the squares of the first 460 digits in the Field of Pi (π) equals exactly 12,600 and this is exactly 541 digits from the end of the first 1000 digits, as in Israel (541).
While there being 541 rooms in the US Capitol Building may be a coincidence, this is not. This is no more a coincidence than the 11 initials of the 10 plagues adding up to 541 and ending in MB (מב),
42. Those 11 initials of the 10 plagues are also an obvious allusion to the 10 and 11 Sefirot structure of the Tree-of-Life, and just like the 10 Commandments they were split into the upper 3 and
lower 7 plagues, which in both instances is the whole number centroid split in the plagues and the Commandments and in the Tree-of-Life.
Moreover, the 541-value of Israel (ישראל) is the 10^th Star (Magen David) Number, and the value of 46 is that of the cubit, amah (אמה), thus translating the number 460 into “10 cubits.” The permuted
word Me’ah (מאה), meaning “100” also has a value of 46 and thus 10 times Me’ah (מאה) equals 1000 or Binah. The sum of the squares of the remaining 541 digits to the end of the first 1000 digits is
15776 or 10,000 plus 76^2, which is alternatively 14,000, as in David plus 1776.
With the very next digit in Pi (π), the sum of the first 461 digits is 2447, as in the 2447 years until the Exodus and the reception of the 10 Commandments, meaning the remaining 540 digits totals
2023. This is the same 2447 found in the Spherical Time circumference, 36304.24470.
The 26 generations are layered, and the Torah tells us specifically how those layers overlap. Moreover, the sum of the overlapping between the fathers and sons’ lives within the 26 generations, in
other words the overlapping between the subsequent generations, is 10,112 years, as in 10^4 plus the 112 Essential Triplets, meaning the net difference from the total for the 26 generations is (12,600 – 10,112) = 2488, the year the Israelites entered the Promised Land, a concealed and metaphoric reference to entering the 248 dimensions of E8 Symmetry, the cosmic realm of Binah Consciousness.
The 24^th Triangular Field is 300, as in (300 x 42) = 12600 years, the sum of the 26 Generations of Adam, while the 24^th Tetra Field is 2600. So, all the positive integers counting from 1 to 24 equal 300 and all those Triangular sums from 1 to 24 or (1 + 3 + 6 + 10…+ 300) = 2600, and it was decided that there should be 24 hours in 1 day. We measure 1 day’s time on Earth based on how long it takes
our spherical planet to make one rotation vis-à-vis the Sun. We measure our 365-day year based on how long it takes our spherical planet to make one revolution around the Sun. We measure
the time between our months and the Jewish holidays based on the revolution of the moon around the spherical Earth. All our measurements of time are based on circles and spheres, yet somewhere along
the way someone decided that time had to be linear. We are also taught to calculate our equations linearly, yet Phi (φ) and especially Pi (π) are so integral to so many of them and they define
curvature and cyclicality in nature. All the field lines, electromagnetic, di-electric, etc. are curved, looped, and often toroidal, and as we have shown waves on all scales whether perceived in
particle-wave duality within atoms or in fields of any size are based on Pi (π) and the circle. Strings and twisted and/or bundled dimensions are all curved as in chaos. If every aspect of our
physics is curved, why do we need time to be linear and limited to make sense of it all?
Even our 10 digits cycle around modularly. Integers increase incrementally by 1 at a steady even pace, but that incremental increase is ever smaller relative to the size of the cumulative trail of
numbers behind it and to the numbers right in front of it, placing its growth on a diminishing curve, as opposed to a straight line. When we plot subsequent integers (1,1; 2,2; 3,3…) in 2-dimensions
(x, y) the graph is a straight line, but in 3-dimensions (1,1,1; 2,2,2; 3,3,3…) the plotted (x, y, z) cubes can be graphed through the furthest corner or through their midpoint. Either way, the graph
curves around up and out with each additional cube.
If in 3-dimensional space progression is curved, and if we exist in 3-dimensional space why do we believe so heartily that time progresses linearly in discrete intervals? Some would say that our
greatest impediment to enlightenment is our inability to perceive time as curved and spherical. It may be our greatest limitation. That is probably why time exists for us; to give us time to figure
it out.
The midpoints of the first natural centered cubes (3^3, 5^3, 7^3, 9^3, 11^3…) progress starting with the familiar position 14 in the center of the 3^3 cube. The odd-number based cubes have defined
central cubic positions, which is why they are natural. The even-number based cubes have a geometric center but not an arithmetic one. The progression in the natural cubes is always (n^3 + 1)/2 and
thus the 5^3 cube has a midpoint of 63, and the 7^3 cube has a midpoint of 172, both of which are associated with aspects of the YHVH (יהוה) and various unifications of the Names of the Creator.
While the first 3 midpoints collectively add up to (14 + 63 + 172) = 249, the 9^3 cube has a midpoint of 365, which not only reflects the time component of the circular and cyclical 365 days in a
solar year from the perspective of Earth but together (249 – 1) and 365 reflect the 248 proactive, and 365 restrictive precepts as laid out in the Torah. Meanwhile, the 11^3 cube has a midpoint of (1331 + 1)/2 = 666, as in the triangular centroid position.
How did the number of hours in a sidereal year, (365.256 x 24) = 8766.1 hours, end up within the first 1000 digits in Pi (π)? At digit #980 we find the numeric string …87661…. And while the last 20 digits to 1000 equal 87, the last 15 digits from the end of the string equal 66 as in 8766. The sum of the squares of those first 980 digits is 27777 and the sum of the digits in 8766.1 equals 28. To insert a digit string at a specific place one
would have to know how it ends before it begins, and in this case one would have to know there would eventually be an Earth and a Sun, their size, speed, orbital shape, rotation, and distance apart
and that someone would come along to decide that there should be 24 hours in a day. Then, 260 digits later at digit #1240, we find the string …9467678… which is only 242 parts of a minute off the (365.256 x 24 x 60 x 18) = 9467436 parts of a minute in that same sidereal year, like the standard solar year 365.242.
Field Phasing and Other Primordial Equations
3 Phased Fields
Like the basic dynamics of our atom and the principles of electromagnetism, physicality presents itself as a reflection of metaphysics in 3 phases, or fields phased in 3 directions that work
together. The 26 generations of Adam are presented to us in 3 groupings or phases from Adam to Noach, Shem to Abraham and Isaac to Moses. Together, these 3 phases that total 12,600 years and thus
average (12600/3) = 4200 years are cosmically aligned with the Primordial Equation 26(φ^2 + φ + 1/φ) = 126 = (3 x 42), the primordial relationship between the 3 phases of the Phi (φ) Field, and the
Core Concepts of the Radiance of the Creator, the YHVH (יהוה), and the Singularity of 42. Metaphysically, each generation is equivalent on average to (12600/26) = 300(42/26) ≈ 300φ.
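For reference, the bracketed 3-phase sum collapses exactly once the defining identity of Phi (φ² = φ + 1) and its corollary (1/φ = φ − 1) are applied:

```latex
\varphi^{2}+\varphi+\frac{1}{\varphi}=(\varphi+1)+\varphi+(\varphi-1)=3\varphi,
\qquad 26\cdot 3\varphi = 78\varphi = 126.21\ldots
```

which rounds to the stated 126 = (3 x 42).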
The 26 generations are layered, and the Torah tells us specifically how those layers overlap. When the Fields of the Core Concepts are layered, we describe them as phases. These phases can be nearly
infinite, and because they are each powers of the previous phase they grow exponentially as do dimensions, so with our limited hyper-dimensional comprehension we need to limit our exploration of them
to the near few, or limited phasing, and the most influential are the triple, triplet, or 3-phased variety.
While the Primordial Equation of Phi (φ) equals 26(φ^2 + φ + 1/φ) = 126 = (3 x 42) or alternatively equals 26(φ^2 + φ + 1/φ)/3 = 42, when we apply the same 3 dimensional phasing to the Field of Pi
(π), we find that (π^2 + π + 1/π) = 13.32950 or (π^2 + π + 1/π)/2 = 6.665 and that 26(π^2 + π + 1/π)/2 = (26 x 6.665) = 13(π^2 + π + 1/π) = 173.283590, as in the 173 Keys to Heaven, the 42-Letter
Name Matrix, and as in the √30000. More precisely, it is equivalent to the square root of 30000 plus the inverse of 12.73 (10 times the Cosmic Harmonic), or √30000 + 1/12.73, which is thus √30000 + π/40. It is off by only .000030342.
This also illustrates that (π^2 + π + 1/π)/2 = 6.665 is 2/3 or the centroid point of a median of 10, as in the 10 Sefirot of the Tree-of-Life and the 10 Commandments and any segment or set of 10 in
our universe. Moreover, (π^2 + π + 1/π) = 13.33 is thus the centroid point of 20, as in Keter.
That Pi (π) is related to 2/3 or .666 and the speed of the Earth’s orbit is astonishing on its own, but that it is a balancing field between 26, the value of the YHVH (יהוה), and 173, the value of
the Singularity of 42, in the same manner as the 3-phase Phi (φ) Field is beyond extraordinary. Yet both these astonishing extraordinary metaphysical equations were written into the Torah through the
delineated 26 Generations of Adam and coordinated through their 26 logarithms, totals, and averages. This can only serve to emphasize their significance to a hierarchy within the universal
consciousness and to highlight the processes that we can learn and follow. There must be a treasure trove of cosmic wealth contained within their numbers that begin in the 13^th Paragraph of the
Torah and continue for a total of 27 Paragraphs; after all, these are 26 kingdoms of consciousness in 3 paradigms that came together to form the One nation of Israel.
While (π + π^2) = 13.0111, intimating two aspects (13 and 111) of One, (π + π^2 + π^3) = 44.01747; (π^2 + π + 1/π)/3 = 4.443168; and (1/π^4 + 1/π^2 + 1/π + π + π^2 + π^3) = 44.447370, similar to the
values of the first two plagues, 44, and 444, and obviously linked to the Concept of 4; the Cosmic Harmonic, 4/π and the 444-value of “from generation to generation,” like the layers and the phases.
These are also all the products of the Concept of 4 and the Concept of One (1), as in (4 x 11); (4 x 1111); and doubly so for (4 x 111) that is also 4 x Alef (אלף).
5 Phases of Phi (φ)
The unique properties and phases of Phi (φ) never cease to amaze. While Phi (φ) has the unique property that its 3 phases (φ^2 + φ + 1/φ) = 3 Phi (φ) exactly, it also has the unique property that (1/φ^4 + 1/φ^2 + 1/φ + φ^2 + φ^3) = 8, exactly 8. This is similar to the 9 sequential digit equation (987654321/123456789) = 8.000000729. We must keep in mind that in the Phi (φ) equation we are adding 5
irrational numbers to get the integer or whole number 8, as in the 8 dimensions (sefirot) of universal Binah consciousness. And while the 5 phases of Phi (φ) that equal 8, is evocative of the 5^8
elements in the Torah, this Primordial Phi (φ) Equation gives us an average of (1/φ^4 + 1/φ^2 + 1/φ + φ^2 + φ^3) = 8/5, which is the inverse of 5/8 = .625, Keter, and the square root of those 5^8
elements in the Torah.
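That these 5 irrational phases sum to exactly 8 can be checked by reducing every power through the same identity, φ² = φ + 1:

```latex
\varphi^{-4}=5-3\varphi,\quad \varphi^{-2}=2-\varphi,\quad \varphi^{-1}=\varphi-1,\quad \varphi^{2}=\varphi+1,\quad \varphi^{3}=2\varphi+1,
```

and on summing, the φ terms cancel, \((-3-1+1+1+2)\varphi = 0\), leaving \(5+2-1+1+1 = 8\).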
A natural corollary to this equation is (1/φ^4 + 1/φ^2 + 1/φ + φ + φ^2 + φ^3) = 9.6180339887… = (8 + φ), yet a not so natural offshoot of these equations happens when we replace 3 times Phi (φ) for Phi (φ)^3 and get (1/φ^4 + 1/φ^2 + 1/φ + φ^2 + 3φ) = 8.6180339887… = (7 + φ).
The Cosmos could almost substitute the phased Fields in the equation (π^3 x (1/φ^4 + 1/φ^2 + 1/φ + φ^2 + φ^3)) for the Concepts in the equation (31 x 8) = 248. This is how mathematics and
understanding bring us closer to the metaphysical realm of consciousness. For most people numbers are stumbling blocks, but for those who seek out understanding they are the addresses on the gates.
The 6 Trunks of Higher Consciousness – Oneness.
Again, while (π + π^2) = 13.0111, intimating two aspects of One, when we apply the Primordial Equation algorithm to the Concept of 13, Echad, One, we get (1/13 + 13 + 13^2)/2 = 91.03846… as in the 13^th Triangular Field, 91 in the Path of One (1 – 13 – 91 – 455 – 1820). Because of the nature of Triangular Fields, (13 + 13^2)/2 = 91 exactly, as in the value of the unification of the YHVH (יהוה)
and Adonai (אדני).
Then while (1/13 + 13 + 13^2) = 182.077, the equation (1/13^2 + 1/13 + 13 + 13^2) = 182.082840, both equations thus reflective of the 1820 YHVH (יהוה) and the Path of One again. This is not unlike
the exact parts per minute found in One (1) year (365.242 x 24 x 60 x 18) = 9467073 parts per minute that are found as …9467073… within Pi (π) at digit #182,770.
Meanwhile, the average of that equation is (1/13^2 + 1/13 + 13 + 13^2)/4 = 45.5207, as in 455 of the Path of One and as in the 3 Aspects of the Name of Binah consciousness Ehyeh (אהיה) that equals
455. It is also as in 1/13^3 = .000455, while the sum of the midpoints of the consecutive natural cubes 11^3 and 13^3 is (666 + 1099) = 1765 = (42^2 + 1). And as we know 1099 is not only the sum of
the squares of the first 37 digits in Pi (π), but the sum of the 3 Constants (π/2 + π + 2π) = 10.9955742…, while the first string …1099… within Pi (π) is at the digit #23805, representing the sum of
the 5 elements of the Path of One that equal (1 + 13 + 91 + 455 + 1820) = 2380 or 1/42.
Moreover, extending the layers to 6 phases of 13, we get (1/13^3 + 1/13^2 + 1/13 + 13 + 13^2 + 13^3) = 2379.0832 or approximately 2380.
Blessing of the Cohanim Redux
One point that we have overlooked in our previous explanations about the Blessing of the Cohanim within the Torah is that those 6 outer pyramidal words that total 1820 and align with the 1820 YHVH (
יהוה) in the Torah are comprised of 26 letters, again as in the YHVH (יהוה). Also, as a reminder, we say Amen (אמן) of numerical value 91 in response to each line of the Blessing, and like the 91 in
the center of the Path of One (1 – 13 – 91 – 455 – 1820) the 3 central initials within the Blessing of the Cohen (יפא) add up to 91 both vertically and horizontally. Those 3 Amen (אמן) thus equal (3
x 91) = 273, as in the Cosmic Harmonic residue (.273). When the Amen (אמן) are said with understanding in response to each line they activate those central triplets (יפא) that have matching
values of 91 and matching ordinal values of 28 as the triplet of Amen (אמן), which in turn activates the field of protection of the 1820 YHVH (יהוה). Each Amen (אמן) creates the unification of the
Names YHVH (יהוה) and Adonai (אדני) and amplifies the field.
The initials of the 3 levels corresponding to the 3 verses, break down into (ייו) as 26, as in the value of the YHVH (יהוה), for the first level; (ייפאו) or 107, as in the Spherical Time radius, 5778
years that is the 107^th Triangular Field for the 5 initials of the second level; and (ייפאו) yet again as 107 for the first 5 initials of the third level, plus 330 (לש) for the final two as we saw
in the center of the Bereshit Rectangle. Setting aside the 2 consecutive sets of 5 initials that each equal 107, the remaining 5 initials that sandwich them equal (26 + 330 + 2) = 358, Moshiach
consciousness, and draw the blessing into a circle.
Moreover, the 15 words and/or 60 letters can be split into the first 4 words or 18 letters, and just like the 11 Sefirot and 42 letters of the Tree-of-Life whose average ordinal value is 42, the
final 11 words or 42 letters (יהוה פניו אליך ויחנך ישא יהוה פניו אליך וישם לך שלום) of the blessing have a total ordinal value of 420.
While the sofit value of those first 18 letters is exactly 45^2, their ordinal sofit value is 216 or 6^3, as in the 216 letters of the 72 Essential Triplets, once again exhibiting the necessary unity
of the 42-Letter Name Matrix and the 72 Essential Triplet Matrix in the metaphysical alchemy of our world and increasing the threshold of blessings available when recited with understanding.
More Phased Understanding
Given the Primordial Equation 26(φ^2 + φ + 1/φ) = 126 = (3 x 42) and applying the same algorithm directly to the Concept of 26 that aligns with the YHVH (יהוה) we see that (1/26 + 26 + 26^2) =
702.038… as in the value of Shabbat (702) coupled with the Torah’s first Essential Triplet (ברא), 203. Meanwhile (26 + 26^2) = 702, and (26 + 26^2 + 26^3) = 18278, as in the Concept of 18 that is
connected to Moshiach and as in 278, Ohr Haganuz, the Light or radiance of Moshiach. If we add the 6 phases, including the 3 reciprocals, the total is (1/26 + 1/26^2 + 1/26^3+ 26 + 26^2 + 26^3) =
18278.0400. Metaphysically, these Concepts are all integrally connected, meaning there are direct pathways between them.
Meanwhile, the 4^th Triangular Field is 10, and the average of the 4 phases of 10 is (1/10^2 + 1/10 + 10 + 10^2)/4 = 27.5275, a reverberation of the Primal Frequency (27.5), illustrating its origins in both the Concepts of 9 and of 10, and in the sum of the 275 elements in the 4 Aspects of the YHVH (יהוה). Its connection to 5 is also apparent through the equation (1/5^2 + 1/5 + 5 + 5^2)/4 = 7.56, as in 756 or approximately 27.5^2, the radiant Primal Frequency.
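Both averages work out exactly as stated; a quick Python check (a verification sketch, not part of the text) confirms them:

```python
# Averages of the 4 phases of 10 and of 5
avg10 = (1/10**2 + 1/10 + 10 + 10**2) / 4
assert round(avg10, 4) == 27.5275   # the Primal Frequency 27.5 reverberating

avg5 = (1/5**2 + 1/5 + 5 + 5**2) / 4
assert round(avg5, 2) == 7.56       # as in 756, near 27.5^2 = 756.25
```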
Clouds of Consciousness
When we taste something for the first time, a sensation is formed within the synapses and neural pathways of our brains. When we taste it again, those neural links and pathways are strengthened, and
more associations are added to it. If those associations are later reinforced, they too are strengthened and a whole network is established and strengthened. The same happens when we study cosmic
pathways and their associated networks. Our desire for them increases, neural memories are created, and our ability to connect with the proper associations and benefits of that heightened
consciousness is reinforced and strengthened. This will eventually lead to understanding, then elevation, and eventually ascension. The same forces and processes that bind us to physicality and
physical desires can activate the spirituality within us.
The knowledge and tracing of the pathways of the 6 trunks or 6 clouds (ענן) of the main higher consciousness elements, Binah Consciousness, Moshiach Consciousness, the Path of One Consciousness, the
YHVH (יהוה) Consciousness, the 112 Essential Triplets, and the Primal Frequency (27.5) Consciousness strengthens their ability to reach into our souls and to train and guide our consciousness
development in the highest and best way.
42 is above these Consciousness clouds and represents the essence of the 42-Letter Name Matrix. The actual 42-Letter Name Matrix represents 14 of the 112 Essential Triplets, an autonomous cloud
within the cloud. The First Verse of the Torah and of Creation is also an autonomous Consciousness cloud, yet it is derived from the fusion of the 14 Triplets of the 42-Letter Name Matrix with the 11
Triplets of Bereshit and influenced by the other trunk clouds as well. None of these clouds are meant to be worshiped or idolized in any way; they are meant to be appropriately accessed. In some
descriptions of ancient esoteric literature, they are called palaces, and the Singularity the throne. We can only describe the metaphysical world with the limitations of our physical language and concepts.
It is not for nothing that the word cloud (ענן) has the sofit gematria of 820, as in the heart of the Torah, “but thou shalt love thy neighbor as thyself (וְאָהַבְתָּ לְרֵעֲךָ כָּמוֹךָ)” or that it is found 55
times in the Torah, as in (2 x 27.5), twice the Primal Frequency. While the word cloud (ענן) is found 8 times as a triplet, the word “The Cloud” (הענן) has a value of 175, as in the 175 years of
Abraham of numerical value 248, which is thus aligned with the 248 dimensions of the E8 Symmetry, the surrounding consciousness scaffolding to Binah, the Cloud.
As for the Concept of 9, we see that its average is (1/9 + 9 + 9^2)/3 = 30.03703703703, as in the 3003 value of the 11 Essential Triplets of Creation coupled with the Core Concept of 37 and with 703,
the 37^th Triangular Field built into those Essential Triplets. This Concept is integrated with the average number of YHVH (יהוה) per Torah portion, (1820/54) = 33.7037037037… or approximately (1/9 + 9 + 9^2)/3 + 3.7. Thus, we see the Consciousness Networks of the Core Essential Prime Concept of 37 integrated and networked with the Consciousness Cloud or Field of the YHVH (יהוה) and the Consciousness network
of the Concept of 9; we see the Pathways. As we visualize these Consciousness Pathways, tracks are being laid down in our souls. Where they take us and how quickly depends on how much effort we
dedicate to understanding them. It is not a matter of time, but of desire and focus. If we think of time as linear, we will never find the time for spiritual development. If we think of time as
spherical, we can access whatever we need from wherever omnidirectionally. It is our desire and focus that draws the information and spirit from the omnipresent clouds to us, regardless of its
metaphysical location or whether we have the right address. Spiritual Time and Physical Time are two different paradigms and the difference between them is the speed-of-light. Strengthening the
Pathways strengthens our ability to remain in the Spiritual Time paradigm and frees us to access more pathways and thus further develops our affinity for higher Consciousness.
The process happens invisibly. Once we have absorbed this knowledge, our connections happen automatically whenever we see one of the Names such as the YHVH (יהוה), and if it is in a Torah portion
that Pathway and connection to the Core Essential Prime Concept of 37 that governs life is also strengthened. If we are conscious and focused on these connections and meditating for others while
doing so, they are strengthened and deepened even more. The same happens with numbers.
Phases are layers. The basic bi-layer constant of Pi (π) is (π + π^2) = 13.0111 which is connected to One, and the basic bi-layer constant of Phi (φ) is (φ + φ^2) = φ^3 = 4.236, which is connected
to Moshiach Ben David, the magnetic fields, etc. So, it is metaphysically revealing that (e + e^2) = 10.1073379273896954625, as in the 10 Sefirot, the 107^th Triangular Number (5778), the (73 x 37) value of
the First Verse of Creation, followed by the 2738 value of the molecular weight of the 20 Amino Acids in our DNA, or 2738.0 g/mol, which is also the value of the First Verse (2701) along with its reciprocal (1/2701), also known as the 42-Letter Name, or (2701 + 37.01) = 2738.01, which is also 2 x (37^2). It is followed by the layer of the 969-year age of Methuselah, the world’s oldest
Man, whose name’s numerical value is 784 or 28^2, as in the 28 letters of that same First Verse. Methuselah’s 969-year lifespan, which ended the year of the Flood, is coupled with the location within
Pi (π), digit #954, of the third and final string of the Event Horizon …5778…, which is then coupled with the 546-sum of the 10 initials of the 10 Sefirot of the Tree-of-Life and with H’Keter, 625,
the crowning level of those Sefirot.
Each of these constants (π, φ, and e) has its own network and associations and yet has a specific purpose for being located here in its specific place and order. It is metaphysical alchemy. These results that are derived wholly from constants crucial to mathematics, physics, and our physical existence describe a cosmic chemical compound, in which each molecule has been carefully chosen and weighed out and applied with just the
right enzyme and conditions, in much the same way as we think life happened miraculously on Earth. For example, the age of Earth’s longest-lived Man, 969 years, is wedged in between the network of
our DNA and the second allusion to the Event Horizon and finality of physicality. That allusion is a coded location within Pi (π), which provides a hint to us that we should examine 969’s locations
within Pi (π). The two most prominent locations for a numeric string vis-à-vis Pi (π) are its first appearance and its digit location equivalent. In this case …969… first occurs at digit #2756, a
blatant reference to the Primal Frequency (27.5) and to the radiant Primal Frequency (27.5^2) or 756, which may be a hint that tuning into the Primal Frequency equates to long life. The sum of the
squares through the 969^th digit (…0661300…) within Pi (π), as the alternative fork in this network, is 27496 or (27500 – 4) reflecting the Primal Frequency (27.5) again and the Sefira of Malchut (
496). The 31 digits through the end of the 1000 digits equal 130.
The sum of the 3 bi-phasic Constant Fields is (π + π^2) + (φ + φ^2) + (e + e^2) = (13.0111 + 4.236 + 10.1073379273896954625) = 27.35444, which is .14556 less than the Primal Frequency (27.5), with 14556 being 3000 more than the
Spherical Time diameter and galactic wavelength, 11,556 years, as in the Creation Circle of 3000. What further ties these concepts together is that the numeric string …14556… is found at digit #
89205 within Pi (π) and the numeric string …11556… is found at digit #89350, a separation of exactly (89350 – 89205) = 145 digits, as we find in 14556. Even the numeric strings …89205… and …89350…
are found within (#46355 – #45425) = 930 digits of one another, as in the 930 years of Adam (45), who started the clock on Spherical Time and gave 70 years of his life to David (14) at the center of
Spherical Time, as in the coupled 14 and 45 in both 145 and 14556. The odds alone of these two pairs of numbers each occurring within 1000 digits of one another are 100 million to 1, while the
location of the uber significant numeric string …11556… at 89350 is extremely close to the counterspace of the number of rows and columns in the Torah or (100,000 – 10,648) = (10^5 – 22^3) = 89352,
with …10648… moreover found at digit #60052 within Pi (π), as in (60052 + 10648) = 70700, which is 29300 less than 100,000, just as (89352 – 60052) = 29300, as in the 62^nd Prime Number, 293.
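The counterspace arithmetic and the prime-index claim above can be checked directly (the Pi digit locations themselves are taken as given); a small Python sketch, not part of the original text, confirms them:

```python
# Counterspace arithmetic around the Torah's 10,648 rows-and-columns figure
assert 10**5 - 22**3 == 89352
assert 60052 + 10648 == 70700 == 100000 - 29300
assert 89352 - 60052 == 29300

# 293 as the 62nd Prime Number, by trial division
def nth_prime(n):
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate**0.5) + 1)):
            count += 1
    return candidate

assert nth_prime(62) == 293
```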
How can numbers work out this way? Numbers are not floating out there in cosmic space; they are names or addresses associated with consciousness units, and while we see it from our perspective as a
string of digits 10.1073379273896954625, on a simple level the metaphysical world understands it as a layering of addresses, and thus a formula or code: 10 + 00.107 + 00.0073 + 00.000037 +…., layers
that interact with each other in various ways, some fixed, and some dependent on other nearby layers and primed to react differently depending on what fields they encounter.
It is a highly complex system of neural networking that we do not have to learn, and this is just the mechanical component to it. The universe is not a mechanical program or even an AI; it is living
consciousness. What we are viewing are just the reflections of the autonomous processes in it. From our limited perspective even these are abstract, so comprehending the true nature of that
consciousness takes a lot of training and development, which is why we are here in this simulation incubator. The universe has a lot of patience. We do not.
The Spherical Time radius of 5778 years is represented by the 107 field in the resultant 10.1073379273896954625. That result is based on the constant e, the base of the natural logarithmic scale. In applying the
Primordial Equation algorithm to 5.778, as representative of the logarithm of the 600,000 souls in Adam and at Sinai, and of the Event Horizon radius, 5778, we begin with the special properties of
this number such that 1/5.778 gives us .17307026, as in the 173 Keys to Heaven given to Moses, which are the 14 Essential Triplets of the 42-Letter Name Matrix. Next, 5.778^2 = 33.385284, as in the
year 3338 HC, when the First Holy Temple was destroyed. Then yet another provocative layer is that 1/5.778^3 equals .0051840, as in 72^2 = 5184, and the slope angle (51.84^o) of the Great Pyramid,
making 5778 reflective of both time boundaries and the balance of the 42-Letter Name Matrix and 72 Essential Triplet Matrix, along with the 33 letters in the 11 Essential Triplets of Creation.
Of course, it is always possible that the 385 in 5.778^2 = 33.385284 represents “the Divine Presence, the Shechinah (שכינה)” of numerical value 385.
Nonetheless, when we add the 3 layers (5.778 + 5.778^2 + 5.778^3) they equal 232.06345, as in the 232-value of the 4 Aspects of the YHVH (יהוה), and as in 345, the numerical value of Moses. And while
(π + π^2) = 13.01119, or 13.0112, the average of the 3 layers of the Primordial Equation (1/5.778 + 5.778 + 5.778^2)/3 = 13.112, reemphasizing in both cases the existential connection to the Path of
One, the 112 Essential Triplets, and to each other.
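The 5.778 phase values cited above check out to the stated precision, as this verification sketch (Python, illustrative only) shows:

```python
# Phases of 5.778, the Event Horizon radius in thousands of years
n = 5.778
assert round(n**2, 6) == 33.385284            # the year 3338 and 385, Shechinah
assert round(1 / n**3, 7) == 0.0051840        # as in 72^2 = 5184
assert round(n + n**2 + n**3, 5) == 232.06345 # the 4 Aspects (232) and Moses (345)
assert round((1/n + n + n**2) / 3, 3) == 13.112
```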
2 Phased Pathways and the Vectors of Exodus
It is the simple 2-layer paths (n + n^2) that tell simple stories. It is the simple path of 3, (3 + 3^2) = 12, that shows the progression of the 3 Patriarchs to the 12 Tribes. Then, as the simple
path of One (13) equals (13 + 13^2) = 182, Jacob, so too does the simple path of 12, or (12 + 12^2) = 156, Joseph, as in the 12 moons of the Cosmic Wheel and the Tribes that bowed down to Joseph.
Furthermore, the simple path of 10, (10 + 10^2) = 110, as in the 110 years of Joseph, and the Major Interval with the Primal Frequencies (4 x 27.5).
Meanwhile, the 6 columns of the 42-Letter Name Matrix took the simple path (6 + 6^2) = 42, and the 8 columns of the 72-Triplet Name Matrix took the simple path (8 + 8^2) = 72.
These are the Pronic Numbers in the Path of the Exodus that “Split the Sea” of integers in a square lattice similar to Ulam’s Spiral. All the numbers in this naturally formed narrow strait are the ones the Torah utilized in the saga of the Exodus. This pattern, as we recently saw, is also inherent in Ulam’s Spiral of Primes, which begins with the One (1) in the center.
While the simple path of 19, or (19 + 19^2) = 380, Egypt, the simple path of Keter, 20, or (20 + 20^2) = 420, as in the Singularity of 42, which follows from the simple Path of 4 or (4 + 4^2) = 20.
And while the simple path of 14, at the center of the Magic Essential Cube of Creation, or (14 + 14^2) = 210, as in the 210 years of the Exile in Egypt and the Great Pyramid’s height in levels and in
cubits, the simple path of the 27 letters and/or positions in that Cube, (27 + 27^2) = 756, as in the radiant Primal Frequency (27.5^2) and the base of that same Great Pyramid.
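Each “simple path” n + n^2 is the pronic number n(n + 1); a one-loop Python check (a sketch, not part of the text) confirms every value cited in this section:

```python
# Simple paths n + n^2 = n(n+1), the pronic numbers of the Exodus narrative
simple_paths = {3: 12, 4: 20, 6: 42, 8: 72, 10: 110, 12: 156,
                13: 182, 14: 210, 19: 380, 20: 420, 27: 756}
for n, value in simple_paths.items():
    assert n + n**2 == n * (n + 1) == value
```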
There is a reason the Torah made such a dramatic statement with the Exodus. It is the full purpose of the Torah to help us reach our own Exodus, freedom from slavery, much of which is self-imposed.
Elohim (אלהים)
Meanwhile the simple path of the 24 hours in a day, (24 + 24^2) = 600, as the 600 choice chariots of Egypt sent to drown in the sea while chasing the 600,000 Israelites. The Paths and Primordial
Paths of 24 are special in that the average of the simple path is 600/2 = 300, as in the 24^th Triangular Number and as in the 300-value of the spelled out Name Elohim (אלהים), the 3^rd word in the
Torah. Also, while the 3 phases equal (1/24 + 24 + 24^2) = 600.0, its average is 600.0/3 = 200.013, as in the 200-value of the regressive spelling of the Name Elohim (א־אל־אלה־אלהי־אלהים). While
600.0 can also represent 6000, as in the 6^th millennium aligned with the 6 Days of Creation, the average of the 4 phases (1/24^2 + 1/24 + 24 + 24^2)/4 = 600.0434/4 = 150.010, as in the highest word
value of the Torah, 1500. Nevertheless, all 6 phases equal (1/24^3 + 1/24^2 + 1/24 + 24 + 24^2 + 24^3) = 14,424.04, representing the Path through the central 14 of David (14) to Moshiach Ben David
consciousness. The 24 hours in a day are considered a cycle of nature, which in Hebrew is Teva (טבע), whose definite form HaTeva (הטבע) has the same gematria (86) as the Name Elohim (אלהים). We understand from the Torah that Egypt/Pharaoh
knew of Elohim (אלהים) consciousness, but not the YHVH (יהוה). While the ordinal value of Teva (טבע) is 27, as in the 27 letters of the Alef-bet and the 27 positions of the Essential Magic Cube of
Creation of Constant 42, we find the triplet (טבע) embedded in the 5^th tier (חקבטנע) of the 42-Letter Name Matrix. It is separated by the letter Nun (נ) of ordinal value 14, which gives the 4
sequential letters (בטנע) containing Teva (טבע) the ordinal value (27 + 14) = 41, as in that of the Name Elohim (אלהים).
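The phase sums of 24 cited above can be verified to the stated rounding with a short Python sketch (an illustration, not part of the text):

```python
# Phases of 24, the hours in a day
n = 24
assert n + n**2 == 600
assert (n + n**2) // 2 == 300 == 24 * 25 // 2     # the 24th Triangular Number
assert round((1/n + n + n**2) / 3, 2) == 200.01
assert round((1/n**2 + 1/n + n + n**2) / 4, 2) == 150.01
assert round(sum(n**k for k in (-3, -2, -1, 1, 2, 3)), 2) == 14424.04
```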
Moreover, it is these 3 Aspects of the Name Elohim (אלהים), the Torah’s 3^rd word, that the Arizal explains align with the power of the Shofar (שופר) of numerical value (300 + 86 + 200) = 586 and of
Jerusalem (ירושלם), (210 + 376) = 586 or (600 – 14) = 586, as in the 300-value spelled out Name Elohim (אלהים) plus the 86 value of Elohim (אלהים) plus the 200-value of the regressive spelling of the
Name Elohim (א־אל־אלה־אלהי־אלהים).
These paths are simple enough on paper for us to grasp and to marvel at with awe and yearning for greater understanding; after all, they lead to freedom from slavery as explicitly stated in the
Torah. Yet even the simplest ones involve the collapsing of the 2 dimensions of space into one and the ability to follow and vibrate with the crisscrossing streams of consciousness plasma.
All these simple paths are twice a Triangular Number, meaning they each generate 2 symmetrical n^th Triangular Fields each of total value of (n + n^2)/2, thus the simple path of Keter, or 20 that
leads to 420, generates 2 symmetrical Triangular Fields each of 20 levels that stretches from 1 to 210.
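The identity behind this claim, n + n^2 = 2 × T(n), holds for every n, not just Keter’s 20; a quick Python check (a sketch, not from the original text) confirms it:

```python
def triangular(n):
    """The nth Triangular Number, T(n) = n(n+1)/2."""
    return n * (n + 1) // 2

# Every simple path n + n^2 is twice the nth Triangular Field
for n in range(1, 101):
    assert n + n**2 == 2 * triangular(n)

assert triangular(20) == 210 and 2 * triangular(20) == 420  # the path of Keter
```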
This is like Abraham, the 20^th generation, whose name is found 210 times in the Torah, 151 as Abraham and 49 as Abram, which corresponds to the 151-value of the expanded Name of Binah, Ehyeh (
אלף־הה־יוד־הה) and the 49 or (7 x 7) Gates or levels that it takes to reach Binah consciousness, as in the Omer period from Pesach to Shavuot.
This further aligns with the string …756… that has the value of sefirot (ספירות) and is found at the 210^th digit in Pi (π), and that is most importantly the radiance of the Primal Frequency 27.5^2
or 756, which is reflected in the 756’ base of the Great Pyramid, while 210 is reflected in the 210 levels and 210 cubits of its height. Considering the Creation Circle of 3000, a square formed from its circumference has a perimeter of 3000 and 750 per side, or 6 units per side less than the square base of the Great Pyramid that is based on the radiant Primal Frequency and that has a 3024’ perimeter.
The Galactic Wave
The Primal Frequency is 27.5 Hz, and when that is tuned to the Singularity of 42, or precisely 420.21818 cycles, which is essentially the same as 420.21820, it equals exactly 11,556 cycles, thus creating the 11,556-year
measure of the Spherical Time diameter and the galactic wavelength of 11,556 years that coincides with that precise diameter. Within each wavelength or periodicity of the Galactic Parker Instability
Wave that recycles and refreshes every star in our galaxy, including our Sun, through fresh cosmic dust intrusions and micronovae there are 420.21820 or 420.22 annual cycles of 27.5. This is why the
surface temperature (5778 K) of our Sun is tuned to the 11,556/2 = 5778 half cycle of that continual cosmic event, or 210.11 annual cycles of 27.5 like the height of the Sun Pyramid and the Great
Pyramid that was precisely based on its schematics. At the far end of the Event Horizon, around the year 5805 HC, the grace period is stretched to the bursting point, which takes the galactic
wavelength to its furthest point of 11,611 years, the palindromic reflection of the 11 Sefirot of the Tree-of-Life, the Torah (611) and with 161 in its center, the highest Aspect of the Name of Binah,
Ehyeh (אלף־הי־יוד־הי). This is 422.22 annual cycles of 27.5, as in the 422 value of the Hebrew word for 70, and as in the year –422 BCE when the First Holy Temple was destroyed, and 70 CE when the
second Holy Temple was destroyed after standing for 422 years. Are the Holy Temples also metaphors for the Earth, the Creator’s Holy Temple? The year and Spherical Time radius of 5778 equals (10,000
– 4222.2) or (10^4 – 4222.2) with 10 to the 4^th order being the extent of Binah.
We thus see that the crowning singular Concept of Keter generates 2 symmetrical n^th Triangular Fields each of total value of (n + n^2)/2, or (2 x 210) = 420 as it expands and radiates dimensionally,
generating 2 symmetrical Triangular Fields each of 20 levels that stretches from 210 to 1 to 210, forming an hourglass torus whose diameter matches the diameter of the Spherical Time bubble and the
periodic wavelength of the Galactic Parker Instability Wave that the Sun Pyramid and Great Pyramid were emulating.
Following the Paths
Following the Paths leads to deeper understanding. They are akin to climbing the ladder. As discussed, Abraham and Sarah are metaphysically related and aligned to Jacob and Rachel, and we know that
(182 + 238) = 420 and that while the numerical value (238) of Rachel is equivalent to 1/42, Abraham is found (5 x 42) times in the Torah. We noted that the 28056 letters Hei (ה) of value 5 equal (4 x
7014), which metaphysically connects the values 4, 14 and 70, just as we recently saw with Rachel and the 70 descendants of Jacob, the 70 years of King David (14), and the 5 times his name (דוד) was
included in the births of the tribes, in particular with Rachel’s ability to give birth. Given all that, we see that the Primordial 3-Phasic Equation equals (1/14 + 14 + 14^2) = 210.07142857 and that its average is (1/14 + 14 + 14^2)/3 = 70.0238095, which breaks down to 70, Rachel (238), and “the King” (95), or exactly (70 + 1/42). This Path has taken us on a
converging journey with the Path of the 14 Triplets of the 42-Letter Name Matrix that resulted in 70 + Pi (π). These are journeys steered by living consciousness, illustrating how 210, 14, 70, the 42
-Letter Name Matrix and Pi (π) are all integrally connected in the metaphysical Cosmos and how those connections are clearly reflected for us in the Torah.
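The average above is not merely close to 70 + 1/42; exact rational arithmetic shows it is precisely that, as this verification sketch (Python, illustrative only) demonstrates:

```python
from fractions import Fraction

# (1/14 + 14 + 196)/3 = 2941/42 = 70 + 1/42, exactly
n = Fraction(14)
avg = (1/n + n + n**2) / 3
assert avg == 70 + Fraction(1, 42)
print(round(float(avg), 7))  # 70.0238095
```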
Meanwhile, the simple path of 12 that leads to (12^1 + 12^2) = 156, Joseph, Rachel’s son, or (6 x 26), extends to (12^1 + 12^2 + 12^3) = (6 x 314). This is in the same way that 1 expands to (1^1 + 1^
2 + 1^3) = 3 and 2 expands to (2^1 + 2^2 + 2^3) = 14, key concepts in the construction and design of the Magic Essential Cube of Creation, along with (13^0 + 13^1 + 13^2 + 13^3) = 2380 or 1/42, from
the Path of One, whereby (1 + 13 + 91 + 455 + 1820) = 2380. Simple consciousness expansion and/or layering leads to ever more complex Concepts.
We also see that 120 is not just the number assigned by the Creator to the age of Man and to Moses, but it is the path of the 4 phases or expansions of 3, or (3^1 + 3^2 + 3^3 + 3^4) = 120. The
Primordial Equation algorithm is designed to work best with numbers between 1 and 10 because they do not rise or contract too steeply and thus do not skew the totals; nevertheless, any number can be
utilized to its fullest by adding a decimal place, allowing us to apply the Primordial Equation algorithm to 1424, the Holy of Holies, as 1.424. Thus, we see that the simple Path of 1.424 leads to (
1.424 + 1.424^2) = 3.451776 and that (1.424^1 + 1.424^2 + 1.424^3 + 1.424^4) = 10.4512045. A little comprehension of the cosmic interface language reveals that 345 is the value of Moses, who lived to 120, the Age of Man (45), and that the year 1776 HC, when the erev rav attempted to establish a new world order with the Tower of Babel, was 120 years after the Flood in 1656, and that while
1776 is (16 x 111), Moses was the 16^th generation after the Flood, which was the 10^th generation.
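The expansions of 3 and of 1.424 cited above can be checked directly (a Python sketch, not part of the original text):

```python
# 4 expansions of 3 give the age of Man
assert sum(3**k for k in range(1, 5)) == 120   # 3 + 9 + 27 + 81

# Phases of 1.424, the Holy of Holies (1424) shifted one decimal place
n = 1.424
assert round(n + n**2, 6) == 3.451776
assert round(sum(n**k for k in range(1, 5)), 7) == 10.4512045
```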
The 3 Phased Names and the Speed-of-Light
When we add the phased expansion of the 3 Names Ehyeh (אהיה), YHVH (יהוה), and Adonai (אדני) that add up to (21 + 26 + 65) = 112 in their first phase, as in the 112 Essential Triplets, we see that
collectively their 3 second phases (n + n^2) equal 5454, or depending on how we want to average it, (2727 x 2) or (1818 x 3), like the 3 iterations (expansions) of the letter Alef (א) that equal 1818.
This is similar to the difference between the radiant value (378 x 27.5) and the standard value (4995) of the 27 letters of the Alef-bet that is (10,395 – 4995) = 5400. Keeping in mind that the 27.5
is a wave frequency before it is a fixed number, such as the 27.5” in a cubit, the value of the radiant 27 positions of the Alef-bet and/or Magic Essential Cube of Creation (378 x 27.5) is 21 less
than the Torah’s (42 x 248) Row-Column Matrix, (10,416 – 10,395) = 21, the value of the Name Ehyeh (אהיה).
The values (5454, 2727, and 1818) are all multiples of 101, as in the Archangel of Protection, Michael. Meanwhile, the sum of the 4 phases of the 3 Names Ehyeh (אהיה), YHVH (יהוה), and Adonai (אדני)
divided by 101 is (n + n^2 + n^3 + n^4)/101 = 18808998/101 = 186,227.7, as in the speed-of-light, off by only 54 mps, in the same way that the sum of the 72 square roots of the 72 Essential Triplet
Matrix and the 9 square roots of the rows of the 72 Essential Triplet Matrix equal 1000.054, and the 54-unit shift in the odd/even Alef-bet split to the Phi (φ) split. This also means that the 3
Triangular Fields of the 3 Names of 21, 26, and 65 = 2727, like the 27 letters and positions of the Alef-bet and Magic Essential Cube of Creation. This cosmic equation of the speed-of-light matches
the ratio of Spiritual Time to Physical Time, (3760/2018) = 1.86323, whereby the sum of the digits in (3760/2018) = 27, as they do in 5778 or (54 x 107).
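The sums of the Name phases and their divisibility by 101 are straightforward to verify with a short Python sketch (an illustration, not part of the text):

```python
# First-phase values of the 3 Names
names = {"Ehyeh": 21, "YHVH": 26, "Adonai": 65}
assert sum(names.values()) == 112              # the 112 Essential Triplets

second = sum(n + n**2 for n in names.values())
assert second == 5454 == 2 * 2727 == 3 * 1818
assert all(x % 101 == 0 for x in (5454, 2727, 1818))

four_phase = sum(n + n**2 + n**3 + n**4 for n in names.values())
assert four_phase == 18808998
assert round(four_phase / 101, 1) == 186227.7  # near the speed of light in mps

assert sum(n * (n + 1) // 2 for n in names.values()) == 2727  # Triangular Fields
```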
The Phased Alef-bet
As we know, it is the residues in mod 27.5 that make the Primal Frequency unique and that when we limit these rep numbers to triplets of 3 decimal places, their sums of .090, .181, .272, .363, .454,
.545, .636, .727, .818, and .909 equal 4.995, as in the sum of the 27 letters of the Alef-bet, 4995. The value for the 22 letters on the other hand is 1495, and when we use 1.495 as a basis for the
Primordial Equation algorithm as above, we see that 1.495^4 = 4.995336, as in the 4995 sum of the 27 letters of the Alef-bet, which is also the sum of the 10 residues of the Primal Frequency, linked
together with the 336 letters in the 112 Essential Triplets. With one simple operation all these primordial parts of the physical and metaphysical universe are integrally connected and related
simultaneously, offering yet another proof that the 5 final letters (ךםןףץ) that take us from 22 to 27 letters were designed into the universe from the start, regardless of the musings of career academics.
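Both the residue sum and the fourth phase of 1.495 can be confirmed numerically (a verification sketch, not part of the original text):

```python
# The 10 three-decimal residues of mod 27.5
residues = [0.090, 0.181, 0.272, 0.363, 0.454, 0.545, 0.636, 0.727, 0.818, 0.909]
assert round(sum(residues), 3) == 4.995        # as in the 4995 sum of the 27 letters

# 1.495^4 begins with the same digits, 4.995336...
assert abs(1.495**4 - 4.995336750625) < 1e-12
```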
Moreover, the simple Path of 1.495 leads to (1.495 + 1.495^2) = 3.7300 as in the (37 x 73) structure of the First Verse of Creation. The number 373 is somewhat special as the 74^th Prime Number
because it is the sum of 5 consecutive Prime Numbers (67 + 71 + 73 + 79 + 83) = 373 that are also centered on the First Verse Core Prime Number 73 and that begin with Binah (67) and end with 83, the
Creation Prime Number associated with the ordinal values of the 7 words of Creation. It is also the sum of the squares of the first 5 odd prime numbers (3^2 + 5^2 + 7^2 + 11^2 + 13^2) = 373.
Moreover, while Adam’s lifespan is a ratio of 930/2488 = .3737 of the 26 generations, the cube of 72 equals 72^3 = 373248, aligning it with the 4 phases of the Cosmic Harmonic residue that equals (.273^1 + .273^2 + .273^3 + .273^4) = .37343, and the Core Concept of 248 that encapsulates the Torah, along with the 42 x 248 Row-Column Matrix of the Torah.
In reference to the 72 Triplet Matrix, the sum of the 27 letters that total 1.495 when the 4 phases (n^1 + n^2 + n^3 + n^4) are applied to each letter is 11.43426537, as in the 1143-value of the Matrix’s first-row total of 1143, and its 8 columns that equal (9143/8) = 1143, coupled with 42, 65, and 37. Meanwhile, the average of the 3 phases of the 27 letters is (n^1 + n^2 + n^3)/3 = 3.300270, as in the
cycles in the age of Man (120 x 27.5) or (1.100090 x 3).
Moreover, the 4 phases of the 27 letters equals (4.995^1 + 4.995^2 + 4.995^3 + 4.995^4) = 777.074147… as in Lamech, the 9^th generation, who lived 777 years, and as in those 9 that make the Primal
Frequency (27.5 Hz) unique. Furthermore, the 74 in this result not only aligns with the 74^th Prime Number result of the simple Path of 1.495 but (37 x 74) = 2738, as in the molecular weight of the 20
Amino Acids in our DNA, 2738.01 g/mol, which likewise ties in the Cosmic Harmonic residue (273). Everything can be traced back and reverse engineered to layers upon layers of geometries, even our
DNA, and even to the 147 years of Jacob and the ladder of (7 x 21).
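The four phases of 4.995 indeed total 777.074147…, and the related products check out as well (a verification sketch in Python, not part of the original text):

```python
# Four phases of 4.995, the sum of the 27 letters shifted three decimal places
n = 4.995
assert round(n + n**2 + n**3 + n**4, 6) == 777.074147  # Lamech's 777 years

assert 37 * 74 == 2738   # as in 2738, twice 37^2
assert 7 * 21 == 147     # the 147 years of Jacob
```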
Within the 26 Layers
Within the 26 Generations of Adam, we see that Enoch, who lived 365 years and was taken by G-d, can be paired with Mahalalel, and together they lived (365 + 895) = 1260 years, or exactly 1/10^th the 12600
collective years of the 26 generations, matching the Primordial Equation 26(φ^2 + φ + 1/φ) = 126, and the faces and planes of the Magic Essential Cube of Creation, each 126. Enoch represents the 365
solar days in the Earth orbit that progresses at 66612 mph, like the 66.6612 sum of the 26 Logarithms of the 26 generations and like the Primordial Pi (π) Equation (π^2 + π + 1/π)/2 = 6.665.
Enoch’s life can thus be said to be a measure of time. Moreover, Enoch’s lifespan aligns with the 365 years between Abraham’s death in 2123 HC and the passing of Moses 6 generations later in 2488 HC.
They were the two men closest to the Creator in the Torah after Adam. Abraham was born 1326 years after Enoch, which is exactly (12 x 27.5) = 330 years less than the date (1656 HC) of the Flood, the
Great Reset, as in the 330 cu base of the Great Pyramid, etc. Thus linked together, we also see Enoch’s 365 years and Abraham’s numerical value of 248 being reflective of the two sets of 365 prohibitive and 248 positive precepts of the 613 mitzvot.
Those 365 years also parallel Abraham’s father, Terach who was born 570 years before the Exodus and the reception of the Torah, as in the 570-value of the Hebrew word for 10, Eser (עשר). Terach died
365 years before that same pivotal date, 2448 HC. One of the secrets that unfold from the Hebrew word for 10, Eser (עשר), which is built into the second tier of the 42-Letter Name Matrix (קרעשטן), is
that the 3 Names or spelled-out letters in the triplet Eser (עשר) equal (130 + 360 + 510) = 1000, as in 10^3, which represents geometrically a cube of 10^3, like the 1000 letters of the Shema, like
the 1000.0 sum of the square roots of the 72 Essential Triplet Matrix, and as in the fullness of Binah consciousness. This shows us that the double expansion of 10, as in the 10 Sefirot or dimensions, is latent within the Name Eser (עשר) from the outset. When that expansion to 1000 is added to that second tier of the 42-Letter Name Matrix (קרעשטן) its full value becomes (9^3 + 10^3) =
1729, as in the 1729 times the word value 26 appears in the Torah.
10, Eser (עשר), brings up the letter Ayin (עין) for 70, whose spelled-out Name value is 130, which has a cube root in turn of 5.065797, or 506 plus 5797, which is 19 years beyond the start of the Event
Horizon in 5778. The value 130 is that of Sinai, the place where the Israelites received the 10 Commandments in the Torah’s 70^th Chapter. While 506 is the complete value of Moshiach Ben David, the
complete value of “In (at) Mt. Sinai (בהר־סיני)” with the kolel is 420, as in the 420 YHVH (יהוה) in the Torah up to the 10 Commandments and the 420 cycles of the Primal Frequency in the periodicity
of the Galactic Wave and the diameter of Spherical Time.
Meanwhile, those 12600 total years occurred over a span of 2488 years, ending when Moses passed away and the Israelites entered Israel, a possible reference to the 248 dimensions of E[8] and the 24880-mile circumference of the Earth. The average is (12600/2488) = 5.06, as in the first tier (506) of the 42-Letter Name Matrix Ladder. This is also cosmically aligned with the 2 phases or layers of
the equation of the 22 letters, that equal (22 + 22^2) = 506. As the head of the 7 tiered 42-Letter Name, the first tier (אבגיתצ) is also aligned with the sum of the squares of the first 506 digits
in Pi (π) which equal 14127 = (14,000 + 127) = (7 x 2000 + 127), as in the 7^th Mersenne Prime, 127. This in turn cosmically aligns with the 12700^th digit in Pi (π) which is where we find the first
numeric string ….1111…. or …111126358…., coupling it with the Concepts of 112, 26, and 358, Moshiach. Meanwhile, the sum of the squares of the remaining (1000 – 506) = 494 digits to the end of the
first 1000 total 14285, as in 1/7 = .14285…
The design of those generations serves multiple metaphysical purposes, and they act as bridges between Concepts or areas of consciousness. The logical split in the generations takes place at Noach’s 10^th generation when the Flood or Great Reset occurred, and this is how the Torah presents them to us, as a chronicle of 10. The next division occurs with the second group of 10 generations
chronicled through Abraham. The numeric difference between Noach’s death in 2006 HC and Abraham’s birth in 1948 HC, the Divine Calendar mirror of the year 1948 CE when Israel became a nation, is 58
years, as in the numeric value of Noach (נח), 58. Like the 26 generations, the year 2006 most obviously hints at the 26-value of the YHVH (יהוה) consciousness cloud. One of the bridges within the
circuitry is found in the 930 years that passed between Noach’s son, Shem’s birth in 1558 HC, which begins the 11^th generation, and the passing of Moses in 2488. This bridge aligns with Adam’s
lifespan of 930 years. Adam was the 1^st generation and Shem was the 1^st generation after the Flood or Great Reset.
The Final Set of Generations
6 generations lasted 541 years from the birth of Abraham through the passing of Moses and this aligns with the nation of Israel, whose numerical value is 541. Technically, those are 7 generations,
including Abraham, and they have an average lifespan of exactly (175 + 180 + 147 + 137 + 133 + 137 + 120)/7 = 147 years, matching Jacob’s lifespan, and matching the 7 Names Ehyeh (אהיה) in the Torah
or (7 x 21) = 147, one for each of these generations. Jacob, the 22^nd generation in the middle of those 541 years was given the additional and original name of Israel. This just scratches the
surface of the metaphysical worlds in the same way that our physical one is but a thin translucent film on the surface of a soap bubble. The sum of those 7 generations that average exactly 147 is (
175 + 180 + 147 + 137 + 133 + 137 + 120) = 1029 years, which leaves us with (12,600 – 1029) years for the total lifespans prior to Abraham. This is equivalent to 11,556 years plus 15 years, which
represents the Spherical Time Diameter plus 15 years, as in the waters of the Flood that rose 15 cubits above the Earth.
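The lifespan arithmetic of these 7 generations is easy to verify; a minimal Python sketch, using only the numbers quoted above:

```python
# Lifespans of the 7 generations from Abraham through Moses, as listed above.
lifespans = [175, 180, 147, 137, 133, 137, 120]

total = sum(lifespans)            # 1029
average = total / len(lifespans)  # exactly 147.0, Jacob's lifespan
remainder = 12600 - total         # 11571 = 11556 + 15
```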
The sum of the lifespans of the last 6 generations, though, equals (180 + 147 + 137 + 133 + 137 + 120) = 854 years, which aligns directly with Phi (φ)^4 or 6.8541 that contains 854 and 541 (Israel),
and with 854 years being 6.778% of the 12,600 total years, matching the log of 6,000,000 or 6.778.
Moreover, the sum of the net differences between one generation and the next for these last 6 generations is (33 + 10 + 4 + 4 + 17) = 68, the value of chaim (חיים), life, which is also 6.778 rounded
off and is Phi (φ)^4, 6.8541 truncated. While the ratio of the two mega metaphysical Concepts 42/26 is Phi (φ), their sum (26 + 42) = 68, just like a shift of 5 in their values that realigns them to the other
two mega metaphysical Concepts, (21 + 47) = 68.
Within those last 6 generations we have Levi, the progenitor of all the Cohanim and Levim including Moses and Aaron. Levi was born in the year 1565 BCE, as in the string value of the YHVH (יהוה) and
he lived 137 years as in the word Kabbalah, meaning “to receive.” The first time we find the numeric string …1565… in Pi (π) it is not until the 13,586^th digit, and it is part of a duplicate sequence
…17156551565666111…. Without ascribing meaning to the sequence, its address implies 13 Echad, One; 358 Moshiach; 1358, the value of the Baruch Shem…; 586, Jerusalem and Shofar including the 3 Aspects
of Elohim (אלהים); and concluding with 86, Elohim (אלהים). Or it can imply nothing at all; it just depends on our level of consciousness. Paths can be followed or ignored. They have already been
blazed; we just have to follow the signs.
Up to that point within Pi (π) there have already been over 13,000 four-digit numbers, yet it was here that it was decided to implant 1565 back-to-back. Levi died in the year 1428 BCE, once again as
in 1/7^th. The string 1565 is also formed by the 4 initials (אהוה) of the 4 final words of the 7 words of Creation (אֵת הַשָּׁמַיִם, וְאֵת הָאָרֶץ). The katan value of the initials (ברא) of the first 3 (בְּרֵאשִׁית,
בָּרָא אֱלֹהִים) of those 7 words is 5 and (1565/5) = 313, as in the number of occurrences of the Name Elohim (אלהים) in the Torah.
Another example of deeper metaphysical integration among the generational ladder is found in the 962-year lifespan of Yered’s 6^th generation, which when multiplied by 6 gives us (6 x 962) = 5772, as
in the Euler-Mascheroni Constant (.5772156…) that is integral to particle physics and mathematical progression, and as in the Spherical Time radius (5772 + 6) = 5778, the Event Horizon. It is also
the sum of the Alef-bet, 4995 together with its first string …4995… location in Pi (π), or (4995 + 777) = 5772, as in the age of the 9^th generation, Lamech, 777.
The final 6 generations began with the birth of Isaac and lasted 441 years through the passing of Moses in 2488 for an average of 73.333 per generation. Those 441 years are not only equivalent to the
first 6 natural cubes (1^3 + 2^3 + 3^3 + 4^3 + 5^3 + 6^3) = 441, as in the 6 generations, but 441 is also 21^2 with 21 being the value of the higher Name Ehyeh (אהיה), and with 441 also being the
value of Emet (אמת), Truth, an appellation for the Torah and the final letters of the first 3 words of the Torah (בְּרֵאשִׁית, בָּרָא אֱלֹהִים).
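The equality of the first 6 cubes with 21^2 is an instance of the classical identity 1^3 + 2^3 + … + n^3 = (1 + 2 + … + n)^2; a quick check in Python:

```python
# Sum of the first 6 natural cubes versus 21 squared (21 = 1+2+3+4+5+6).
cubes = sum(n**3 for n in range(1, 7))  # 441
triangular_6 = sum(range(1, 7))         # 21
square = triangular_6**2                # 441
```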
While Moses lived (120/180) = .666 or 2/3 as long as Isaac, Moses lived (180 – 120) = 60 years less than him and since Isaac fathered Jacob at the age of 60, Moses lived the same number of years (120
) as Isaac and Jacob shared. While Moses lived (137 – 120) = 17 years less than his father, Amran, whose 137 years are evocative of the Fine Structure Constant (1/137) and the Hebrew word for
parallel, kabbalah (137), this is equivalent to the 17 good years that Jacob spent with Joseph at the end of his life and/or the 17 years that he spent with him at the beginning of his life, and like
the 17^th Day of the 7^th month when Noach’s ark came to rest, and as in the katan value of the word Torah.
Adding Abraham, as in the 7 generations whose lives average 147 years, the sum of the net age differences goes from 68 to (5 + 68) = 73, as in the value of Chochma (חכמה) the highest
attainable level (Sefira) and as in the 73^rd Triangular Field, the value of the First Verse.
Some of the more notable connections and alignments between the generations are easily recognized in the cumulative differences between their lifespans. Those differences across the first 8 generations from Adam to Methuselah, the oldest man, total 1313, as in the constant of the exponential curve of the Alef-bet (1.313^x), and as in the year 1313 BCE when the Torah was received at Mt Sinai, and as in Abraham
and Sarah (808 + 505) = 1313.
Then there are the cumulative differences between the lifespans of the first 7 generations from Adam to Enoch, 709, which matches and aligns directly with the value of the 7 double letters (בגדכפרת
) in the Alef-bet that Abraham explained are the guiding letters to the 7 planets in the Cosmic Wheel. These are the 7 double letters (בגדכפרת) that orbit the center of the 42-Letter Name Matrix, and
the two stacked Magic Essential Cubes of Creation of Constant 42. So up to Enoch, who lived 365 years, as in the annual time periodicity based on Earth’s motion around the Sun, the age differentials between the generations were 709 years, which relates to the 7 planets in our cosmic solar system, and through his generation into the 8^th generation, as in Binah and the consciousness of freedom, the total age differentials are 1313 years, as in the Constant of the Alef-bet and the year freedom and elevation was offered to the Israelites at Mt Sinai, of numerical value 130, as in the age that Seth was born to Adam. We can read the Torah as quaint historical stories and boring chronicles, or we can read it to gain actual understanding about the nature of history and the workings of the Cosmos.
Meanwhile, the first 6 generational differences from Adam to Yered sum to 112, as in the 112 Triplets in the Torah that are aligned with the 112 chakras in Man’s body, while the sum of the 5
generations from Adam to Mahalalel is 45, as in the numerical value of Adam, Man, and the Aspect of the YHVH (יהוה) associated with Man, the YHVH-ban (יוד־הא־ואו־הא).
The first 3 generational differences from Adam to Kenan represent 4 generations, all having meaning as well. For example, the Concept of 18 formed by the difference between Adam and his son Seth connects to chai (חי), life, Moshiach Ben David consciousness (4.24), etc., while the sum of these first two generations equals (930 + 912) = 1842 years, as in the 47^th Tetra Field, 18424, etc. that
expresses the number 18 and its square root (4.24), not to mention the sum of the 22 Names of the Alef-bet in reverse, 4248, while 47 is the unification of the Names Ehyeh (אהיה) and the YHVH (יהוה).
Meanwhile, Seth was born in 3630 BCE, and he died in 2718 BCE, as in the values of H’Moshiach (363) associated with the Spherical Time circumference, 36304.24470, and as in 2.7182818…., Euler’s
Number (e), also known as Napier’s Constant (e), respectively.
And the years corresponding to the 13^th generation or Shelach’s birth and death total (2067 BCE + 1634 BCE) = 3701, as in the value of the 42-Letter Name Matrix, while Nachor, the 18^th generation, died in 1764 BCE, as in 42^2, just as 13 is the gematria of One, Echad (אחד), and 18 is the gematria of “The One,” H’Echad (האחד), with (1764 – 1634) = 130 years in between them.
That difference dovetails with Adam, who fathered Seth after 130 years, as in Sinai (130). Meanwhile, Amran fathered Moses, the 26^th generation, when Amran was 107, as in the 107^th Triangular
Field that is the 5778 Event Horizon. Moreover, Kehath fathered Amran at age 46 and Levi fathered Kehath at age 20, meaning the last 3 generations leading up to Moses are equal to (107 + 46 + 20) =
173, as in the katan gematria of the 42-Letter Name Matrix that are the 173 keys to Heaven given to Moses, while the last 6 generations of fatherhood from Isaac’s birth to Abraham through Moses’ to
Amran equal (100 + 60 + 87 + 20 + 46 + 107) = 420.
Logically, the sum of all the fatherhood ages equal (130 + 105 + 90…+ 107) = 2368, as in the year 2368 HC when Moses was born. Meanwhile, the total for the 4 consecutive forefathers (Kenan,
Mahalalel, Yered, and Enoch) equal (90 + 70 + 65 + 162) = 387 years, as in the 387-value of the 3^rd tier (נגד־יכש) of the 42-Letter Name Matrix, while the sum of the fatherhood ages for the 4
consecutive fathers (Enosh, Kenan, Mahalalel, Yered) equal (105 + 90 + 70 + 65) = 330, as in the value of the Essential Triplet (יכש) from that same tier, and as (12 x 27.5), which is 57 years less
than 387, as in the value of the other Essential Triplet (נגד) from that tier.
The Gateway Equation
While the sum of the net age differences of the 7 generations from Abraham to Moses is (5 + 33 + 10 + 4 + 4 + 17) = 73, the first 3 cumulative generational differences, between Adam, Seth, Enosh, and
Kenan is also (18 + 25 + 30) = 73, aligning those 11 generations and 9 generational differentials between them with the 9^th Sefira of Chochma and with the 73^rd Triangular Field that defines
Creation. Meanwhile, the difference between the 9^th and 10^th generations or the last one until the Flood, between Lamech and Noach, is 173, as in the katan gematria of the 42-Letter Name Matrix,
and as in 1/5778 = .00017307026, which directly links the two Great Resets. Moreover, the sum of the final 17 generational differentials is 1024 years, as in the Torah’s 2^10 hypercube Word-Value
Matrix and as in the 32^2 Paths of the Tree-of-Life.
Moses blessed B’nei Israel after they completed the work of building the Mishkan of 42. For the second time the Torah lists the 42 elements of the Mishkan (Tabernacle) consecutively. They are presented in two
different manners with the first one being more of a materials blueprint and the second time occurring during the summary of its completion in Pekudei 39:33. Unlike the Blessing of the Cohanim, the
specifications of Moses’ blessing are not given, only that “Moses blessed them (וַיְבָרֶךְ אֹתָם, מֹשֶׁה)” which has the numerical value of 1024, as in the 32^2 Paths of the Tree-of-Life. Its 3 words and 11
letters also align with the structure of the Tree-of-Life with its 42 letters divided into 11 sefirot and separated into 3 columns. The Zohar explains that the Sefirot came first and then were
organized into the Tree-of-Life structure so once again we see that the 42 Letters came about as a singular consciousness and perhaps self-organized into the 11 Sefirot and then 3 Columns connected
by discrete pathways.
Using the 42-Letter Name Matrix as their blessing, Moses connected them (the workers) and the 42 elements of the Mishkan to the Tree-of-Life. The word “them otam (אֹתָם)” is a permutation of “Emet (
אמת), Truth” and aligns with the 42-Letter Name Matrix that begins with Alef (א) and ends with Tav (ת) and has 40 (מ) letters in between. The 3 initials and 3 final letters in “Moses blessed them (
וַיְבָרֶךְ אֹתָם, מֹשֶׁה)” equal (47 + 65) = 112, as in the 112 Essential Triplets and/or the invocation of the 3 Names Ehyeh (אהיה) and the YHVH (יהוה) and Adonai (אדני), leaving the middle 5 letters to equal
912, as in Seth’s age.
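The 1024/47/65/912 breakdown of the phrase can be reproduced with the standard gematria letter values; a minimal Python sketch (the letter values are the standard ones, and the 3-word split is as given above):

```python
# Standard gematria values for the letters of "Moses blessed them"
# (vayevarech otam Moshe), split into its 3 words.
vals = {'ו': 6, 'י': 10, 'ב': 2, 'ר': 200, 'ך': 20,
        'א': 1, 'ת': 400, 'ם': 40, 'מ': 40, 'ש': 300, 'ה': 5}
words = [['ו', 'י', 'ב', 'ר', 'ך'], ['א', 'ת', 'ם'], ['מ', 'ש', 'ה']]

total = sum(vals[c] for w in words for c in w)  # 1024 = 32**2
initials = sum(vals[w[0]] for w in words)       # 47
finals = sum(vals[w[-1]] for w in words)        # 65
middles = total - initials - finals             # 912
```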
Meanwhile, the albam gematria for the phrase “Moses blessed them (וַיְבָרֶךְ אֹתָם, מֹשֶׁה)” equals 963, as in 963-value of the spelled out (אלף־חית־דלת) milui gematria form of the Hebrew word for One, Echad (
אחד), invoking Oneness, and as in 5778/6 = 963 and/or (107 x 9) = 963. Given the 11 letters in the phrase and its connection to the 11 sefirot, the albam gematria cipher shift of 11 letters is most
appropriate, as is the addition of the 11 for the kolel to the complete sofit gematria total, which is (2064 + 147 + 11) = 2222, a complementary value to 5778. As for the 147 sofit ordinal value of
the phrase it refers to Jacob’s ladder of the 7 rungs/tiers of the 42-Letter Name Matrix and the 7 Ehyeh (אהיה) in the Torah.
In the same way that the Torah concealed the 42-Letter Name Matrix within its first 42 letters, it concealed it here too within Moses’ blessing, along with the 173 Keys to Heaven that the Zohar
explains were given to Moses in the form of the small gematria of the 42-Letter Name Matrix.
1/5778 = .0001730702665282104534440 is far more profound than the obvious connection. Through this simple equation, we can understand that the Event Horizon radius (5778 years) is dependent upon and
defined by the Singularity of 42. This itself is next level understanding because while we cannot change the reset cycle of 5778 years that is hardwired into the mechanics of our universe through the
galactic wave, we can tap into its myriad time-pathways through the 42-Letter Name Matrix. The value 173 of those 173 Keys is even built into the final letters (קעג) of its 4^th, 5^th, and 6^th
tiers, while the sum of those 3 tiers is (704 + 239 + 230) = 1273, the year that Moses passed away (1273 BCE) and the year Israel entered the Promised Land.
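The decimal expansion of 1/5778 used in this Gateway Equation can be regenerated by ordinary long division; a small Python sketch:

```python
# Long-division digits of 1/5778, enough to read the embedded strings
# (173, 0702, 26, 65, 28, 210, 45, 444) discussed in the text.
def decimal_digits(num, den, n):
    digits, r = [], num % den
    for _ in range(n):
        r *= 10
        digits.append(str(r // den))
        r %= den
    return ''.join(digits)

expansion = decimal_digits(1, 5778, 24)  # '000173070266528210453444'
```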
The value 1273 also evokes (1.273) or 4/π, meaning these 3 lines embrace all aspects of the physics of our universe and dynamically the basic relationship of the square to the circle. This further means that these 3 tiers bridge the 1820 of the 2^nd, 3^rd, and 4^th tiers that radiate with the 1820 YHVH (יהוה), and the 1375 of the final tiers, the primal radiance of the Phi (φ) angle 137.5^o and the Primal Frequency (27.5 Hz).
The average value of these 3 tiers is thus (1273/3) = 424.333, as in Moshiach Ben David consciousness. They are also energetically comprised of (1100 + 173) or the major interval (40 x 27.5) plus 173
, much like the final 3 tiers that are attuned to 1375 or (50 x 27.5), with both sets of triple tiers reflecting the radiant Primal Frequency (27.5 Hz) throughout our universe. These are not
coincidental numbers; these are the call signals of this metaphysical two-way radio with its pre-set channels tuned to allow us to communicate with the Cosmos. The value 1375 is also reflected in
the difference between the first 9 generations of Adam until Noach and their full potential of 9000 years, or (9000 – 7625) = 1375.
Beyond the 3 final letters (קעג) that equal 173, the remaining 4 final letters (צנשת) of the 7 tiers total 840 or (20 x 42), and their corresponding 4 tiers equal (2428 + 24 letters – 4 tiers) = 2448, as in the year 2448 HC and the overcoming of the slavery to physicality exemplified in the year of the Exodus and reception of the 10 Commandments. All this illustrates how the 42-Letter Name Matrix accesses the shifting pathways of Spherical Time that radiate from the Singularity of 42 with the Phi (φ) angle 137.5^o and with the Primal Frequency (27.5 Hz).
Meanwhile, the equation 1/5778 = .0001730702665282104534440 also contains the numerical strings that decipher as …1730, 702, 26, 65, 28, 210, 45, and 444 for starters. The value 1730 is obviously (10
x 173) but it also represents the sum of the values of the Names of the 4 upper (Keter, Chochma, Binah, and Da’at) dimensions along with Malchut that equal 1730. The full value of all 11 Names or 42
letters of the Tree-of-Life is 3342, meaning the other bundled 6 dimensions of Zeir Anpin equal (1612 + 6) = 1618 or Phi (φ).
The same value 1730 is embedded in the Torah as the 1730 Israelite men in the second census who would proceed to the Promised Land (Binah) beyond the original 600,000. Perhaps this represents the
1730 successfully developed souls that can filter up and elevate to the next level of universal consciousness each cycle if critical mass is not achieved. It represents .288333% of the 600,000, as in
the 288 Holy Sparks that the Zohar tells us must be retrieved from our world every cycle.
When the (1.273) or the 1273 value of the 4^th, 5^th, and 6^th tiers of the 42-Letter Name Matrix is added to the 1730 men, who perhaps understood how to harness them, they add up to (1273 + 1730) = 3003, the
value of the 11 Essential Triplets of Bereshit that average 273 per Triplet, as in the Cosmic Harmonic residue (.273). This is the 77^th Triangular Field of mazal, and it aligns with the 42^nd
Chapter of the Torah, when Abraham begins his journey that is found in the 77^th Paragraph of the Torah. We all have journeys and based on the teachings of the Baal Shem Tov there are 42 journeys of
our individual souls or consciousness units and 42 journeys of our collective soul, 42 steps of elevations that the 42-Letter Name Matrix was designed to help us with.
The paragraph of the 42 Aspects of the Mishkan is followed by 33 commands to Moses by the Creator regarding setting up those 42 Aspects of the Mishkan. This aligns with the related 33 simultaneous
letters in the consciousness field of the 11 Essential Triplets, which are the first 33 of the first 42 letters of the Torah that transformed from the 42 letters of the 42-Letter Name Matrix and that
are aligned with the 42 letters of the 11 Sefirot (dimensions) of the Tree-of-Life which total 3342. Moreover, the last two of the first 33 letters within the 42-Letter Name Matrix are LaG (לג) of
numerical value 33, as in LaG (לג) B’Omer within the Cosmic Wheel. Meanwhile, those first 11 Essential Triplets within the 42-Letter Name Matrix equal 2608, which plus the kolel (11) equals (2618 + 1), or Phi (φ)^2 + 1. The first 33 letters of the 11 Sefirot, covering all 9 Sefirot from Keter to Hod, total 2766, meaning that the final 9 letters of Yesod and Malchut equal 576 or 24^2, alluding to
their enlarged influence over Physical Time.
Everything stems from the Singularity of 42, including the Torah’s 28-letter First Verse of Creation that forms a perfect cube of 288000000000^3, like the 288 Holy Sparks and like the percentage of
the 1730 men. These 28 letters are 28/42 or .666 of the 42 transformed letters from the 42-Letter Name Matrix, like the 66.6 jubilee years from 2448 to the Event Horizon and the centroid of median 42
, where the swords of 42 cross in the center. Those 28 letters were transformed from the first 4 tiers of the 42-Letter Name Matrix: the first tier that sums to 506, as in the sum of the first 11 natural squares and the complete value of Moshiach Ben David consciousness, and the next 3 tiers that sum to 1820, as in the radiant 1820 YHVH (יהוה) of the Torah and the Path of One. Meanwhile, the percentage of the first tier (אבגיתצ) over the next 3 tiers is 506/1820 = .2780 as in Ohr Haganuz, the light of Moshiach.
The value achieved by climbing the 42 letters of the 11 Sefirot (dimensions) of the Tree-of-Life that are associated with the 42 letters of the 42-Letter Name Matrix, or 3342, when combined with the 288 Holy Sparks embedded in the seed of our world and the Torah equals (3342 + 288) = 3630, as in H’Moshiach consciousness, and the Spherical Time circumference, 36304.24470, and as in the year of
Seth’s birth, 3630 BCE. There are 2358 years between Seth’s birth and Moses’ death, as in 358, Moshiach. This connection is more than casual as Moshe was the incarnation of Seth, and as Seth lived 18
years less than his father Adam, Moshe was 18 when he slayed the Egyptian, avenging the slaying of Abel, his first incarnation. The Torah is constantly and actively teaching us. Our ability to learn
is based on the state of our journey.
We tend to see the 42-Letter Name Matrix as 42 Letters, but they are 42 distinct Concepts of living consciousness that came together in a specific networked arrangement early in the formation of the cosmic universe,
and as the Ancient of Ancients it has the power of guiding potentiality and channeling primordial Essential energy. This is why a small holographic piece of this original Concept Name evolved into a
massive cube of 288000000000^3 to help form the numerous Torah matrices within it. Literally everything can be tapped into when connected to the Singularity of 42. As Rav Brandwein, of blessed
memory, intimated, to what degree we can tap it depends on our understanding of it, including achieving the geulah, the final redemption and the end of cycles.
Regarding the Gateway Equation 1/5778 = .0001730702665282104534440, the next numerical string within it deciphers as …702…, the value of Shabbat, as in the 1000-year Great Shabbat that could come
after the Event Horizon. We must understand that (5778 + 702) = 6480, which is the square root of 42 or 6.480 that we saw as the product of the digits in the 390,625 and 401273 elements of the Torah
and as a season of the Great Precession (25,920/4) = 6480, etc.
Then sequentially it continues deciphering as the Concept of 26 paired with 65 again, as in the bookend Names YHVH (יהוה) and Adonai (אדני) of the 40 integers between 26 and 65 that equal 1820, as in
the 1820 YHVH (יהוה) in the Torah; the string 28 represents Koach (כח), power, associated with the first 28 letters of Creation; and 210 has many critical roles including the 210 years of exile; while
45 is the value of Adam and Man; and 444 is the value of “From Generation to Generation (לדר־ודר),” as in the reset cycles and the 26 Generations of Adam.
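The claim that the 40 integers bookended by 26 and 65 total 1820 is plain arithmetic; a minimal Python sketch:

```python
# The integers from 26 (YHVH) through 65 (Adonai) inclusive:
# 40 of them, summing to 1820, as in the 1820 YHVH in the Torah.
span = range(26, 66)
count = len(span)  # 40
total = sum(span)  # 1820 = (26 + 65) * 40 // 2
```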
The sum of the differentials between the 26 generations of the Torah from Adam to Moses total 2702, as in the First Verse of the Creation plus One (1), or the 73^rd Triangular Field of 2701 plus One
(1). We saw the number 2702 in the ratio between the Spherical Time diameter and the value of “From Generation to Generation (לדר־ודר),” or (11,556/444) = 26.027027027… which is directly reflective
of the 26 generations through Moses whose generation differentials total 2702.
These circuits run through the 26 layers of the generations, through the heavenly or cosmic concepts and through our time-paths. It is like seeing thousands of stars, which are independently hundreds and thousands of light-years
away from us and from each other, and yet they condense together within our limited field of vision as connected constellations. Like the constellations in the Great Precession of 26,000 years and
the parallel annual Cosmic Wheel, we can superimpose the 26 layers over a map of our history.
From Generation to Generation
That said, the total of the 10 differentials for the 10 generations from Adam to Noach and the Great Reset of the Flood is 2028, and subtracting the kolel (10), we get 2018 HC, as in the Year of
G-d’s Covenant of Halves with Abraham and as in the mirrored year 2018 CE, the Western Calendar analogue to 5778 HC, the Event Horizon and radius of the Spherical Time bubble.
Moreover, the 4 highest differentials occur for Enoch, Yered, Noach, and Eber, the 7^th, 6^th, 10^th, and 14^th generations that total 37. Of course, the differentials are between these and the
following generations, so the total would be (37 + 37 + 4) = 78. What is most descriptive about this is that Eber is the 14^th generation, like David (14) at the center of Spherical Time and that
those 4 generations add up to (604 + 597 + 350 + 225) = 1776, as in the value of the 4^th verse of the Torah, and as in the year 1776 HC when the Tower of Babel reset occurred, 120 years after the
Flood Great Reset in 1656 HC, like the 120 years of Moses, the 26^th generation; and as in the mirrored 1776 CE, when the US was established and the Illuminati was officially revived three months earlier.
This brings up the topic of “from generation to generation” again in that the average of those 4 generations is (1776/4) = 444. Nothing is in the Torah by mistake and thus the ability to read the
Cosmic timeline can serve as a warning and a gauge for us.
There are so many other interesting alignments among the chart of the 26 generations, including Enoch, the 7^th generation living 365 years and then vanishing, which aligns with the 365-year
differential between Methuselah and Noach, the 8^th – 10^th generations. Then there is Lamech, Methuselah’s son and Noach’s father, who lived 777 years as in 77.7…, which is the 4^th root of the 7^
th generation’s 365 years. That makes 4 different connections to 365 within the 4 consecutive generations. The sidereal year divided by 4, or one season, equals (365.256/4) = 91.314, as in the 913
value of the First Word of Creation and Pi (π).
Another alignment is found in Noach’s 950-year life and the 350-year differential with his son Shem, Abraham’s teacher, which equates to (950 + 350) = 1300, as in Echad, One. Moreover, Enoch was the
7^th generation and the sum of Enoch’s birth and elevation years (3139 BCE + 2774 BCE) = 5913, which is the sum of the first 7 factorials (7! + 6! + 5! + 4! + 3! + 2! + 1!), as in the 5913 letters in
the 6 Days of Creation.
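The factorial sum behind 5913 checks out directly; a minimal Python sketch, using the years quoted above:

```python
import math

# Sum of the first 7 factorials: 7! + 6! + 5! + 4! + 3! + 2! + 1!
fact_sum = sum(math.factorial(n) for n in range(1, 8))  # 5913
# Enoch's birth and elevation years as given in the text.
enoch_years = 3139 + 2774                               # 5913
```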
While the 365 years indirectly reference the Earth’s 448,000-mile orbit around the Sun, the 1300 years reference the Sun’s volume to that of the Earth or 1,300,000 to 1, and the Sun’s orbital speed
around the Milky Way at 130 miles per second, and the Milky Way’s speed through space at 1,300,000 miles per hour. In this case, symmetry is a signature. It is the signature of the One.
These 2702 total years represent the absolute differences between the lifespans, yet some generations lived shorter lives than their fathers and thus the total net differentials are only 810, giving
us a ratio of 2702/810 = 3.335802, as in the 333 value of “HaChoshech (הַחשֶׁךְ), the darkness” juxtaposed with the 358 value of Moshiach and the year 5802, which is 24 years after the Event Horizon
began, which was 3330 years after the Torah was received in 2448 HC. It is also like the Event Horizon radius 5778^2 or 33385284, as in the year 3338 HC when the First Holy Temple was destroyed. The
inverse of the total differential over the net differential is 810/2702 = .299778, like the speed of light 299,792 KM/sec generated from the Sun’s surface at 5778 K.
The Cosmic Altar
The ratio of the total ages to the total years for the 26 generations is (12600/2488) = 5.06 years per year elapsed, as in the first tier (506) of the 42-Letter Name Matrix Ladder, and as in the
complete value (506) of Moshiach Ben David, the higher consciousness. Meanwhile, the total value of the 42-Letter Name Matrix divided by 506 is (3701/506) = 7.3142, as in 3142, the string gematria of
the highest Name Ehyeh Asher Ehyeh (אהיה־אשר־אהיה) and the sum of the 3 iterations of the expansions of the Names Ehyeh (אהיה) and the YHVH (יהוה). The highest Name Ehyeh Asher Ehyeh (אהיה־אשר־אהיה
) contains the two wings Ehyeh (אהיה) each of value 21 and not only do we find the numeric string …3142… for the first time in Pi (π) at digit #2121, but the complete value of the central word (אשר)
in the Name Ehyeh Asher Ehyeh (אהיה־אשר־אהיה) has the value 543, which is the same standard value as the entire Name (אהיה־אשר־אהיה) and we find …543… for the first time within Pi (π) at digit #273,
as in the Cosmic Harmonic residue (.273) around which our physical Cosmos are structured. And since the standard value of Asher (אשר) is 501, and …501… is found for the first time at digit #1343, we
see that (2121 + 273 + 1343) = 3737, as in the Core Prime Essential Concept.
The value 506 is also the value of Asherah (אֲשֵׁרָה) as in the groves of idolatry of the 70 nations, which makes the first tier of the 42-Letter Name (אבגיתצ) an antidote to ward off their idolatry, and
particularly the spells of the moon worshipers, as explained in the Zohar. This uppermost tier of value 506 is energetically feminine consciousness to the masculine consciousness of the Upper 42
letters of the Name YHVH (יהוה) of numerical value 708. These two clouds of consciousness link together and activate new circuitry. Together, they equal (506 + 708) = 1214, as in the value of the Hebrew phrase “70 Languages.”
More specifically, the result of (3701/506) = 7.3142292, which contains the additional digital string 2292 or 2.29166, the Harmonic Space/Time Ratio of the cubit to the foot, 27.5/12 = 2.29166
inches, or alternatively, the Primal Frequency (27.5 Hz) over the 12 lunar months in the Cosmic Wheel, which is also Pi (π) times the Fine Structure Constant or π/137.03599… = .0229253107420, a
numerical string that also contains the 22^nd Triangular Field (253), along with the numbers 107 and 420, the two numbers associated with Spherical Time radius and diameter. We should give this a
moment to sink in: the product of two of the most important constants in our physical existence, Pi (π) and the Fine Structure Constant equate to one of the most profound metaphysical ratios and the
interplay of the highest Names of the Creator. The ratios and the Concepts of the Names set the parameters for our observed physical constants and their various fields. It is no coincidence then that
Jacob’s Name has a complete value of (182 + 47) = 229.
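Setting the interpretive claims aside, the plain arithmetic quoted above can be verified directly; a quick check (the numeric constants are the ones cited in the text):

```python
import math

assert abs(3701 / 506 - 7.3142) < 1e-4         # 42-Letter Name total over 506
assert 506 + 708 == 1214
assert abs(27.5 / 12 - 2.29166) < 1e-4         # cubit-to-foot ratio cited above
assert abs(math.pi / 137.03599 - 0.0229253) < 1e-6
assert 2121 + 273 + 1343 == 3737
assert 22 * 23 // 2 == 253                     # 22nd triangular number
```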
Ideally the 26 Generations of the forefathers would each have lived 1000 years to coincide with the (26 x 1000) = 26,000 years of the circumference of the Great Precession of the Earth’s axis and the
parallel Cosmic Wheel. This is also a difference of (26000 – 24880) = 1120 from the circumference of the Earth itself, as in the 112 Essential Triplets that align with our 112 chakras, just as 2488
aligns with the end date of the 26 Generations. The Binah or spiritual counterspace for Moses and the final generation was thus (1000 – 120) = 880, as in the 24880 circumference of the Earth that
aligns with the year he passed away, 2488, the year the Israelites finally entered the Promised Land, the Land so much of the world seems desperate to take away from them.
Subtracting their ages as counterspace, generation by generation, from 1000, and cumulatively adding them gives us 253, the 22^nd Triangular Field, for the first 3 generations: (70 + 88 + 95) = 253.
The total 26 Generations of differentials from Binah completion equal (26000 – 12600) = 13400 years = (6700 x 2) or (670 x 20) years, as in Binah (67) and the 670 paragraphs in the Torah times Keter.
Most of the 13400 year differential lies in the later generations who died younger than the earlier generations, with Adam’s differential being (1000 – 930) = 70 and Moses’ being (1000 – 120) = 880,
making the two bookend generations equivalent to (70 + 880) = 950, the lifespan of Noach, the 10^th generation and that of the great reset. The age of Man declared at the time of the Flood is thus (
1000 – 120) = 880 = (32 x 27.5), 32 cycles of the Primal Frequency from Binah existence, as in the 32 Paths of the Tree-of-Life. The cumulative lifespan differentials through the 8^th generation,
Methuselah, the oldest man, is 1152 years which is 2304/2 or exactly half a mile in cubits (27.5”), or (5280’ x 12”)/27.5 = 2304, making 1152 a quarter mile. Nonetheless, this takes us back to the
first tier (אבגיתצ) of the 42-Letter Name Matrix that we saw tied to the letter Ayin (ע) of numerical value 70, where the numeric string …2304… is part of the string gematria of that first tier (
אבגיתצ), especially as utilized in the Pi (π) equation: 9/.123049… = 73.141593 = (70 + 3.141593) that incorporates the value 9 for the 9^th generation and the 9^th Sefira (Chochma) as a crucial
component. We first find the string …2304… in Pi (π) at digit #14777 where 777 is the age of Lamech, the 9^th generation. Nonetheless, the 70 branches of the Tree-of-life are there for multiple
reasons, as are the 70 nations. It is much easier to climb a tree by utilizing its branches than trying to shimmy up its smooth trunk.
As regards the figure 2304, it is both 48^2 and the exact product of the first 17 letters of the Torah, 2304 x 10^16, or 2304 followed by 16 zeros. And while 17 is the value of “good,” tov (טוֹב), and the katan gematria of Torah woven into Pi (π), the sum of the differentials through the first 17 generations is 5782, and 82 is the katan gematria for the entire First Verse.
The sum of the 9 differentials through Lamech and until Noach is 1375, as in the Phi (φ) angle 137.5^o, which is also the value of the 3 bottommost tiers of the 42-Letter Name Matrix, the height of the Gates in the
Holy Temple in inches, (50 x 27.5), 50 cycles of 27.5, leaving (420 – 50) = 370 cycles to complete the galactic wave and the Spherical Time diameter. More precisely, that is 370.1 cycles as in the
3701 value of the 42-Letter Name Matrix.
Including Noach, the 10 differentials with the state of Binah consciousness total 1425 or (1000 + 425), as in the 42.5^o complementary angle to the Phi (φ) angle 137.5^o that guides the radiant
Primal Frequency throughout the universe.
Adam lived 18 years longer than his son Seth, and together they lived 1842 years. The 42-Letter Name Matrix has 3 consecutive sets of 18 letters, which equal 1820, 1273, and 1375 respectively, as in
the radiant 1820 YHVH (יהוה), the Cosmic Harmonic (1.273) that the universe is structured upon, and the Phi (φ) angle (137.5^o) Primal Frequency radiation of the universe. The Israelites count time
in parts of a minute rather than seconds, which equates the 25920 parts of a minute in a single day with the 25920 years of the Great Precession, of which they were not supposed to be aware.
Nonetheless, the understanding is cosmic in nature as 18 is the value for chai (חי), life, and the first 18 letters in the Alef-bet have a collective value of 495, which is 18 times the Primal
Frequency that measures out the length of life in the universe and the Spherical Time Bubble, (18 x 27.5) = 495. The secret to life is to tune it to the frequencies of the universe.
3 generations of Man lived (930 + 912 + 905) = 2747 years, and with the kolel (3), 2750 years, which perhaps tells us everything, especially since our day of 25920 parts of a minute and our Great
Precession Great Year of 25920 years equal (3 x 2750π), making 300 cycles of 27.5 the diameter of our day and of our Great Year, or 120 cycles less than the 420 cycles in the Spherical Time diameter,
like the age of Man and Moses, the 26^th generation.
Meanwhile, the sum of the first 12 generations plus the kolel (12) is 9625 years or 350 cycles of 27.5, like the 350 years that Noach lived from the time of the Flood and as in 70 cycles less than
the 420 galactic limit, while the first 23 generations total exactly (444 x 27.5) = 12,210 years or 444 cycles of the Primal Frequency as in the value of “From generation to generation.”
Create Moodle Quiz questions from a TeX file
Moodle quizzes can be used for different purposes. They can help to build students’ confidence by providing opportunities for them to consolidate their learning, and offering instant personalised
feedback. Lecturers can also benefit from using quizzes to identify parts of the curriculum that students are struggling with, and to gather feedback about content covered in lectures.
One challenge in building quizzes that include mathematical expressions is how to input and display them in Moodle in an easy and accessible way. If you already create your exams using a LaTeX
editor, you are likely to have a large bank of questions ready to be used as practice questions or formative assessment. However, using an equation editor in Moodle to input the mathematical
expressions can be time-consuming. The good news is that you can create quiz questions on Moodle by entering your LaTeX code into the Moodle questions.
This post will explain how to create Moodle questions using LaTeX code.
Creating a LaTeX quiz question in Moodle
You can easily create questions by copying and pasting the LaTeX question text and answer in a Moodle question. The only thing you need to take into account is to wrap your mathematical expressions
with double dollar signs ($$). For instance, if you have written the following question (1):
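The example question referenced here was shown as a screenshot that isn't preserved in this post; a hypothetical question of the same kind, as it might appear in a LaTeX source file, could look like:

```latex
What is the derivative of $f(x) = x^2 \sin(x)$?

a) $f'(x) = 2x\sin(x) + x^2\cos(x)$
b) $f'(x) = 2x\cos(x)$
c) $f'(x) = x^2\sin(x)$
```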
You need to add an extra $ sign and input the question in the Question text input field and the answers in the Choice input fields:
Moodle will interpret the expressions surrounded by $$ and it will transform them to images (2). Our question will be displayed by Moodle like this:
Finally, you need to check which filters Moodle uses to convert LaTeX to an image. MathJax is the filter that best displays your TeX code. This is important not only to improve the quality of the
images but also because the MathJax generated images are accessible for screen readers and zooming tools.
To use this filter in your module:
• Go to Settings>>Module administration>>Filters,
• disable TeX notation,
• enable MathJax.
This is how your filters should look:
(1)You can find guidance about how to create Moodle quizzes and questions on the EdTech guidance site: https://sleguidance.atlassian.net/wiki/display/Moodle/Quiz
(2) All LaTeX code seems to work properly with Moodle, except for \mbox. This command is blacklisted in Moodle and generates an error. Since \mbox is used to include text, you can place that text outside the $$ signs.
sla_syrpvgrw.f −
REAL function SLA_SYRPVGRW (UPLO, N, INFO, A, LDA, AF, LDAF, IPIV, WORK)
SLA_SYRPVGRW computes the reciprocal pivot growth factor norm(A)/norm(U) for a symmetric indefinite matrix.
Function/Subroutine Documentation
REAL function SLA_SYRPVGRW (character*1 UPLO, integer N, integer INFO, real, dimension( lda, * ) A, integer LDA, real, dimension( ldaf, * ) AF, integer LDAF, integer, dimension( * ) IPIV, real, dimension( * ) WORK)
SLA_SYRPVGRW computes the reciprocal pivot growth factor norm(A)/norm(U) for a symmetric indefinite matrix.
SLA_SYRPVGRW computes the reciprocal pivot growth factor
norm(A)/norm(U). The "max absolute element" norm is used. If this is
much less than 1, the stability of the LU factorization of the
(equilibrated) matrix A could be poor. This also means that the
solution X, estimated condition numbers, and error bounds could be unreliable.
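LAPACK computes this ratio for the Bunch-Kaufman factorization produced by SSYTRF. As an illustration only, the same max-absolute-element ratio can be computed for a plain LU factorization with SciPy; this is a hedged sketch of the concept, not a call to SLA_SYRPVGRW itself, which may not be exposed in common Python bindings:

```python
import numpy as np
from scipy.linalg import lu

# A small symmetric test matrix.
A = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])

P, L, U = lu(A)  # A = P @ L @ U with partial pivoting

# Reciprocal pivot growth with the "max absolute element" norm:
# values much less than 1 signal that the factorization may be unstable.
rpg = np.abs(A).max() / np.abs(U).max()
print(rpg)
```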
UPLO is CHARACTER*1
= ’U’: Upper triangle of A is stored;
= ’L’: Lower triangle of A is stored.
N is INTEGER
The number of linear equations, i.e., the order of the
matrix A. N >= 0.
INFO is INTEGER
The value of INFO returned from SSYTRF, i.e., the pivot in
column INFO is exactly 0.
A is REAL array, dimension (LDA,N)
On entry, the N-by-N matrix A.
LDA is INTEGER
The leading dimension of the array A. LDA >= max(1,N).
AF is REAL array, dimension (LDAF,N)
The block diagonal matrix D and the multipliers used to
obtain the factor U or L as computed by SSYTRF.
LDAF is INTEGER
The leading dimension of the array AF. LDAF >= max(1,N).
IPIV is INTEGER array, dimension (N)
Details of the interchanges and the block structure of D
as determined by SSYTRF.
WORK is REAL array, dimension (2*N)
Univ. of Tennessee
Univ. of California Berkeley
Univ. of Colorado Denver
NAG Ltd.
September 2012
Definition at line 122 of file sla_syrpvgrw.f.
Generated automatically by Doxygen for LAPACK from the source code.
Linearity in Circuits | Electrical Academia
Consider the relationship between voltage and current for a resistor (Ohm’s Law). Suppose that a current I[1] (the excitation or input) is applied to a resistor, R. Then the resulting voltage V[1] (the response or output) is

${{V}_{1}}={{I}_{1}}R$
Similarly, if I[2] is applied to R, then V[2]=I[2]R results. But if I=I[1]+I[2] is applied, then the response is

$V=({{I}_{1}}+{{I}_{2}})R={{I}_{1}}R+{{I}_{2}}R={{V}_{1}}+{{V}_{2}}$

In other words, the response to a sum of inputs is equal to the sum of the individual responses (Condition 1).
In addition, if V is the response to I (that is, V=IR), then the response to KI is

$(KI)R=K(IR)=KV$

In other words, if the excitation is scaled by the constant K, then the response is also scaled by K (Condition 2).
Because Conditions 1 and 2 are satisfied, we say that the relationship between current (input) and voltage (output) is linear for a resistor. Similarly, by using the alternate form of Ohm’s law, I=V/R, we can show that the relationship between voltage (excitation) and current (response) is also linear for a resistor.
Although the relationships between voltage and current for a resistor are linear, the power relationships P=I^2R and P=V^2/R are not.
For instance, if the current through a resistor is I[1], then the power absorbed by the resistor R is

${{P}_{1}}=I_{1}^{2}R$

whereas if the current is I[2], then the power absorbed is

${{P}_{2}}=I_{2}^{2}R$

However, the power absorbed due to the current I[1]+I[2] is
${{P}_{3}}={{({{I}_{1}}+{{I}_{2}})}^{2}}R=I_{1}^{2}R+I_{2}^{2}R+2{{I}_{1}}{{I}_{2}}R\ne {{P}_{1}}+{{P}_{2}}$
Hence the relationship P=I^2R is non-linear.
Since the relationships between voltage and current are linear for resistors, we say that a resistor is a linear element.
A dependent source (either current or voltage) whose value is directly proportional to some voltage and current is also a linear element. Because of this, we say that a circuit consisting of
independent sources, resistors, and linear dependent sources is a linear circuit.
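The two conditions can be checked numerically; a minimal sketch (the component values are arbitrary):

```python
R = 100.0               # ohms
i1, i2, k = 0.5, 1.5, 3.0

def v(i):
    """Resistor response (Ohm's law): voltage produced by current i."""
    return i * R

def p(i):
    """Power absorbed: P = I^2 * R (not linear)."""
    return i * i * R

# Condition 1 (superposition): response to a sum equals the sum of responses.
assert v(i1 + i2) == v(i1) + v(i2)
# Condition 2 (homogeneity): scaling the excitation scales the response.
assert v(k * i1) == k * v(i1)
# Power fails superposition because of the cross term 2*I1*I2*R.
assert p(i1 + i2) != p(i1) + p(i2)
assert p(i1 + i2) == p(i1) + p(i2) + 2 * i1 * i2 * R
```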
Developmental Math Emporium
Key Concepts
The Order of Operations
• Perform all operations within grouping symbols first. Grouping symbols include parentheses ( ), brackets [ ], braces { }, and fraction bars.
• Evaluate exponents or square roots.
• Multiply or divide, from left to right.
• Add or subtract, from left to right.
This order of operations is true for all real numbers.
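Python's arithmetic operators follow this same order (grouping first, then exponents, then multiplication/division left to right, then addition/subtraction), so the rules can be checked directly:

```python
# Grouping symbols first: (2 + 3) is evaluated before the multiplication.
assert (2 + 3) * 4 ** 2 - 6 / 3 == 5 * 16 - 2.0 == 78.0

# Without the parentheses, the exponent binds first, then * and /, then + and -.
assert 2 + 3 * 4 ** 2 - 6 / 3 == 2 + 48 - 2.0 == 48.0
```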
Irrational number
A number that cannot be written as the ratio of two integers. Its decimal form does not stop and does not repeat.
Rational number
A number that can be written in the form p/q, where p and q are integers and q ≠ 0. Its decimal form stops or repeats.
Real number
A number that is either rational or irrational.
From the vault: Amalie Emmy Noether: Einstein's mathematician
Amalie Emmy Noether was born on 23 March 1882, in the Bavarian city of Erlangen.
Her father, Max Noether, was called “one of the finest mathematicians of the nineteenth century” by Leon Lederman and Christopher Hill in their book Symmetry and the Beautiful Universe, and she was
to follow in his footsteps.
A story in journal Science News on 23 June 2018 carried the headline: “In her short life, mathematician Emmy Noether changed the face of physics”.
As with so many women in science, however, it was no easy road.
Prevented from formally studying mathematics at university, for the simple reason that she was female, Noether instead went to a general finishing school and in 1900 was certified to teach English
and French.
She was later allowed to audit classes in mathematics at the University of Erlangen-Nuremberg, where her father taught, eventually earning an undergraduate degree.
In 1904 she was allowed to enrol in a doctoral program at the university, and three years later she received a PhD. She spent nearly eight years there, working without pay or an official position.
In 1915 she moved to the University of Gottingen, where she was at first permitted to lecture only as an “assistant” under a male faculty member’s name. She didn’t receive a salary until 1923.
By 1915, however, Noether’s brilliance had been noticed by her colleagues. David Hilbert and Felix Klein, both renowned mathematicians, asked her for help.
A problem had arisen with Albert Einstein’s new theory, general relativity, which had been introduced several months earlier.
It seemed that Einstein’s work did not adhere to a principle known as conservation of energy, which states that energy can change forms but can never be destroyed. Total energy is supposed to remain constant.
She resolved the issue with one of two theorems she proved that year, American science writer Steve Nadis wrote in 2017, “by showing that energy may not be conserved ‘locally’ – that is, in an
arbitrarily small patch of space – but everything works out when the space is sufficiently large”.
Nadis continued: “The other theorem, which would ultimately have a far greater impact, uncovered an intimate link between conservation laws (such as the conservation of energy) and the symmetries of
nature, a connection that physicists have exploited ever since.
“Today, our current grasp of the physical world, from subatomic particles to black holes, draws heavily upon this theorem, now known simply as Noether’s theorem.”
In 1918 Noether published her work, of which American theoretical physicist Frank Wilczek, of the Massachusetts Institute of Technology, said: “That theorem has been a guiding star to twentieth and
twenty-first century physics.”
An article in the Jewish Women’s Archive, by Saunders MacLane, one of her students, says that in 1920 she “turned her attention to algebra, with decisive axiomatic treatment of the theory of ideals
as they apply to number theory (to factor algebraic integers) and to algebraic geometry (curves and surfaces defined by equations).
“She inspired many students, in particular BL Van der Waerden, who delivered brilliant lectures following her ideas and then presented them in his famous text Modern Algebra, which revolutionised the field.”
In 1933, with the rise of Nazi Germany, Noether was dismissed from her position at Gottingen, but in September, with Einstein’s help, she received a guest professorship in the US, at Bryn Mawr
College, in Pennsylvania. She also lectured at the Institute for Advanced Study at Princeton University.
In April 1935, however, she had surgery to remove a uterine tumor and died from a postoperative infection.
A 2015 article in the Washington Post cites a letter Einstein sent to the New York Times after Noether’s death.
“In the judgment of the most competent living mathematicians,” penned the great man, “Fraulein Noether was the most significant creative mathematical genius thus far produced since the higher
education of women began.”
Related reading: Models of the universe as Einstein saw it
Counting the Largest Number Groups by Digit Sum
544. Counting the Largest Number Groups by Digit Sum
Given a positive integer N, the task is to group the numbers from 1 to N based on the sum of their digits. Each number will belong to a group determined by its digit sum. The objective is to write a
program that counts the number of groups that are the largest in size.
Example 1:
Given N = 15, our objective is to find the count of the largest number groups.
Output: 6
Groups: [1, 10], [2, 11], [3, 12], [4, 13], [5, 14], [6, 15], [7], [8], [9]
In this case, there are six groups of size 2.
Example 2:
For N = 4, the goal remains the same.
Output: 4
Groups: [1], [2], [3], [4]
Here, there are four groups of size 1.
Solution: Use Map
1. Initialize a Map<Integer, Integer> named 'map' to store the digit sum as the key and the count of numbers with that sum as the value.
2. Initialize the variable 'largestGroupSize' to keep track of the largest group size encountered.
3. Iterate from 1 to N and perform the following for each number:
1. Calculate the sum of the digits by iteratively dividing the number by 10 and summing up the remainders.
2. Retrieve the current count associated with the sum from the map using 'getOrDefault'.
3. Increment the count in the map by one.
4. Update 'largestGroupSize' to the maximum value between its current value and the updated count.
See the code below for more understanding.
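The post's original code listing isn't reproduced here; a self-contained Python sketch of the algorithm described above (the original appears to use Java, given the `Map` and `getOrDefault` references):

```python
from collections import defaultdict

def count_largest_groups(n):
    """Group 1..n by digit sum; return how many groups share the largest size."""
    counts = defaultdict(int)   # digit sum -> how many numbers have that sum
    largest = 0
    for num in range(1, n + 1):
        s, x = 0, num
        while x:                # sum the digits by repeated division by 10
            s += x % 10
            x //= 10
        counts[s] += 1
        largest = max(largest, counts[s])
    return sum(1 for c in counts.values() if c == largest)

print(count_largest_groups(15))  # 6
print(count_largest_groups(4))   # 4
```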
How Many Cells In A Car Battery? [Updated: October 2024]
If you’re like most people, you probably don’t think much about the number of cells in your car battery. But if you’re interested in how your car’s battery works, you might be wondering how many
cells are in a typical car battery. The answer is six.
So, how many cells in a car battery?
A car battery typically contains six cells, each of which produces two volts for a total of 12 volts. The cells are arranged in a row inside the battery casing, with each cell containing a lead
dioxide plate and a lead plate.
Let’s dig into it and see what’s inside.
What Is The Voltage Of A Car Battery?
As most people know, car batteries are typically 12 volts. However, what many people don’t know is that this is only the voltage when the engine is off. When the engine is running, the voltage of a
car battery can range from 13.5 to 14.7 volts. This is thanks to the alternator, which helps to keep the battery charged.
It’s important to test the voltage of your car battery regularly, as a drop below 12.6 volts can indicate a problem. If you notice that your car’s battery voltage is consistently low, it’s a good
idea to take it to a mechanic to have it checked out.
The voltage of a car battery is 12 volts when the engine is off, and 13.5 to 14.7 volts when the engine is running.
How Many Volts Are In A Car Battery?
A car battery is typically 12 volts, but the actual voltage of the battery may be closer to 15 volts. This is because the battery receives power from the alternator when the engine is running. In
fact, a fully charged car battery should measure between 13.7 and 14.7 volts. If the voltage falls below this, it is likely that the battery is weak and needs to be recharged.
A car battery is typically 12 volts, but may be closer to 15 volts.
How Many Amp Hours Are In A Car Battery?
As noted in the article, most car batteries have between 40 and 65 amp hours. However, some trucks can have up to 75 amp hours. The power and longevity of a battery is determined by its Amp Hour (Ah)
rating. This rating basically denotes how long a battery will last if it isn’t recharged.
For example, small car batteries typically have an Ah rating of 40. This means that the battery can provide 40 amps of power for one hour before it needs to be recharged. Larger batteries for cars
and SUVs usually have an Ah rating of 50. And finally, batteries for trucks and other large vehicles can have an Ah rating of up to 75.
So, if you’re wondering how long your car battery will last, you can simply look at the Ah rating to get an estimate. Of course, other factors like weather and driving habits can also affect the
lifespan of a car battery. But in general, the Ah rating is a good indicator of a battery’s power and longevity.
Most car batteries have between 40 and 65 amp hours. However, some trucks can have up to 75 amp hours.
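The Ah arithmetic described above can be sketched directly; a rough illustration that ignores real-world effects such as temperature and discharge-rate losses:

```python
def runtime_hours(capacity_ah, load_amps):
    """Hours a battery can sustain a constant load, by the Ah definition alone."""
    return capacity_ah / load_amps

# A 40 Ah battery delivering 40 A lasts about an hour before needing a recharge.
print(runtime_hours(40, 40))   # 1.0
print(runtime_hours(65, 6.5))  # 10.0
```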
Car Battery Capacity?
The capacity of a car battery is measured in kilowatt-hours (kWh), and this determines how far a car can travel on a single charge. The average EV battery has a capacity of around 60kWh, although
some models have larger batteries with capacities of 100kWh or more.
The number of cells in a battery also affects its capacity. Most EV batteries have between 300 and 400 cells, although some models have as many as 1000 cells. The more cells a battery has, the higher
its capacity will be.
So, if you’re wondering how many cells are in a car battery, the answer depends on the battery’s capacity. A higher-capacity battery will have more cells, while a lower-capacity battery will have
fewer cells.
The capacity of a car battery is determined by its kilowatt-hour (kWh) rating and the number of cells it has. Most batteries have between 300 and 400 cells, with some models having as many as 1000
cells. The higher the battery’s capacity, the more cells it will have.
How Long Will A Car Battery Last?
The lifespan of a car battery is determined by a number of factors, including the charging system, driving frequency, weather and temperature conditions, and many others. In general, most car
batteries will last between three and five years. However, some batteries may need to be replaced after as little as two years, while others may last up to eight years or more. The best way to
determine how long your car battery will last is to consult your owner’s manual or speak with your mechanic.
The lifespan of a car battery is determined by a number of factors and usually lasts between three and five years.
How Many Cells Does A 12V Car Battery Have?
A 12v car battery has six cells, each with 2.1 volts at full charge. A car battery is considered fully charged at 12.6 volts or higher. When the battery’s voltage drops, even a small amount, it makes
a big difference in its performance.
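The cell arithmetic above can be checked directly (a minimal sketch; actual terminal voltage varies with charge state and temperature):

```python
cells = 6
volts_per_cell = 2.1              # lead-acid cell at full charge
pack_voltage = cells * volts_per_cell
print(pack_voltage)               # ~12.6 V, the "fully charged" figure quoted above
```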
How Many Cells Are In A Car Battery Pack?
There are typically 12 cells in a car battery pack. Each cell has a capacity of 2-3 kWh, for a total pack capacity of 24-36 kWh.
How Many Cells Are In An Ev Car?
The number of cells in an electric vehicle (EV) can vary widely depending on the cell format. For example, cylindrical cells typically have between 5,000 and 9,000 cells. In contrast, pouch cells
typically have only a few hundred cells, and prismatic cells have even fewer.
How Many Lead Acid Cells In A Car Battery?
A standard 12-volt, lead-acid battery is made up of six cells connected in series. Each cell produces approximately two volts. The cells are filled with an electrolyte.
What Car Battery Voltage Is Too Low?
A car battery is considered too low if the voltage reading is 12.4 volts or less.
How Many Cells In An Electric Car Battery?
The average electric car battery consists of 12 cells, but the number of cells can vary widely depending on the type of cell used. The most common type of cell used in electric car batteries is the
cylindrical cell, which typically has between 5,000 and 9,000 cells.
How Many Amps Are In A 12 Volt Car Battery?
A 12-volt car battery can produce up to 600 amps of power, but its capacity will depend on its age and condition.
• Each Cell Of An Automotive Battery Will Supply How Many Volts?: Each cell in an automotive battery will supply 2.1 volts.
• At What Voltage Is A Car Battery Dead?: A car battery is considered dead when its voltage is below 12.5V at rest.
• What Is The Car Battery Minimum Voltage To Start?: The minimum voltage to start a car is 12.2 volts.
• What Voltage Is A Car Battery When Running?: The ideal car battery voltage with the engine running is between 13.7 and 14.7 volts.
Final Word
As you can see, there are six cells in a standard car battery. These cells work together to create the voltage that is needed to power your car. If you are having trouble with your car battery, make
sure to check each of the cells to see if they are working properly.
Related Post:
Background: Given the increasing scale of rare variant association studies, we introduce a method for high-dimensional studies that integrates multiple sources of data and allows for multiple region-specific risk indices, assessing which regions are associated with the outcome of interest.

Results: Using a set of study-based simulations, we show that our approach leads to an increase in power to detect true associations in comparison to several commonly used alternatives. Additionally, the method provides multi-level inference at the pathway, region, and variant levels.

Conclusion: To demonstrate the flexibility of the method to incorporate various types of information and its applicability to high-dimensional data, we apply our method to a single region within a candidate gene study of second primary breast cancer and to multiple regions within a candidate pathway study of colon cancer.

For a sample of individuals, we have: 1) a binary outcome vector Y
variant covariates within a dimensional matrix Z Z included in all models. These covariates include variables such as age sex and variables used to control for potential confounding by population
stratification. Within the BMU framework we consider all models M∈ MM∈M defined by a distinct subset of the genetic variants and including all adjustment variables in each model. In particular each
model Mis indexed by a dimensional indicator vector where = 1 γ= 1 if variant is included in model Mand = 0 = 0 if is not included in model M= 1 variant is included as a risk Metoprolol tartrate
factor and if = ?1 variant is included as a protective factor. Then given any model M we define a risk index as the collective frequency of the variants in model Mthat is of the form: genetic
variants that belong to a set of regions and we wish to model the outcome variable using multi-regional genetic profile. In particular for each model Mdefined Metoprolol tartrate as: is the
model-specific rare variant load for region = 0 for all variants in region Rvariant-specific covariates specified within a dimensional covariate matrix W into the estimation of marginal inclusion
probabilities by introducing a second-stage regression on the probability that any variant is associated. Specifically we define the probability that any variant is associated as a function of the
variant-specific covariates using a probit model: is a ∈M we can quantify Metoprolol tartrate the evidence Slco2a1 that the data Metoprolol tartrate supports the model via posterior model
probabilities defined as: |M) is the marginal likelihood of each model after integrating out model specific parameters ∈S we can quantify the evidence that at least one variant within the set is
associated via set specific posterior probabilities: = 1 if at least one variant within set Sis in model M= 1| Y) we can also calculate the multi-level Bayes factors (BF) as the posterior odds that
at least one variant within the set is associated divided by the prior odds: ∈M and a Gibbs sampling algorithm to sample the second-stage regression coefficients ≠ 0) given the sampled values of
dimensional matrix W indicating the DNA repair sub-pathway that each variant is involved in. Randomly select one of these variant-specific covariates and sample an α within {0 1 2 3 for that
covariate (all of the other covariates are assumed to have an α-level of 0). Marginal probabilities of association are calculated for each rare variant based on the assigned α-levels then. Select
between 0:10 causal rare variants based on the marginal probabilities. Randomly select a for all causal rare variants within the simulation within {.5 1 1.5 2 2.5 Simulate each individual’s case/
control status based on the selected causal level and variants. As the variant-specific covariates become more informative there is a decrease in the mean number of iterations needed for a causal
variant to be sampled. Additionally there is not an increase in the mean number of iterations needed to propose other noncausal variants. Thus the integration of external biological covariates with
iBRI leads to a more efficient model search algorithm when these covariates are.
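A rough sketch of the simulation procedure just described (Python; the intercepts, pathway matrix, sample size, and allele frequency below are illustrative assumptions, not values taken from the study):

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)
n_variants, n_paths = 50, 4

# Variant-specific covariates: membership of each variant in DNA repair sub-pathways.
W = rng.integers(0, 2, size=(n_variants, n_paths)).astype(float)

# One covariate gets an alpha-level sampled from {0, 1, 2, 3}; the rest stay at 0.
alpha = np.zeros(n_paths)
alpha[rng.integers(n_paths)] = rng.integers(0, 4)

# Probit second-stage model for the marginal probability that each variant is
# associated (the -2.0 intercept is an assumed baseline keeping probabilities small).
phi = lambda z: 0.5 * (1.0 + erf(z / np.sqrt(2.0)))
p_assoc = np.array([phi(v) for v in (-2.0 + W @ alpha)])

# Pick 10 causal rare variants according to those probabilities.
causal = rng.choice(n_variants, size=10, replace=False, p=p_assoc / p_assoc.sum())

# One causal effect level shared by all causal variants.
beta = rng.choice([0.5, 1.0, 1.5, 2.0, 2.5])

# Case/control status from the causal-variant load (logistic link assumed here).
G = rng.binomial(2, 0.01, size=(500, n_variants))    # rare genotypes, MAF ~1%
eta = -1.0 + beta * G[:, causal].sum(axis=1)         # -1.0: assumed intercept
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
```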
MTH101 Lecture 23 - Lecture 01 to 45 midterm mcqs - solved past papers
Midterm & Final Term Short Notes
Maximum and Minimum Values of Functions
The maximum and minimum values of functions are critical points that play a crucial role in optimization problems. These critical points can be either absolute or relative, and they indicate the highest and lowest points of a function within a given interval.
Important MCQs
Midterm & Final Term Preparation
Past papers included
Which of the following is true about the maximum or minimum value of a function?
A) It always occurs at a critical point of the function
B) It always occurs at the endpoints of the interval
C) It can occur at either a critical point or an endpoint of the interval
D) It can occur anywhere on the function
Answer: C) It can occur at either a critical point or an endpoint of the interval
How can we determine whether a critical point corresponds to a maximum or minimum value of a function?
A) By evaluating the function at the critical point
B) By taking the derivative of the function at the critical point
C) By taking the second derivative of the function at the critical point
D) By using the intermediate value theorem
Answer: C) By taking the second derivative of the function at the critical point
What is the absolute maximum of a function?
A) The highest point of the function over its entire domain
B) The highest point of the function within a given interval
C) The lowest point of the function over its entire domain
D) The lowest point of the function within a given interval
Answer: A) The highest point of the function over its entire domain
What is the absolute minimum of a function?
A) The highest point of the function over its entire domain
B) The highest point of the function within a given interval
C) The lowest point of the function over its entire domain
D) The lowest point of the function within a given interval
Answer: C) The lowest point of the function over its entire domain
What is an inflection point of a function?
A) A point where the derivative of the function is zero
B) A point where the second derivative of the function is zero
C) A point where the function changes concavity
D) A point where the function changes direction
Answer: C) A point where the function changes concavity
Which of the following is not a step in solving an optimization problem?
A) Taking the derivative of the function
B) Setting the derivative equal to zero or undefined
C) Checking the endpoints of the interval
D) Taking the integral of the function
Answer: D) Taking the integral of the function
What is a constraint in an optimization problem?
A) A condition that must be satisfied by the function
B) A condition that must be satisfied by the derivative of the function
C) A condition that must be satisfied by the second derivative of the function
D) A condition that must be satisfied by the endpoints of the interval
Answer: A) A condition that must be satisfied by the function
Which of the following is not true about the maximum or minimum value of a function over a closed interval?
A) It may occur at the endpoints of the interval
B) It may occur at the critical points of the function
C) It may occur at points where the derivative is undefined
D) It may occur at points where the function is not continuous
Answer: D) It may occur at points where the function is not continuous
What is the first derivative test used for?
A) To determine whether a critical point corresponds to a maximum or minimum of a function
B) To determine whether a function is increasing or decreasing
C) To determine whether a function is concave up or concave down
D) To determine whether a function has an inflection point
Answer: A) To determine whether a critical point corresponds to a maximum or minimum of a function
Which of the following is true about the second derivative test?
A) It is used to determine whether a function is increasing or decreasing
B) It is used to determine whether a critical point corresponds to a relative maximum or minimum of a function
C) It is used to determine whether a function is continuous
D) It is used to determine whether a function has a root
Answer: B) It is used to determine whether a critical point corresponds to a relative maximum or minimum of a function
Subjective Short Notes
Midterm & Final Term Preparation
Past papers included
What are critical points of a function?
Answer: Critical points of a function are the points where the derivative of the function is either zero or undefined.
What is a relative maximum of a function?
Answer: A relative maximum of a function is the highest point of the function within a given interval.
What is a relative minimum of a function?
Answer: A relative minimum of a function is the lowest point of the function within a given interval.
How do you find the critical points of a function?
Answer: To find the critical points of a function, we need to take the derivative of the function and solve for where the derivative is zero or undefined.
What is the second derivative test?
Answer: The second derivative test is a method to determine whether a critical point corresponds to a relative maximum, relative minimum, or neither.
What is an absolute maximum of a function?
Answer: An absolute maximum of a function is the highest point of the function over its entire domain.
What is an absolute minimum of a function?
Answer: An absolute minimum of a function is the lowest point of the function over its entire domain.
What are optimization problems?
Answer: Optimization problems involve maximizing or minimizing a function subject to certain constraints.
How do you solve an optimization problem?
Answer: To solve an optimization problem, we need to set up the problem, take the derivative of the function, solve for where the derivative is zero or undefined, and check whether the critical point
corresponds to a maximum or minimum.
What is the maximum or minimum value of a function?
Answer: The maximum or minimum value of a function is the highest or lowest point of the function within a given interval or over its entire domain.
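The procedure in these answers can be checked numerically. A small sketch (Python, using f(x) = x^3 - 3x^2 + 2x as the example):

```python
import numpy as np

# f(x) = x^3 - 3x^2 + 2x  =>  f'(x) = 3x^2 - 6x + 2,  f''(x) = 6x - 6
crit = np.sort(np.roots([3.0, -6.0, 2.0]))   # critical points: roots of f'
second = 6.0 * crit - 6.0                    # f'' evaluated at each critical point

for x, fpp in zip(crit, second):
    kind = "relative minimum" if fpp > 0 else "relative maximum"
    print(f"x = {x:.4f}  f''(x) = {fpp:+.4f}  -> {kind}")
# x ≈ 0.4226 is a relative maximum; x ≈ 1.5774 is a relative minimum
```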
Maximum and Minimum Values of Functions
In calculus, the maximum and minimum values of functions are critical points that play a crucial role in optimization problems. These critical points can be either absolute or relative, and they indicate the highest and lowest points of a function within a given interval. To find the maximum and minimum values of a function, we take the derivative of the function and solve for the critical points, where the derivative is zero or undefined. We then use the second derivative test to determine whether these critical points correspond to a relative maximum, a relative minimum, or neither. A relative maximum is the highest point of a function within a given interval, while a relative minimum is the lowest point of a function within a given interval. The absolute maximum and minimum, on the other hand, are the highest and lowest points of a function over its entire domain.

To illustrate how to find the maximum and minimum values of a function, consider f(x) = x^3 - 3x^2 + 2x. Taking the derivative, we get f'(x) = 3x^2 - 6x + 2. Setting this derivative equal to zero gives 3x^2 - 6x + 2 = 0, and solving by the quadratic formula gives x = (6 ± √12)/6 = 1 ± √3/3. The critical points are therefore x = 1 + √3/3 and x = 1 - √3/3. To classify them we use the second derivative test. The second derivative is f''(x) = 6x - 6, and evaluating it at each critical point gives f''(1 + √3/3) = 2√3 > 0 and f''(1 - √3/3) = -2√3 < 0. Therefore x = 1 - √3/3 corresponds to a relative maximum and x = 1 + √3/3 corresponds to a relative minimum. On a closed interval, these critical points, together with the endpoint values, can also be used to find the absolute maximum and minimum values of the function.

Another important application is optimization problems, which involve maximizing or minimizing a function subject to certain constraints. For example, suppose we want to find the dimensions of a rectangular garden that maximize its area, given a fixed perimeter of 60 meters. Let L be the length of the rectangular garden and W the width. The perimeter is P = 2L + 2W = 60, so solving for one of the variables gives W = 30 - L. The area is A = LW = L(30 - L) = 30L - L^2. Taking the derivative of the area function gives A'(L) = 30 - 2L; setting it equal to zero gives 30 - 2L = 0, so L = 15. Therefore the length that maximizes the area is 15 meters, the width is 30 - 15 = 15 meters, and the maximum area is 15 × 15 = 225 square meters.

In conclusion, the maximum and minimum values of functions are critical points that play a crucial role in optimization problems. To find these critical points, we take the derivative of the function, solve for where the derivative is zero or undefined, and classify the results with the second derivative test.
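The garden example admits a quick numerical check (Python; a dense grid search stands in for the calculus):

```python
import numpy as np

L = np.linspace(0.0, 30.0, 300001)   # candidate lengths for a 60 m perimeter
A = L * (30.0 - L)                   # area as a function of length

best_L = L[np.argmax(A)]
# best_L ≈ 15, A.max() ≈ 225, matching the derivative-based answer
```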
Weaving Compilers
Audrey Seo
The past few weeks, I’ve been learning more about a few different kinds of weaving. One form of weaving is card weaving (also called tablet weaving), where the loom is instead replaced by square
pieces of cardboard with holes punched in each corner. You may be wondering: how does a bunch of pieces of cardboard with holes weave cloth? I wondered the same thing myself! It seemed bonkers to
think that you could do something with a bunch of cards that produced anything like fabric.
So, I watched a few YouTube videos, and the process seemed absolutely magical. By rotating the cards, you somehow produce cloth!
I immediately wanted to weave something using this bizarre but charmingly low-entry method, but I didn’t like a lot of the patterns I found for card weaving, so I found one for a different kind of
weaving, that looks like this:
This pattern didn’t seem too complicated: 26 rows, 9 columns. Seems like it should be simple, right? Looking at the picture of this pattern, it feels like it should be obvious what the steps should
be to recreate it.
However, as it turns out, it is completely un-obvious. To see why, first we’ll need to know a little more about card weaving.
Card weaving is a very old way to weave. The produced fabric, which usually consists of narrow bands, is very strong and sturdy, used for horse bridles and reins, straps for instruments, belts, etc.
As you may have seen in the videos linked above, instead of a loom, card weaving employs \(N\) cards, which are typically square in shape with holes in each corner, labeled A, B, C, and D clockwise.
Each corner is threaded with a yarn. This yarn is secured and fastened on each end, on one end to a fixed object, such as a tree or doorknob, and on the other to the weaver.
How the cards are arranged while weaving.
To weave, the cards are arranged so that the AD side is parallel to the ground, as shown above. A weft thread is passed above the BC threads and under the AB threads. Then, each of the N cards may be
rotated “forward” or “backward”, which swaps the corners and which yarn is on top. “Forward” is a clockwise rotation of 90 degrees, resulting in the CD side being uppermost. “Backward” is a
counterclockwise rotation of 90 degrees, and brings the AB side on top.
These rotations are the basic operation that gives rise to the pattern. If rotating the cards clockwise, the color in position A is the one that determines the color of the “row” you just wove. When rotating counterclockwise, it’s the color in position D.
Now that we know more about card weaving, we can reformulate our problem as follows: Given an \(M \times N\) array (\(M\) rows, \(N\) columns) where each cell contains one of \(K\) colors, can we find an assignment of colored threads to \(N\) card weaving tablets as well as \(M\) card weaving instructions such that performing the instructions on the \(N\) cards produces the pattern in the array?
For a very small \(N\), and thus a very narrow-width pattern, very small M (a very short pattern repeat), and very small number of colors \(K\), it could be feasible to find, by hand, an assignment
of yarn colors to the \(4N\) holes in the cards and \(M\) operations such that the \(M\) operations on the \(N\) cards produces the pattern. However, make \(N\) or \(M\) nontrivial and it becomes
quite unwieldy. One mistake could also propagate to mess up the entire rest of the pattern instructions.
For instance, can you easily come up with an assignment of threads to cards as well as clockwise/counterclockwise rotations such that the resulting weaving produces the above pattern? I can’t!
It isn’t enough to just come up with the instructions, either. A good card weaving pattern will have other “nice” properties about the amount of twist in the yarn, or a particularly intuitive
progression of steps. The difficulty of coming up with a pattern from scratch combined with these factors has historically constrained how card weaving patterns are designed.
However, such a problem seems like it should be solvable with program synthesis and/or SMT solvers. While there are \(2^N\) operations that could possibly be done on N cards at any given time, this
is still a finite search space, and often the cards that are used to weave the border design are only turned in one direction. This reduces the possible space to \(2^{(N - B)}\), where \(B\) is the
number of border cards. Cards with only one color can be similarly ignored.
Assigning colors to holes in the cards can also be simplified using some heuristics, such as trying every set of 4 rows of the pattern as a color assignment.
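The reformulated problem can be sketched as a tiny simulator plus brute-force search (Python; the rotation and color-reading conventions below are one plausible reading of the description above, not taken from any weaving reference):

```python
from itertools import product

def weave(cards, turns):
    """Simulate card weaving. `cards` holds the colors threaded at holes
    (A, B, C, D) of each card; `turns` is one 'F'/'B' choice per card per row.
    Convention assumed here: a forward turn records the color at A and rotates
    the card clockwise; a backward turn records the color at D and rotates
    counterclockwise."""
    state = [list(c) for c in cards]
    rows = []
    for row in turns:
        woven = []
        for card, t in zip(state, row):
            if t == "F":
                woven.append(card[0])
                card[:] = card[-1:] + card[:-1]   # D moves into position A
            else:
                woven.append(card[3])
                card[:] = card[1:] + card[:1]     # B moves into position A
        rows.append(woven)
    return rows

def find_turns(cards, target):
    """Brute-force the 2^(N*M) turn sequences for a plan that reproduces
    `target` — feasible only for tiny patterns, as discussed above."""
    n, m = len(cards), len(target)
    for plan in product(product("FB", repeat=n), repeat=m):
        if weave(cards, plan) == target:
            return list(plan)
    return None
```

For a single card threaded ('r', 'g', 'b', 'y'), four forward turns weave the column r, y, b, g under this convention, and `find_turns` recovers a turn plan for any achievable small target.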
While solving this problem won’t make our computers faster or more correct, it will make hundreds of patterns more accessible for people who want to get into card weaving. Card weaving is compact,
flexible, and relatively cheap to get into. A pack of decent cards for card weaving costs less than $20, whereas decent looms cost at least $100, and usually quite a lot more depending on the type of
loom. You could even use playing cards to DIY your own card weaving cards. For such an accessible form of weaving, it seems appropriate that designing patterns for it should be equally accessible. My
proposed compiler would provide this accessibility, since it could be used to translate any array of colors into a card weaving pattern, and could be used to compile cross stitch patterns and
knitting patterns in addition to patterns for other narrow bands to card weaving patterns.
I’ve barely scratched the surface here – card weaving has some other cool links to mathematics and computer science. For instance, the state of the cards (their rotation, how much twist is in the
warp) is an element of an algebraic group! This isn’t just a fun fact either, it can be useful – group theory can be used to reduce the solution space using symmetry groups, as was done in the
Kociemba algorithm for solving Rubik’s cubes. For another example, punch cards may be most associated today with old computers, but they were first used to encode the instructions for the Jacquard
loom, and inspired the analytical engine that Ada Lovelace programmed. It feels very full-circle that computers, which were inspired by weaving machines, could in turn be used to help people do more weaving.
Introducing Theseus, a library for encoding domain knowledge in end to end AI models
What the research is:
Meta AI is open-sourcing Theseus, a library for an optimization technique called differentiable nonlinear least squares (NLS) that is particularly useful for applications like robotics and computer
vision. Built on PyTorch, Theseus enables researchers to easily incorporate expert domain knowledge into modern AI architectures. It does this by expressing that knowledge as an optimization problem
and adding it to the architecture as a modular “optimization layer” in the usual gradient-based learning process. This domain knowledge is distinct from the training data and can help the model make
more accurate predictions. For instance, to ensure that a robot’s movements are smooth, researchers could include knowledge about the robot’s embodiment and movement patterns (called a kinematics
model) as a layer while the robot is trained end to end to move.
Theseus is the first library to provide an application-agnostic framework for differentiable nonlinear optimization. Theseus is also highly efficient — it speeds computation and memory by supporting
batching, GPU acceleration, sparse solvers, and implicit differentiation. As a result, it is up to four times faster than Google’s state-of-the-art, C++-based Ceres Solver (which does not support
end-to-end learning).
Theseus fuses the best aspects of the two prevailing methods for injecting prior knowledge into an AI system. Before the advent of deep learning, researchers used simpler, standalone AI optimization
algorithms to solve individual problems in robotics. Robotic systems learned the best way to carry out commands by calculating the minimum value of a hand-selected combination of factors, such as
joint motion and energy use. This method was effective but inflexible; the application-specific optimization algorithms often proved difficult to adapt to new systems or environments. Deep learning
methods, on the other hand, are much more scalable, but they require a massive amount of data, and they may produce solutions that are effective but also brittle outside of the training domain.
To train a deep learning model for a particular application, researchers use a carefully selected loss function to measure how well the model is predicting the data. But to update the model weights
through backpropagation, each layer must be differentiable, allowing the error information to flow through the network. Traditional optimization algorithms are not end-to-end differentiable, so researchers face a trade-off: they can abandon optimization algorithms for end-to-end deep learning dedicated to the specific task — and risk losing optimization’s efficiency as well as its facility
for generalization. Or, they can train the deep learning model offline and add it to the optimization algorithms at inference time. The second method has the benefit of combining deep learning and
prior knowledge, but — because the deep learning model is trained without that pre-existing information or the task-specific error function — its predictions might prove inaccurate.
To blend these strategies in a way that mitigates their weaknesses and leverages their strengths, Theseus converts the results of optimization into a layer that can be plugged into any neural network
architecture. That way, revisions can back-propagate through the optimization layer, allowing researchers to fine-tune with domain-specific knowledge on the final task loss as an integral part of the
end to end deep learning model.
In the Theseus layer (green), the objective is composed of the output tensors of upstream neural models (gray) and prior knowledge (orange). The output of the Theseus layer are tensors that minimize
the objective.
How it works:
NLS measures how much a nonlinear function varies from the actual data it is meant to predict. A small value means the function fits the data set well. NLS is prevalent in the formulation of many
robotics and vision problems, from mapping and estimation to planning and control. For example, a robot’s route toward a desired goal can be formulated as an NLS optimization problem: To plot the
fastest safe trajectory, the system finds the solution to a sum-of-costs objective that minimizes both travel duration and unwanted behavior, like sharp turns or collisions with obstacles in the
environment. A sum-of-costs objective can also capture sensor measurement errors to optimize the past trajectories of a robot or camera.
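As a concrete illustration of what a nonlinear least-squares solve looks like, here is plain NumPy Gauss-Newton on a toy curve fit (this is not the Theseus API — just the kind of optimization that Theseus wraps in a differentiable layer; the model, data, and starting point are illustrative):

```python
import numpy as np

def gauss_newton(residual, jacobian, theta, iters=30):
    """Minimize ||residual(theta)||^2 with undamped Gauss-Newton steps."""
    for _ in range(iters):
        r = residual(theta)
        J = jacobian(theta)
        # Each step solves the linearized least-squares problem J * step = -r.
        theta = theta + np.linalg.lstsq(J, -r, rcond=None)[0]
    return theta

# Toy problem: fit y = a * exp(b * x) to noiseless data with a = 2, b = 0.5.
x = np.linspace(0.0, 2.0, 30)
y = 2.0 * np.exp(0.5 * x)

residual = lambda th: th[0] * np.exp(th[1] * x) - y
jacobian = lambda th: np.stack(
    [np.exp(th[1] * x), th[0] * x * np.exp(th[1] * x)], axis=1)

theta = gauss_newton(residual, jacobian, np.array([1.5, 0.3]))
# theta ≈ [2.0, 0.5]
```

In Theseus, the residual would be a cost function in an objective, and the solve itself becomes a layer whose output can be differentiated with respect to upstream network parameters.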
Making NLS differentiable, Theseus provides differentiable nonlinear optimization as a layer that researchers can insert into their neural network. Input tensors define a sum-of-weighted-squares
objective function, and output tensors are arguments that produce the minimum of that objective. (In contrast, typical neural layers take input tensors through a linear transformation and some
element-wise nonlinear activation function.) The ability to compute gradients end to end is retained by differentiating through the optimizer.
This integrates the optimizer and known priors into the deep learning training loop, allowing models to encode domain knowledge and learn on the actual task loss. For instance, to ensure that a
robot’s movements are smooth, researchers could include known robot kinematics in the optimizer; meanwhile, the deep learning model will extract the larger goal from perception or a language
instruction during training. That way, researchers can develop the goal prediction model end to end with the known kinematics model in the training loop. This technique of modularly mixing known
priors with neural components leads to improved data efficiency and generalization.
For efficiency, Theseus incorporates support for sparse solvers, automatic vectorization, batching, GPU acceleration, and gradient computation with implicit differentiation. Just as autodiff and GPU
acceleration have propelled the evolution of PyTorch over NumPy, sparsity and implicit differentiation — on top of autodiff and GPU acceleration — power Theseus, in contrast to solvers like Ceres
that typically support only sparsity. On a standard GPU, Theseus with a sparse solver is much faster and requires significantly less memory than a dense solver. Additionally, when Theseus is solving
a batch of large problems, its forward pass is up to four times faster than that of Ceres, which has limited GPU support and does not support batching or end to end learning. Finally, implicit
differentiation yields better gradients than standard unrolling. Implicit differentiation also has a constant memory and compute footprint with increasing optimization iterations, unlike unrolling,
which scales linearly in compute and memory.
Why it matters:
Theseus provides a common framework to leverage the complementary strengths of traditional robotics and vision approaches and deep learning. Differentiable optimization acts as an inductive prior,
improving data efficiency and generalization, which is crucial in robotics because data and labels often do not come cheap, and application domains tend to be broad.
Recognizing the flexibility of differentiable NLS, previous researchers have reported state-of-the-art results with similar methods in a wide range of applications in robotics and vision, but
existing implementations are task-specific and often inefficient. Theseus is application-agnostic, so the AI community can make faster progress by training accurate models that excel in multiple
tasks and environments. We have developed several example applications, including pose graph optimization, tactile state estimation, bundle adjustment, motion planning, and homography estimation. We
built these examples using the same underlying differentiable components, such as second-order optimizers, standard costs functions, and Lie groups.
Beyond pushing the current state of the art, our framework will enable avenues for future research into the role and possible evolution of structure in complex robot systems, learning end to end on
such systems, and continually learning during real-world interactions.
Read the full paper and get the code:
|
{"url":"https://ai.meta.com/blog/theseus-a-library-for-encoding-domain-knowledge-in-end-to-end-ai-models/","timestamp":"2024-11-04T10:34:57Z","content_type":"text/html","content_length":"146193","record_id":"<urn:uuid:5a7087d9-155d-4bb2-a958-1be12fe3750a>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00094.warc.gz"}
|
About Passivity and Passivity Indices
Passive control is often part of the safety requirements in applications such as process control, tele-operation, human-machine interfaces, and system networks. A system is passive if it cannot
produce energy on its own, and can only dissipate the energy that is stored in it initially. More generally, an I/O map is passive if, on average, increasing the output y requires increasing the
input u.
For example, a PID controller is passive because the control signal (the output) moves in the same direction as the error signal (the input). But a PID controller with delay is not passive, because
the control signal can move in the opposite direction from the error, a potential cause of instability.
Most physical systems are passive. The Passivity Theorem holds that the negative-feedback interconnection of two strictly passive systems is passive and stable. As a result, it can be desirable to
enforce passivity of the controller for a passive system, or to passivate the operator of a passive system, such as the driver of a car.
In practice, passivity can easily be destroyed by the phase lags introduced by sensors, actuators, and communication delays. These problems have led to extensions of the Passivity Theorem that consider excesses or shortages of passivity, frequency-dependent measures of passivity, and a mix of passivity and small-gain properties.
Passive Systems
A linear system $G\left(s\right)$ is passive if all input/output trajectories $y\left(t\right)=Gu\left(t\right)$ satisfy:
${\int }_{0}^{T}{y}^{T}\left(t\right)u\left(t\right)dt>0,\phantom{\rule{1em}{0ex}}\forall T>0,$
where ${y}^{T}\left(t\right)$ denotes the transpose of $y\left(t\right)$. For physical systems, the integral typically represents the energy going into the system. Thus passive systems are systems
that only consume or dissipate energy. As a result, passive systems are intrinsically stable.
In the frequency domain, passivity is equivalent to the "positive real" condition:
$G\left(j\omega \right)+{G}^{H}\left(j\omega \right)>0,\phantom{\rule{1em}{0ex}}\forall \omega \in R.$
For SISO systems, this is saying that $Re\left(G\left(j\omega \right)\right)>0$ at all frequencies, so the entire Nyquist plot lies in the right-half plane.
nyquist(tf([1 3 5],[5 6 1]))
Nyquist plot of passive system
Passive systems have important properties for control purposes.
When controlling a passive system with unknown or variable characteristics, it is therefore desirable to use a passive feedback law to guarantee closed-loop stability. This task can be rendered
difficult given that delays and significant phase lag destroy passivity.
Directional Passivity Indices
For stability, knowing whether a system is passive or not does not tell the full story. It is often desirable to know by how much it is passive or fails to be passive. In addition, a shortage of
passivity in the plant can be compensated by an excess of passivity in the controller, and vice versa. It is therefore important to measure the excess or shortage of passivity, and this is where
passivity indices come into play.
There are different types of indices with different applications. One class of indices measure the excess or shortage of passivity in a particular direction of the input/output space. For example,
the input passivity index is defined as the largest $\nu$ such that:
${\int }_{0}^{T}{y}^{T}\left(t\right)u\left(t\right)dt>\nu {\int }_{0}^{T}{u}^{T}\left(t\right)u\left(t\right)dt,$
for all trajectories $y\left(t\right)=Gu\left(t\right)$ and $T>0$. The system G is input strictly passive (ISP) when $\nu >0$, and has a shortage of passivity when $\nu <0$. The input passivity index is
also called the input feedforward passivity (IFP) index because it corresponds to the minimum static feedforward action needed to make the system passive.
In the frequency domain, the input passivity index is characterized by:
$\nu =\frac{1}{2}\underset{\omega }{\mathrm{min}}{\lambda }_{\mathrm{min}}\left(G\left(j\omega \right)+{G}^{H}\left(j\omega \right)\right),$
where ${\lambda }_{\mathrm{min}}$ denotes the smallest eigenvalue. In the SISO case, $\nu$ is the abscissa of the leftmost point on the Nyquist curve.
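For the SISO example plotted earlier, this index can be checked with a simple frequency sweep (Python here rather than the MATLAB toolbox functions; the grid bounds are an assumption):

```python
import numpy as np

num, den = [1, 3, 5], [5, 6, 1]     # G(s) = (s^2 + 3s + 5)/(5s^2 + 6s + 1)
w = np.logspace(-3, 3, 2000)        # frequency grid (rad/s)
G = np.polyval(num, 1j * w) / np.polyval(den, 1j * w)

# SISO: G + G^H = 2 Re G, so the input passivity index is the minimum of
# Re G(jw) over frequency — the abscissa of the leftmost Nyquist point.
nu = np.real(G).min()
print(nu > 0)   # True: the system is input strictly passive
```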
Similarly, the output passivity index is defined as the largest $\rho$ such that:
${\int }_{0}^{T}{y}^{T}\left(t\right)u\left(t\right)dt>\rho {\int }_{0}^{T}{y}^{T}\left(t\right)y\left(t\right)dt,$
for all trajectories $y\left(t\right)=Gu\left(t\right)$ and $T>0$. The system G is output strictly passive (OSP) when $\rho >0$, and has a shortage of passivity when $\rho <0$. The output passivity
index is also called the output feedback passivity (OFP) index because it corresponds to the minimum static feedback action needed to make the system passive.
In the frequency domain, the output passivity index of a minimum-phase system $G\left(s\right)$ is given by:
$\rho =\frac{1}{2}\underset{\omega }{\mathrm{min}}{\lambda }_{\mathrm{min}}\left({G}^{-1}\left(j\omega \right)+{G}^{-H}\left(j\omega \right)\right).$
In the SISO case, $\rho$ is the abscissa of the leftmost point on the Nyquist curve of ${G}^{-1}\left(s\right)$.
Combining these two notions leads to the I/O passivity index, which is the largest $\tau$ such that:
${\int }_{0}^{T}{y}^{T}\left(t\right)u\left(t\right)dt>\tau {\int }_{0}^{T}\left({u}^{T}\left(t\right)u\left(t\right)+{y}^{T}\left(t\right)y\left(t\right)\right)dt.$
A system with $\tau >0$ is very strictly passive. More generally, we can define the index in the direction $\delta Q$ as the largest $\tau$ such that:
${\int }_{0}^{T}{y}^{T}\left(t\right)u\left(t\right)dt>\tau {\int }_{0}^{T}{\left(\begin{array}{c}y\left(t\right)\\ u\left(t\right)\end{array}\right)}^{T}\delta Q\left(\begin{array}{c}y\left(t\right)\\ u\left(t\right)\end{array}\right)dt.$
The input, output, and I/O passivity indices all correspond to special choices of $\delta Q$ and are collectively referred to as directional passivity indices. You can use getPassiveIndex to compute
any of these indices for linear systems in either parametric or FRD form. You can also use passiveplot to plot the input, output, or I/O passivity indices as a function of frequency. This plot
provides insight into which frequency bands have weaker or stronger passivity.
There are many results quantifying how the input and output passivity indices propagate through parallel, series, or feedback interconnections. There are also results quantifying the excess of input or output passivity needed to compensate for a given shortage of passivity in a feedback loop. For details, see [1].
Relative Passivity Index
The positive real condition for passivity:
$$G(j\omega) + G^{H}(j\omega) > 0 \quad \forall\, \omega \in \mathbb{R},$$
is equivalent to the small gain condition:
$$\left\|\left(I - G(j\omega)\right)\left(I + G(j\omega)\right)^{-1}\right\| < 1 \quad \forall\, \omega \in \mathbb{R}.$$
We can therefore use the peak gain of $(I - G)(I + G)^{-1}$ as a measure of passivity. Specifically, let
$$R := \left\|(I - G)(I + G)^{-1}\right\|_{\infty}.$$
Then $G$ is passive if and only if $R<1$, and $R>1$ indicates a shortage of passivity. Note that $R$ is finite if and only if $I+G$ is minimum phase. We refer to $R$ as the relative passivity index,
or R-index. In the time domain, the R-index is the smallest $r>0$ such that:
$$\int_0^T \|y - u\|^2\,dt < r^2 \int_0^T \|y + u\|^2\,dt,$$
for all trajectories $y(t) = Gu(t)$ and $T > 0$. When $I + G$ is minimum phase, you can use passiveplot to plot the principal gains of $(I - G(j\omega))(I + G(j\omega))^{-1}$. This plot is entirely analogous to the singular value plot (see sigma), and shows how the degree of passivity changes with frequency and direction.
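The R-index can also be estimated numerically on a frequency grid. The sketch below does this for an assumed SISO example, $G(s) = (s+2)/(s+1)$; the system and grid are illustrative choices, and passiveplot would compute this more rigorously.

```python
import numpy as np

# Frequency-grid estimate of the relative passivity index (R-index),
#   R = || (I - G)(I + G)^{-1} ||_inf,
# for an assumed SISO example: G(s) = (s + 2)/(s + 1).
# This G is strictly positive real, so we expect R < 1.

def G(s):
    return (s + 2) / (s + 1)

def r_index(G, omegas):
    # SISO case: R is the peak of |(1 - G(jw)) / (1 + G(jw))| over frequency.
    gains = [abs((1 - G(1j * w)) / (1 + G(1j * w))) for w in omegas]
    return max(gains)

omegas = np.concatenate(([0.0], np.logspace(-3, 3, 1000)))
R = r_index(G, omegas)
# Analytically (1 - G)/(1 + G) = -1/(2s + 3), whose peak gain 1/3
# occurs at w = 0, so this system has a comfortable excess of passivity.
```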
The following result is analogous to the Small Gain Theorem for feedback loops. It gives a simple condition on R-indices for compensating a shortage of passivity in one system by an excess of
passivity in the other.
Small-R Theorem: Let $G_1(s)$ and $G_2(s)$ be two linear systems with passivity R-indices $R_1$ and $R_2$, respectively. If $R_1 R_2 < 1$, then the negative feedback interconnection of $G_1$ and $G_2$ is stable.
[1] Xia, M., P. Gahinet, N. Abroug, C. Buhr, and E. Laroche. “Sector Bounds in Stability Analysis and Control Design.” International Journal of Robust and Nonlinear Control 30, no. 18 (December
2020): 7857–82. https://doi.org/10.1002/rnc.5236.
See Also
isPassive | getPassiveIndex | passiveplot
Making Sense of Percentages: Demystifying Common Calculation Errors
Percentages are ubiquitous in our everyday lives, from calculating tips at a restaurant to working out discounts while shopping. However, it is not uncommon to witness people making mistakes while
calculating percentages. In this article, we will demystify some of the common calculation errors made while working with percentages and explore some nifty techniques to get those numbers right.
Understanding Percentages
To put it simply, a percentage is a fraction with 100 as the denominator. For example, 50% is the same as 50 ÷ 100 or 0.5. Percentages are used to represent a part of a whole, where the whole is
represented by 100%. So if 15 out of 20 people prefer red over blue, we can calculate the percentage of people who prefer red as follows:
15 ÷ 20 × 100% = 75%
Common Calculation Errors
Let us explore some of the common calculation errors made while working with percentages.
1. Forgetting to Convert the Percentage to Decimal
When we need to multiply a number by a percentage, we need to first convert the percentage into a decimal. For example, if we need to calculate 20% of 50, we first need to convert 20% into a decimal
as follows:
20% = 20 ÷ 100 = 0.2
Now, we can multiply this decimal by 50 to get the answer:
0.2 × 50 = 10
If we forget to convert the percentage to a decimal, we will get an incorrect answer. In the above example, if we mistakenly multiplied 50 by 20 directly, we would get 1000, which is obviously wrong.
2. Confusing Percentage Increase and Percentage Points
There is a big difference between a percentage increase and percentage points. A percentage increase is the relative change in a value, whereas percentage points are the absolute change in a value.
For example, let’s say that a shirt costs $40 and its price increases to $44. The percentage increase in the price is:
(44 − 40) ÷ 40 × 100% = 10%
However, if the price increases from $40 to $45, the percentage increase is:
(45 − 40) ÷ 40 × 100% = 12.5%
Percentage points, by contrast, measure the absolute difference between two percentages. Moving from the 10% increase in the first example to the 12.5% increase in the second is a change of 2.5 percentage points, even though the relative difference between the two rates is 25%.
3. Misunderstanding the Base Value
The base value is the original value to which we apply a percentage change. Often, a calculation error occurs when we use the wrong base value.
For example, let’s say that the value of a car increases by 10% to $22,000. A common error is to find the original price by subtracting 10% of $22,000, giving $19,800. This is incorrect because the 10% increase was applied to the original value, not to the new one, so we must divide by the growth factor instead:
22,000 ÷ 1.10 = 20,000
The original price was $20,000, and indeed 20,000 + 10% of 20,000 = 20,000 + 2,000 = 22,000.
Nifty Techniques
Now that we have demystified some of the common calculation errors made while working with percentages, let’s explore some nifty techniques to get those numbers right.
1. Using Proportions
Proportions can be a useful tool in percentage calculations. For example, let’s say that we want to find the percentage of 75 out of 100. We can set up the following proportion:
75/100 = x/100
Solving this proportion gives:
x = 75
So the percentage of 75 out of 100 is 75%.
2. Using Percentages in Reverse
We can also use percentages in reverse to work out the original value from a given percentage change. For example, let’s say that the price of a car increased by 15% and the new price is $23,000. To
find the original price, we can use the following formula:
Original price = New price ÷ (1 + Percentage increase)
Substituting the values we have:
Original price = 23,000 ÷ (1 + 0.15) = 23,000 ÷ 1.15 = $20,000
So the original price of the car was $20,000.
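Both the forward and the reverse calculations can be captured in small helpers; the function names below are illustrative.

```python
def percent_of(pct, value):
    # Convert the percentage to a decimal first, then multiply.
    return (pct / 100) * value

def original_price(new_price, pct_increase):
    # Reverse a percentage increase: divide by the growth factor.
    return new_price / (1 + pct_increase / 100)

# Examples from the article: 20% of 50, and a car priced at $23,000
# after a 15% increase.
amount = percent_of(20, 50)
price = original_price(23000, 15)
```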
Frequently Asked Questions
1. What is the easiest way to calculate percentages?
The easiest way to calculate percentages is to use a calculator. However, it’s always good to understand the underlying concept of percentages and not rely solely on the calculator.
2. How do I convert a decimal to a percentage?
To convert a decimal to a percentage, multiply it by 100%. For example, to convert 0.5 to a percentage, we would multiply it by 100%:
0.5 × 100% = 50%
3. Can percentages be added or subtracted?
Percentages cannot be added or subtracted. However, you can add or subtract the actual values represented by the percentages. For example, if you have two percentages of 10% and 20%, you cannot add
them to get 30%. Instead, you would need to add the values they represent. If the values are $100 and $200, the total would be $300.
Percentages play a significant role in our day-to-day lives, and understanding their calculations is essential. By avoiding common calculation errors and using nifty techniques, you can master the
art of percentages and make it work to your advantage.
[Beowulf] Threaded code (& Fortran)
Josip Loncaric josip at lanl.gov
Wed Aug 18 13:47:41 PDT 2004
Jeff Layton wrote:
> [...] Start coding in Fortran or the dog gets it! [...]
I learned programming 34 years ago the hard way, using Fortran IV and
punched cards, but my preference today is C. Even so, there are a few
advantages Fortran had (or still has):
* Intrinsic complex data types and functions
* More sensible notation for indexing matrices and multi-dim. arrays
* Operators like x**y
* Same precision operations & pass-by-value (unlike traditional K&R C)
* etc.
C was originally devised for systems programming, but Fortran is geared
towards scientific computing, where it still dominates for historical
reasons (e.g. LINPACK).
If we start talking about C99, we should at least talk about Fortran 90
or 95. Fortran 90/95 is not your father's Fortran IV. Many scientific
programmers prefer Fortran 90/95 despite the addition of "complex" data
type in C99.
Fortran has evolved to include a nice array language and abstract data
types, so that C's advantage in handling data structures is reduced.
Fortran now allows free source form (no column 7-72 limit), recursion,
improved I/O, the "implicit none" statement, and many other enhancements
which deliver a faster and more reliable development cycle in scientific
computing.
Having said all that, I use C mostly because it is ubiquitous. GNU C is
excellent and available virtually everywhere. Good Fortran 90/95
compilers are harder to get.
For a good discussion of Fortran-vs-C in scientific computing, see the
"Preliminaries" section of "Numerical Recipes in C" by William Press et
al., particularly early editions.
P.S. Real physicists (and control theorists and mathematicians)
*routinely* use complex numbers: generic real polynomials have complex
roots. Computing roots (or eigenvalues) using only real numbers is
possible, but needlessly complicated and prone to coding errors.
Pre-C99 extensions for complex arithmetic in C are a royal pain.
Quaternions are rarely used. Equivalent matrix representations are
often easier to work with anyway.
Top Tools for Solving Classical Mechanics Problems
Delving into the realm of classical mechanics can be both challenging and rewarding for students and professionals alike. Whether you are grappling with projectile motion, analyzing forces and
energy, or studying rotational dynamics, having the right tools at your disposal can make a world of difference in tackling complex problems.
In this article, we will explore three top tools that can help you navigate the intricacies of classical mechanics with ease. From calculators to simulation software, these resources are designed to
streamline your problem-solving process and enhance your understanding of fundamental physics principles.
So, let's dive in and discover how these tools can revolutionize your approach to classical mechanics problems.
Newton's Laws of Motion
Newton's Laws of Motion are the foundation of classical mechanics and serve as the basis for solving a wide range of physics problems. The first law, also known as the law of inertia, states that an
object will remain at rest or in uniform motion unless acted upon by an external force.
The second law defines how the acceleration of an object is directly proportional to the net force acting on it and inversely proportional to its mass. Finally, the third law states that for every
action, there is an equal and opposite reaction.
These three laws work together to provide a framework for understanding and analyzing the motion of objects in the physical world. By applying the principles of Newton's laws, physicists and engineers
can solve complex problems involving motion and forces with precision and accuracy.
Energy Conservation
Energy conservation is a fundamental principle in classical mechanics, and understanding how energy is conserved in a system is crucial for solving problems in this field. One top tool for solving
classical mechanics problems related to energy conservation is the conservation of mechanical energy equation, which states that the total mechanical energy of a system remains constant as long as
only conservative forces are doing work.
Another useful tool is the concept of potential energy, which allows us to analyze how energy is stored in a system due to positions of objects relative to one another. By applying these tools
effectively, physicists and engineers can accurately predict the behavior of systems and make informed decisions about energy utilization.
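As a minimal worked example of mechanical energy conservation, the sketch below computes the speed of an object after a frictionless fall, where gravitational potential energy converts entirely to kinetic energy; the drop height is an illustrative number.

```python
import math

def speed_after_fall(height_m, g=9.81):
    # Conservation of mechanical energy with only gravity doing work:
    #   m*g*h = (1/2)*m*v^2  =>  v = sqrt(2*g*h); the mass cancels out.
    return math.sqrt(2 * g * height_m)

v = speed_after_fall(20.0)  # speed after a 20 m frictionless drop
```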
Work-Energy Theorem
One essential concept in classical mechanics is the Work-Energy Theorem, which states that the work done on an object is equal to the change in its kinetic energy. This theorem is a powerful tool in
problem-solving as it allows us to analyze the effects of forces on the motion of objects.
By calculating the work done on an object and understanding how it affects its kinetic energy, we can make predictions about its motion and behavior. In classical mechanics problems, applying the
Work-Energy Theorem can help us determine the final speed of an object, the distance it travels, or the force required to achieve a certain motion.
Mastering this theorem is crucial for success in solving a wide range of mechanical problems.
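The Work-Energy Theorem lends itself to a short worked example: given the work done by a constant force, we can solve for the final speed. The numbers below are illustrative.

```python
import math

def final_speed(force_n, distance_m, mass_kg, v0=0.0):
    # Work-Energy Theorem: F*d = (1/2)*m*v^2 - (1/2)*m*v0^2,
    # so v = sqrt(v0^2 + 2*W/m) with W = F*d.
    work = force_n * distance_m
    return math.sqrt(v0 ** 2 + 2 * work / mass_kg)

# A 10 N force pushing a 2 kg object through 4 m from rest.
v = final_speed(force_n=10.0, distance_m=4.0, mass_kg=2.0)
```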
In conclusion, the three top tools for solving classical mechanics problems discussed in this article have proven to be invaluable resources for students, researchers, and professionals alike.
Whether utilizing computational software, analytical techniques, or experimental tools, individuals can address a wide range of problems in classical mechanics with confidence and efficiency.
By harnessing the power of these tools, individuals can navigate complex physical systems, derive solutions to challenging equations, and gain deeper insights into the fundamental principles
governing motion and forces. With access to these top tools, the study and application of classical mechanics are made more accessible and rewarding, empowering users to tackle and overcome the most
intricate problems in this field.
Let these tools serve as invaluable assets in your exploration and understanding of the solved problems in classical mechanics.
Appendix 1
The Quadratic Formula and Parameters
The well-known quadratic formula gives an example of the all-encompassing power of parameters. Examples of quadratic equations include the following:
1. 6x^2 + 13x + 8 = 0.
2. x^2 + 11x + 5 = 0.
3. 20x^2 + 5x – 36 = 0.
4. x^2 – 7x + 72 = 0.
Techniques for solving some problems of these types existed going all the way back to ancient Babylon over 3500 years ago—although it was only in the sixteenth and seventeenth centuries that the
presentation of these equations would be similar in form to the way that the four examples are written.
One of the general methods for solving these equations is known as “completing the square.” The method can be applied to each of these equations to find individual solution(s) for each.
In the language of Chapter 4, we can think of each quadratic equation as describing an individual scenario where the three fundamental types of behavior (x^2 behavior, x behavior, and constant
behavior) each have a specific numerical value (or “price in dollars”) assigned to them. The values of these “prices” can change from scenario to scenario but remain constant in a given scenario and
are what we want to capture as parameters. Note that these “price values” are often called “numerical coefficients” or “given values” or “givens” for a specific scenario.
So, in Equation 1 from the list, the price value for x^2 is 6, for x is 13, and for the constant is 8. Once these price values are fixed, the x is still allowed to take on different values (like it
was in the break-even situation from Chapter 4, where it represented the number of meals in a specific scenario).
For Equation 2, the price this time for x^2 is 1, for x is 11, and for the constant is 5. And we get a new scenario, where the x is still free to take on different values. Similar situations occur
for Equations 3 and 4.
Just as we were able to use P, C, and F to represent the dollar values of selling price per item, cost to make each item, and fixed costs, respectively, in the break-even scenario, we can use letters
here to represent the “price values” for the three terms x^2, x, and the constant. The standard letters to use here are those early in the alphabet: a, b, and c, respectively.
Doing so, we obtain the following equation: ax^2 + bx + c = 0. The three letters a, b, and c represent parameters, and what they give us is the power to represent all quadratic equations by a single
super-equation. All of the individual equations can be obtained from this one by appropriate choices of a, b, and c. We illustrate this in the following table:
Set Parameters (a, b, c)    Super Equation ax^2 + bx + c = 0 Becomes:
6, 13, 8                    6x^2 + 13x + 8 = 0
1, 11, 5                    x^2 + 11x + 5 = 0
20, 5, –36                  20x^2 + 5x – 36 = 0
1, –7, 72                   x^2 – 7x + 72 = 0
Reasoning similarly, we can obtain all of the infinitely many other quadratic equations by appropriate choices of a, b, and c from the general equation.
We are not done, however.
The coup de grâce is that the method of completing the square, which can be used to solve each individual equation in its specific scenario, can now be applied to the super-equation (ax^2 + bx + c = 0) to yield a general solution to the infinitely many equations all at once—in one grand maneuver. Doing so in this case ultimately yields the famous quadratic formula (details not shown):
x = (–b ± √(b^2 – 4ac)) / (2a)
The formula represents in writing a crystallization of the entire process of completing the square. It also means that once we identify the parameters in a given quadratic equation (which can be done
on sight), instead of having to perform the more involved method of completing the square each time, we can simply plug the values for a, b, and c into the crystallized quadratic formula and do some
arithmetic, and the solution pops out for us.
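That plug-and-compute process can be sketched in a few lines; cmath is used so that a negative discriminant still yields the (complex) roots.

```python
import cmath

def quadratic_roots(a, b, c):
    # Direct transcription of x = (-b +/- sqrt(b^2 - 4ac)) / (2a);
    # cmath.sqrt handles negative discriminants by returning complex values.
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

# Check Equation 1 from the list: 6x^2 + 13x + 8 = 0.
r1, r2 = quadratic_roots(6, 13, 8)
# Each returned root makes the left-hand side (essentially) zero.
```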
Moreover, this technique of using parameters to freeze-dry in writing the results of many algebraic maneuvers is not limited to the quadratic equation here or the break-even equation in Chapter 4,
but can be used in all kinds of other situations with similar effect. This is what we have called big algebra.
Fertilizing the soil for others to purposefully use and systematically apply parameters for wide-scale impact and insight is—in the mind of many mathematical historians—Viète’s most important and
revolutionary contribution to mathematics.
Understanding Mathematical Functions: How To Add Functions
Mathematical functions are a fundamental concept in algebra and calculus, representing a relationship between a set of inputs and a set of possible outputs. They are essential for understanding and
solving a wide range of mathematical problems. It is important to grasp the concept of adding functions as it allows us to combine different mathematical relationships and analyze their combined
effect. In this blog post, we will delve into the process of adding functions and explore its significance in mathematics.
Key Takeaways
• Understanding mathematical functions is crucial for solving a wide range of mathematical problems.
• Adding functions allows for the combination and analysis of different mathematical relationships.
• Function notation is important when adding functions, and it simplifies the process.
• It is essential to avoid common mistakes when adding functions, such as confusing addition with composition of functions.
• Adding functions has real-world applications in physics, economics, finance, and computer science.
The basics of adding functions
When learning about mathematical functions, it's essential to understand the basics of adding them together. This concept builds on the traditional method of adding numbers and introduces a new layer
of complexity.
A. Review the concept of adding numbers
• Start by reminding readers of the fundamental concept of adding numbers together, using examples to illustrate the process.
• Emphasize that adding numbers involves combining their values to obtain a single result.
B. Introduce the concept of adding functions
• Transition into the concept of adding functions by highlighting that functions can also be combined to create a new function.
• Explain that adding functions involves adding their respective outputs for each input value, which results in a new combined function.
C. Explain how to add two functions together
• Provide a step-by-step guide on how to add two functions together, including examples to demonstrate the process.
• Highlight that when adding two functions, it's essential to consider each function's domain and ensure that the resulting function is well-defined for all possible input values.
Understanding the basics of adding functions is crucial for mastering more advanced concepts in mathematics. By reviewing the concept of adding numbers, introducing the idea of adding functions, and
explaining the process of adding two functions together, readers can develop a solid foundation for further exploration of mathematical functions.
Understanding function notation
When working with mathematical functions, understanding function notation is crucial for performing operations such as addition. Function notation is a way of representing a function in a concise and
standardized manner.
A. Define function notation
Function notation is typically represented as f(x), where f is the name of the function and x is the input variable. The function f takes the input x and produces an output, which is denoted as f(x).
B. Show how to use function notation when adding functions
When adding two functions, we can use function notation to represent the individual functions and then perform the addition operation. This involves adding the outputs of the two functions for a
given input value.
C. Provide examples of adding functions using notation
Let's consider the following example:
• f(x) = 2x + 3
• g(x) = x^2 - 1
1. Using function notation:
When adding these two functions, we can denote the sum as (f + g)(x) and then perform the addition operation on the individual function outputs:
(f + g)(x) = f(x) + g(x) = (2x + 3) + (x^2 - 1) = x^2 + 2x + 2
By using function notation, we can clearly represent the process of adding the two functions and then simplify the resulting expression.
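A minimal sketch of this definition of (f + g)(x), using the two example functions above:

```python
def f(x):
    return 2 * x + 3

def g(x):
    return x ** 2 - 1

def add_functions(f, g):
    # (f + g)(x) = f(x) + g(x): the sum function evaluates both
    # functions at the same input and adds their outputs.
    return lambda x: f(x) + g(x)

h = add_functions(f, g)  # h(x) = x^2 + 2x + 2
```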
The process of adding different types of functions
When it comes to understanding mathematical functions, the process of adding different types of functions is an essential skill to master. Whether you are dealing with linear, quadratic, or
exponential functions, the principles for adding them remain the same. In this chapter, we will discuss how to add each of these types of functions.
A. Adding linear functions
Understanding linear functions
Linear functions are those that can be represented by a straight line on a graph. They have the general form of y = mx + b, where m is the slope of the line and b is the y-intercept. When adding
linear functions, the process is relatively straightforward.
The steps for adding linear functions
• Step 1: Ensure that the linear functions are in the form of y = mx + b.
• Step 2: Add the coefficients of the x terms together to obtain the new slope.
• Step 3: Add the y-intercepts together to obtain the new y-intercept.
• Step 4: Write the new linear function in the form of y = mx + b.
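The coefficient-wise steps above can be sketched as follows; the example slopes and intercepts are illustrative.

```python
def add_linear(m1, b1, m2, b2):
    # (m1*x + b1) + (m2*x + b2) = (m1 + m2)*x + (b1 + b2):
    # add the slopes and add the y-intercepts.
    return m1 + m2, b1 + b2

m, b = add_linear(2, 3, 4, -1)  # (2x + 3) + (4x - 1) = 6x + 2
```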
B. Adding quadratic functions
Understanding quadratic functions
Quadratic functions are those that can be represented by a parabola on a graph. They have the general form of y = ax^2 + bx + c, where a, b, and c are constants. Adding quadratic functions involves
combining the terms with the same degree.
The steps for adding quadratic functions
• Step 1: Ensure that the quadratic functions are in the form of y = ax^2 + bx + c.
• Step 2: Add the coefficients of the x^2, x, and constant terms together to obtain the new quadratic function.
• Step 3: Write the new quadratic function in the form of y = ax^2 + bx + c.
C. Adding exponential functions
Understanding exponential functions
Exponential functions are those that have a constant ratio between successive values. They have the general form of y = a * b^x, where a and b are constants. Adding exponential functions involves
combining terms with the same base.
The steps for adding exponential functions
• Step 1: Ensure that the exponential functions are in the form of y = a * b^x and share the same base b.
• Step 2: Add the coefficients a of the b^x terms together to obtain the new exponential function.
• Step 3: Write the new exponential function in the form of y = a * b^x. Note that if the bases differ, the sum is not a single exponential function and must be left as a sum of terms.
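A sketch of the same-base case, assuming both terms share the base b (the coefficients and base here are illustrative):

```python
def add_exponentials_same_base(a1, a2, base):
    # a1*b^x + a2*b^x = (a1 + a2)*b^x -- valid only when both
    # terms share the same base b.
    coeff = a1 + a2
    return lambda x: coeff * base ** x

h = add_exponentials_same_base(2, 5, 3)  # 2*3^x + 5*3^x = 7*3^x
```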
Common mistakes to avoid when adding functions
When it comes to adding mathematical functions, it's important to be aware of potential mistakes that can lead to errors in your calculations. Here are some common mistakes to avoid:
A. Confusing addition with composition of functions
One common mistake when adding functions is confusing addition with composition. When adding functions, you are simply combining them by adding their respective outputs for each input. On the other
hand, composition involves applying one function to the output of another. It's important to understand the distinction between these two operations to avoid errors in your calculations.
B. Forgetting to simplify the resulting function
Another mistake to avoid is forgetting to simplify the resulting function after adding the individual functions. When you add two functions, the resulting function may be simplified by combining like
terms and simplifying fractions. Failing to simplify the function can lead to confusion and errors in further calculations.
C. Misinterpreting the domain and range when adding functions
It's essential to consider the domain and range of each function when adding them together. Misinterpreting the domain and range can result in inaccuracies in the final function. Ensure that you
understand the domain and range of each function before adding them, and consider how they may impact the domain and range of the resulting function.
Real-world applications of adding functions
Mathematical functions are used in a variety of real-world applications, from physics to economics to computer science. Understanding how to add functions is crucial in solving complex problems in
these fields.
A. Show how adding functions is used in physics
• Projectile motion: When an object is thrown or launched into the air, its position can be described by two separate functions for horizontal and vertical motion. By adding these two functions,
physicists can determine the object’s overall trajectory and predict where it will land.
• Wave interference: In wave physics, the superposition of multiple wave functions requires adding these functions together to determine the resulting wave pattern. This is essential for
understanding phenomena such as sound waves, light waves, and quantum mechanics.
B. Discuss applications in economics and finance
• Portfolio management: Financial analysts often use mathematical functions to model the performance of different investment assets. Adding these functions allows them to calculate the overall
return and risk of a portfolio, as well as optimize investment strategies.
• Supply and demand: In economics, the intersection of supply and demand functions determines the equilibrium price and quantity of goods in a market. This involves adding these two functions to
find the point of balance.
C. Provide examples of how adding functions is used in computer science
• Algorithm analysis: Computer scientists analyze the efficiency of algorithms by studying their time complexity, which often involves adding together separate functions that represent different
parts of the algorithm’s execution time.
• Signal processing: Adding functions is crucial in fields such as digital signal processing, where it is used to combine and manipulate digital signals for tasks like audio processing, image
processing, and data compression.
In conclusion, we have discussed the concept of adding mathematical functions and how to do so effectively. We have learned that when adding functions, we simply add the corresponding terms together.
It is important to understand the rules and techniques for adding functions in order to apply them to various fields of study and professions.
• Summarize the key points discussed: We have learned that adding functions involves adding the corresponding terms together and that understanding this concept is crucial for various applications.
• Emphasize the importance of understanding how to add functions: Whether you are a student, a scientist, an engineer, or a mathematician, understanding how to add functions is essential for
solving complex problems and advancing in your field.
Therefore, it is crucial to grasp the concept of adding functions in order to excel in your academic and professional endeavors.
Nernst Equation Calculator 2024
Enter the standard electrode potential, temperature, number of electrons, and reaction quotient into the calculator to determine the cell potential.
Nernst Equation Formula
The following formula is used to calculate the cell potential using the Nernst equation.
• E: Cell potential (V)
• E°: Standard electrode potential (V)
• R: Gas constant (8.314 J/(mol·K))
• T: Temperature (K)
• n: Number of electrons transferred
• F: Faraday constant (96485 C/mol)
• Q: Reaction quotient
To calculate the cell potential, use the formula: E = E° – (RT/nF) * ln(Q).
What is the Nernst Equation?
The Nernst equation relates the cell potential of an electrochemical cell to the standard electrode potential, temperature, number of electrons, and the reaction quotient. It is essential for
understanding the electrochemical behavior of cells under non-standard conditions.
How to Calculate Cell Potential?
The following steps outline how to calculate the cell potential using the Nernst equation:
1. Determine the standard electrode potential (E°).
2. Measure the temperature (T) in Kelvin.
3. Identify the number of electrons (n) involved in the reaction.
4. Calculate the reaction quotient (Q).
5. Use the formula: E = E° – (RT/nF) * ln(Q).
6. Insert the values into the formula and solve for the cell potential (E).
Example Problem:
Use the following variables as an example problem to test your knowledge:
Standard Electrode Potential (E°) = 1.00 V
Temperature (T) = 298 K
Number of Electrons (n) = 2
Reaction Quotient (Q) = 10
By inserting these values into the Nernst equation, you can calculate the cell potential.
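As a rough cross-check of the worked example, here is a small Python sketch of the Nernst equation, using the constant values listed above (the function name `nernst` is illustrative):

```python
import math

R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday constant, C/mol

def nernst(e_standard, temperature, n_electrons, q):
    """Cell potential: E = E0 - (RT / nF) * ln(Q)."""
    return e_standard - (R * temperature) / (n_electrons * F) * math.log(q)

# Example problem: E0 = 1.00 V, T = 298 K, n = 2, Q = 10
e = nernst(1.00, 298.0, 2, 10.0)
print(round(e, 4))  # approximately 0.9704 V
```

Note that the correction term is small here because n = 2 electrons makes the RT/nF prefactor only about 0.0128 V.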
1. What is the standard electrode potential?
The standard electrode potential (E°) is the measure of the individual potential of a reversible electrode at standard state, which is with solutes at an effective concentration of 1 mol/L, and gases
at a pressure of 1 atm.
2. How does temperature affect the cell potential?
Temperature affects the cell potential as it is directly proportional to the thermal energy available for the reaction. This relationship is incorporated into the Nernst equation through the
temperature variable (T).
3. What is the reaction quotient?
The reaction quotient (Q) is a measure of the relative amounts of products and reactants present during a reaction at a given point in time.
4. Why is the Nernst equation important?
The Nernst equation is important because it allows for the calculation of cell potentials under non-standard conditions, providing insights into the feasibility and direction of electrochemical reactions.
5. Can this calculator be used for different temperatures?
Yes, you can adjust the temperature field to match the conditions of your specific reaction to calculate the cell potential accordingly.
|
{"url":"https://calculator.city/nernst-equation-calculator-2024/","timestamp":"2024-11-05T23:19:18Z","content_type":"text/html","content_length":"72097","record_id":"<urn:uuid:658b65f9-7f44-49d8-bdf6-ea85f503384f>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00236.warc.gz"}
|
Greater Than Less Than Lessons for First Grade - Primary Theme Park
Comparing numbers in K/1 using the greater than, less than symbols can be challenging! Young students often confuse the symbols and struggle with the concept. These three Greater Than, Less Than
lessons for kindergarten and first grade will help you teach your students how to confidently compare numbers.
Use these lesson ideas with your whole class, in a guided math group, or for individual students who need extra help with this particular skill.
Watch this video to hear me share about each lesson, or read about them below!
Lesson One: Use Words Before Symbols
• 2 paper plates
• small candy pieces
• index cards
• marker
• greater than, less than, equal to cards
Begin by showing students two plates of candy. Make the plate on the left obviously have more candy than the one on the right. Ask students if they get to eat the candy from one of the two plates,
which one would they choose and why.
I’m going to go out on a limb here and say that students are going to choose the plate with the greatest amount of candy. Who doesn’t want the most candy, right?!
Now, ask your students how they knew the plate had the most candy. They’ll probably say something along the lines of “It looked like it had more candy” or “I could tell there was more candy on that
plate than the other one.”
Point out that they compared the two amounts of candy. Explain that when we compare numbers or amounts, we decide if one is greater, less than, or equal to the other. When we look at the plates, we
see the amount on the left is greater than the amount on the right.
Count the amount of candy on each plate and write it on an index card under the plate. Place a card with the words “is greater than” between the two numbers. Read the comparison aloud using the words and numbers.
Do more examples like this using different amounts of candy. Count the amount of candy on each plate and write it on an index card underneath. Place the written words between the numbers and read the comparison aloud.
Finally, have students practice comparing numbers using the words “is greater than”, “is less than” and “is equal to”. Using the phrases first helps students when the symbols are introduced later. It
also gets them used to reading the comparisons left to right, which is how inequalities are read.
What about Non-readers?
Even though non-readers won’t be able to read the words themselves, I still think it’s important for them to hear and understand the language of comparison before they see the symbols. Here are a few
ways to support them:
• On practice problems, use the phrases in the exact same order every time so they know the pattern.
• Point out the beginning sounds in the words greater, less and equal to help them figure the words out.
• Add the words “greater”, “less” and “equal” to your sight word list for students to learn.
• Read the phrases aloud to them.
Lesson Two: Introduce the >, < and = Symbols
Now it’s time to introduce the >, < and = symbols. Students are probably familiar with the equal to symbol from addition and subtraction problems. The other two symbols can be a bit tricky!
• 2 paper plates
• small candy pieces
• index cards
• marker
• greater than, less than, equal to word cards
• > , < and = cards
Use the candy and plates from the previous lesson with the word cards in between. Review using the word cards to compare amounts.
Explain to students that it would take a really long time to compare numbers if we always had to write out the phrases “is greater than”, “is less than” or “is equal to” every time. Thankfully,
mathematicians came up with symbols to use instead as a short cut!
Many teachers use the “alligator eats the bigger number” method for teaching the > and < symbols. That’s a fun and totally acceptable method. As a matter of fact, that’s how my son remembers which
one is which!
Unfortunately, I find that trick doesn’t work with all students. It didn’t work at all for my daughter. Even though I knew she got the concept of comparing numbers, she often asked me, “Which way
does the greater than symbol go?”.
I teach it this way:
The greater than symbol always points to the right. An easy way to remember this is to say the first two letters in greater, gr, stand for “go right”. For students who need a visual reminder, the
greater than symbol looks like the shape made by the thumb and index finger on their right hand. When students draw the greater than symbol, they “go right” first and then back to the left to make
the shape.
The less than symbol always points to the left. An easy way to remember this is to say the first letter in less, l, stands for “left”. The less than symbol looks like the shape made by the thumb and
index finger on the left hand. When students draw the less than symbol, they go left first and then back to the right to make the shape.
Complete several examples with the candy and paper plates as in the previous lesson, each time replacing the phrases with the symbols.
Practice reading inequalities together, pointing out that they’re read left to right just like a sentence. Have students identify the greater or smaller number in the comparison.
Lesson Three: Compare Numbers Using the >, < , and = Symbols
• small cardboard letter “V”
Review the greater than and less than symbols introduced in the previous lesson.
A fun, hands-on way for students to practice comparing numbers is to use a tangible greater than, less than symbol. I picked up a small cardboard letter “V” at a craft store for $1. If you turn it to
the right or left, it magically becomes a greater or less than symbol!
Put magnets on the back, place it on a magnetic board, and allow students to use it to compare numbers. You might have students write two numbers and then place the symbol in between them to make the
inequality true. Another idea is give students a number on one side of the inequality and then they write the other number and turn the symbol the correct direction.
For more fun, engaging ways for students to practice comparing numbers, be sure to check out my blog post, Five Fun Ways to Compare Numbers. I share activities that include:
• Another hands-on greater than, less than symbol (from a flexible drinking straw!)
• Number scavenger hunt
• How to make a greater than, less than stamp
• Random number generator activity
• Spin to make inequalities
This link takes you to the free comparing numbers video in my TPT store. The posters are available for free with the video.
Download the posters by clicking on the “Download” button where it says “Supporting Document Included”.
|
{"url":"https://www.primarythemepark.com/2018/07/greater-than-less-than-lessons-for-first-grade/","timestamp":"2024-11-10T05:50:31Z","content_type":"text/html","content_length":"69286","record_id":"<urn:uuid:a8779b64-11a8-42f4-ba2b-abcca10ae556>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00818.warc.gz"}
|
FMS as a calculator - Functional Modelling System
First Steps: FMS as a calculator
Let's see how we can use FMS as a calculator. We can define an identifier like x to be a numeric value like 4+5 as: x := 4 + 5. Don't forget the dot at the end (every statement ends with it).
You didn't write a constraint which uses your newly defined x. As a result, FMS thinks you can't be interested in it, as it tries to prune away as many things as possible. If you don't want to write a constraint about x, we can use the library function relevant to cheat our way out of this. Now we can write down a working example:
The standard operations +, -, *, / are available. Take care that the division operator is integer division, so it truncates. The abs function, which takes the absolute value of a number, is also available.
Multiple symbols
It is possible to define multiple symbols and use one to define another. The order of your statements is irrelevant. Now we can define y to be the double of x.
x := 4+5.
y := 2*x.
relevant x.
relevant y.
Symbols with local scopes
Sometimes it will be useful to introduce a symbol locally, for example when you want to reuse the result of a complex expression multiple times. This can be done with the let-construct. You could for
instance use a let-construct to simplify the expression (3+4)*2+(3+4) to let y:=3+4 in y*2+y. This helps the readability of your expressions. You can also introduce multiple new symbols in a
let-construct if you separate them with a semicolon. The order in which these symbols are introduced again does not matter.
a := (3+4)+(3+4)*2.
b := let y := 3+4 in y+(y*2).
c := let y := 3+4 ; z := y*2 in y+z.
relevant a.
relevant b.
relevant c.
A let-construct can introduce shadowing. When you locally define an identifier which was in scope already. The outer-scoped one is hidden and unreachable in the inner expressions.
a := 2.
//x is 6 although the outer a is defined as 2
x := let a := 3 in a + a.
relevant x.
The other way around
Unlike simple calculators, FMS allows you to solve equations with multiple solutions. For this we use variables that can take a range of possible values, and constraints (equations) over these variables.
Declaring a ranged variable is done through name :: element of setExpression. For now we only introduce two kinds of sets (ranges):
• Contiguous ranges of integers. A range from a to b can be written down as {a..b}.
• Enumerations. The enumeration of a few constants like {1,2,8,6}.
Then, you can write down the constraints which should hold. All global variables occurring in a constraint are considered relevant, so we do not have to explicitly state this anymore.
Using constraints, it possible to have multiple results. By default all possible results are printed. If you want to override this you can use the statement nbModels n at the top of your file, where
the n represents the number of solutions you are interested in. If you set n to 0 it means that you are interested in all solutions.
#nbModels 1.
x :: element of {1..10}.
y :: element of {1..10}.
z :: element of {1..10}.
x + y - z = 12.
x - y = 4.
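To illustrate what FMS is searching for in this example, here is a quick brute-force check in Python (the variable names mirror the FMS model above; this is just a sketch, not how FMS actually solves):

```python
# Enumerate all (x, y, z) in {1..10}^3 satisfying both constraints:
#   x + y - z = 12  and  x - y = 4
solutions = [
    (x, y, z)
    for x in range(1, 11)
    for y in range(1, 11)
    for z in range(1, 11)
    if x + y - z == 12 and x - y == 4
]
print(solutions)  # [(9, 5, 2), (10, 6, 4)]
```

So with #nbModels 1 at the top of the file, FMS would stop after printing the first of these two solutions.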
Note that for constraints we use the equals symbol = instead of the definitional equality :=.
When there are multiple solutions it is possible that you're interested in an optimal one according to a certain criterion. For this you can add an optimization statement: $minimize or $maximize with
a term which needs to be optimized.
x :: element of {1..10}.
y :: element of {1..10}.
$maximize (x-y).
In this case multiple solutions will still be printed. But each subsequent solution is guaranteed to be better than the previous one.
More than numbers
FMS also supports strings between double quotes wherever you write a number. The standard lexicographic order applies when comparing strings.
set := {"a","b","c"}.
x :: element of set.
y :: element of set.
z :: element of set.
x ~= "a".
y = z.
x =< y.
The Boolean values true and false are available through the standard library. You can introduce a Boolean variable as a proposition through p :: proposition. This behaves just like p :: element of
{true,false}, but it allows for a more efficient implementation. Constraints over propositions can be written with logical connectives | (or), & (and), ~ (not), => <= (implies), <=> (if and only if).
a :: element of {1..3}.
b :: proposition.
c :: proposition.
d :: proposition.
b => a = 1.
~c <=> (d | b).
|
{"url":"https://tech.io/playgrounds/12240/functional-modelling-system/fms-as-a-calculator","timestamp":"2024-11-11T18:30:21Z","content_type":"text/html","content_length":"341282","record_id":"<urn:uuid:d88de9d8-c4f2-45cd-93ab-8bb77c6229eb>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00498.warc.gz"}
|
How much is a mu in square meters?
How much is a mu in square meters?
Mu or Mou is a unit of measurement for area. It is one of the traditional land measurement units of China. One mu is equal to 666.7 square meters.
How big is a mu?
Mu is a unit of area that is often used in East Asia. 1 mu corresponds to 1/15 ha, i.e. about ⅔ × 1000 m² (≈ 666.7 m²).
How many acres is 1 mu?
How many Acres are in a Mu? The answer is one Mu is equal to 0.1647367761672 Acres.
What unit of measure is MU?
μ is used as a symbol for: the SI prefix micro, meaning a factor of 10^-6 (one millionth). μ by itself is often used as the “unit” of strain, though in this context it retains its SI prefix meaning, which is interchangeable with “× 10^-6” or “ppm” (parts per million).
How many MU are in a hectare?
Mu to Hectare Conversion Table
Mu [mu] Hectare [ha]
1 0.066666667
2 0.133333334
3 0.200000001
4 0.266666668
What is Mew M?
micrometre, also called micron, metric unit of measure for length equal to 0.001 mm, or about 0.000039 inch. Its symbol is μm.
How many km are in a mu?
Mu to Square Kilometer Conversion Table
Mu [mu] Square Kilometer [km2]
1 0.00066666667
2 0.00133333334
3 0.00200000001
4 0.00266666668
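All the conversion factors above follow from 1 mu = 10000/15 m² (≈ 666.67 m²). A small Python sketch (the function name `mu_to` and the unit keys are illustrative):

```python
MU_IN_M2 = 10000.0 / 15.0  # 1 mu = 666.666... square meters

def mu_to(amount, unit):
    """Convert an area given in mu to another unit."""
    m2 = amount * MU_IN_M2
    factors = {
        "m2": 1.0,
        "ha": 1.0 / 10000.0,         # 1 ha = 10,000 m^2
        "km2": 1.0 / 1_000_000.0,    # 1 km^2 = 1,000,000 m^2
        "acre": 1.0 / 4046.8564224,  # 1 acre = 4046.8564224 m^2
    }
    return m2 * factors[unit]

print(round(mu_to(1, "ha"), 9))    # 0.066666667
print(round(mu_to(1, "acre"), 4))  # 0.1647
```

These reproduce the table values above, including the acre figure of roughly 0.16474.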
What is mu full form?
The full form of MU is Miss You.
What does prefix mu mean?
one millionth
Micro (Greek letter μ (U+03BC) or the legacy symbol µ (U+00B5)) is a unit prefix in the metric system denoting a factor of 10−6 (one millionth). Confirmed in 1960, the prefix comes from the Greek
μικρός (mikrós), meaning “small”. The symbol for the prefix is the Greek letter μ (mu).
How many units are in Milliunits?
A unit is a standard of measurement, and a milliunit is one-thousandth of a unit.
What is the value of 1 Miu?
One micrometer is equal to one-millionth (1/1,000,000) of a meter, which is defined as the distance light travels in a vacuum in a 1/299,792,458 second time interval. The micrometer, or micrometre, is a multiple of the meter, which is the SI base unit for length. In the metric system, “micro” is the prefix for 10^-6.
|
{"url":"https://www.yoforia.com/how-much-is-a-mu-in-square-meters/","timestamp":"2024-11-13T13:14:06Z","content_type":"text/html","content_length":"51509","record_id":"<urn:uuid:e5f9b664-cc20-4eee-bca1-598666c08964>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00756.warc.gz"}
|
Workshop on Control Problems
• 09:50–10:00, : Opening.
Welcome, introductory words by organizers, technical information, and soundcheck for first talk.
• 10:00–10:45, : Null controllability of degenerate parabolic equations.
We study the null controllability of linear parabolic equations posed on the whole space $\mathbb{R}^n$ by means of a source term locally distributed on a subset of $\mathbb{R}^n$. We want to know to what extent the results known for the heat equation still hold in this degenerate setting, and to identify their geometric control condition. First, we recall known results for the heat equation. Then, we present a variant of the Lebeau-Robbiano method that is less greedy in its spectral analysis. It relies on projections for which a spectral inequality holds, and on appropriate smoothing properties that compensate for the lack of commutation between the projection and the evolution. Finally, we provide interesting sufficient conditions for the null controllability of Ornstein-Uhlenbeck equations and of quadratic equations with zero singular space.
• 11:00–11:45, : Control and Machine Learning.
In this lecture we shall present some recent results on the interplay between control and Machine Learning, and more precisely, Supervised Learning and Universal Approximation. We adopt the
perspective of the simultaneous or ensemble control of systems of Residual Neural Networks (ResNets). Roughly, each item to be classified corresponds to a different initial datum for the Cauchy
problem of the ResNets, leading to an ensemble of solutions to be driven to the corresponding targets, associated to the labels, by means of the same control. We present a genuinely nonlinear and
constructive method, allowing to show that such an ambitious goal can be achieved, estimating the complexity of the control strategies. This property is rarely fulfilled by the classical
dynamical systems in Mechanics and the very nonlinear nature of the activation function governing the ResNet dynamics plays a determinant role. It allows deforming half of the phase space while
the other half remains invariant, a property that classical models in mechanics do not fulfill. The turnpike property is also analyzed in this context, showing that a suitable choice of the cost
functional used to train the ResNet leads to more stable and robust dynamics. This lecture is inspired in joint work, among others, with Borjan Geshkovski (MIT), Carlos Esteve (Cambridge),
Domènec Ruiz-Balet (IC, London) and Dario Pighin (Sherpa.ai).
• 12:00–12:25, : An inequality on operators on polynomials.
Consider the Baouendi-Grushin equation $(\partial_t - \partial_x^2 - x^2\partial_y^2)g(t,x,y) = 0$ with Dirchlet boundary conditions on $\mathbb R \times [0,\pi]$. The functions $\exp(-nt + iny -
nx^2\!/2) \sin(ny)$ are solutions of this equation. Thus, functions of the form $P(\exp(-t+iy-x^2\!/2)$ where $P$ is a polynomial, are also solutions. That way, an observability inequality on the
Baouendi-Grushin implies an inequality on polynomials. But if we put Dirichlet boundary conditions on $x = \pm 1$, this link between solutions and polynomials is only approximate. It is still
true that the observability inequality implies an inequality on polynomials, but we require a new tool to prove it. In this talk, I will present this tool: an estimate on
"pseudo-differential-type" operators on polynomials. This estimate is proved essentially by a few elementary tools from complex analysis.
• 14:00–14:45, : Solution Concepts for Optimal Feedback Control of Nonlinear Partial Differential Equations.
Optimal feedback controls for nonlinear systems are characterized by the solutions to a Hamilton-Jacobi-Bellman (HJB) equation. In the deterministic case, this is a first order hyperbolic equation. Its dimension is that of the state space of the nonlinear system. Thus solving the HJB equation is a formidable task and one is confronted with a curse of dimensionality. In practice, optimal feedback controls are frequently based on linearisation and subsequent treatment by efficient Riccati solvers. This can be effective, but it is a local procedure, and it may fail or lead to
erroneous results. In this talk, I give a brief survey of current solution strategies to partially cope with this challenging problem. Subsequently I describe three approaches in some detail. The
first one is a data driven technique, which approximates the solution to the HJB equation and its gradient from an ensemble of open loop solves. The second one is based on Newton steps applied to
the HJB equation. Combined with tensor calculus this allows to approximately solve HJB equations up to dimension 100. Results are shown for the control of discretized Fokker Planck equations. The
third technique circumvents the direct solution of the HJB equation. Rather a neural network is trained by means of a succinctly chosen ansatz. It is proven that it approximates the optimal
feedback gains as the dimension of the network is increased. This work relies on collaborations with B.Azmi, S.Dolgov, D.Kalise, Vasquez, and D.Walter.
• 15:00–15:45, : Optimization with Learning-Informed Differential Equation Constraints.
Motivated by applications in the optimal control of phase separation and quantitative image processing, a class of optimization problems with data-driven differential equation (DE) constraints is
considered. In this context, the data-driven DE may arise from coupling ab initio components with machine-learned ones, with a subsequent application of a solution scheme, or from learning the DE
solver directly from data. For this setting and depending on the regularity of the activation functions of underlying deep neural networks, approximation results and stationarity conditions are
derived. Moreover, numerical tests finally validate the theoretical findings. Several numerical examples are provided and the efficiency of the algorithm is shown.
• 16:00–16:45, : Controllability, Observability and Stabilizability for systems in Banach spaces I.
For given Banach spaces $X,U$ we consider abstract control systems of the form $\dot{x}(t) = -A x(t) + Bu(t)$ for $t\in (0,T]$ with $x(0) = x_0\in X$, where $-A$ is the generator of a strongly
continuous semigroup on $X$ and $B\colon U\to X$ is a bounded linear operator. For such systems the question of null-controllability arises, i.e.\ whether for all initial conditions $x_0\in X$
there exists a control function $u\colon [0,T]\to U$ such that $x(T) = 0$. In this talk we will first review the classical duality result stating that this controllability question can be
answered by showing a so-called final-state observability estimate of the form $\|x'(T)\|\leq C_{\mathrm{obs}} \|y\|$, where $y\colon [0,T]\to Y:=U'$ is the observation function of the dual
system given by $\dot{x'}(t) = -A' x'(t)$ for $t\in (0,T]$ with $x'(0) = x_0'\in X'$, and $y(t) = B' x'(t)$ for $t\in (0,T]$. We then show sufficient conditions for obtaining such a final-state
observability estimate, with explicit dependence of $C_{\mathrm{obs}}$ on all parameters. Having established this abstract result we show an application to heat-like evolution equations and
comment on further generalisations, e.g.\ non-autonomous equations, weak versions of observability and controllability, as well as stabilizability. The talk is split into two parts, and both
parts are based on joint works together with Clemens Bombach, Michela Egidi, Fabian Gabel and Dennis Gallaun.
• 17:00–17:45, : Controllability, Observability and Stabilizability for systems in Banach spaces II.
For given Banach spaces $X,U$ we consider abstract control systems of the form $\dot{x}(t) = -A x(t) + Bu(t)$ for $t\in (0,T]$ with $x(0) = x_0\in X$, where $-A$ is the generator of a strongly
continuous semigroup on $X$ and $B\colon U\to X$ is a bounded linear operator. For such systems the question of null-controllability arises, i.e.\ whether for all initial conditions $x_0\in X$
there exists a control function $u\colon [0,T]\to U$ such that $x(T) = 0$. In this talk we will first review the classical duality result stating that this controllability question can be
answered by showing a so-called final-state observability estimate of the form $\|x'(T)\|\leq C_{\mathrm{obs}} \|y\|$, where $y\colon [0,T]\to Y:=U'$ is the observation function of the dual
system given by $\dot{x'}(t) = -A' x'(t)$ for $t\in (0,T]$ with $x'(0) = x_0'\in X'$, and $y(t) = B' x'(t)$ for $t\in (0,T]$. We then show sufficient conditions for obtaining such a final-state
observability estimate, with explicit dependence of $C_{\mathrm{obs}}$ on all parameters. Having established this abstract result we show an application to heat-like evolution equations and
comment on further generalisations, e.g.\ non-autonomous equations, weak versions of observability and controllability, as well as stabilizability. The talk is split into two parts, and both
parts are based on joint works together with Clemens Bombach, Michela Egidi, Fabian Gabel and Dennis Gallaun.
|
{"url":"https://wwwold.mathematik.tu-dortmund.de/lsix/events/WCP22/index.php","timestamp":"2024-11-04T05:25:36Z","content_type":"text/html","content_length":"57359","record_id":"<urn:uuid:160117c0-c580-4e10-872d-a919ff64b236>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00004.warc.gz"}
|
On 2012-06-24 18:34:13 +0000, Steven Watanabe said:
> On 06/24/2012 08:13 AM, Dave Abrahams wrote:
>> Very interesting. Does either approach support operator overloading?
> Yes. I put quite a bit of effort into making
> sure that operator overloads would work
> correctly.
I started the work on operator overloading, as you might guess when
seeing the file
include/poly/operators.hpp (names in this file will definitely change)
I didn't get so far that poly::interface would actually recognize these
callables and create the corresponding operators for itself, but that
sure is possible.
More importantly, I'm still kind of missing the point of why the
feature of operator overloading is really needed. In essence: How
should binary operators behave? Steven's TypeErasure defines binary
operators such that only when the wrapped types of a and b are the
same, can you sum them up: "a + b".
But where would you use this kind of an "any", where only some of the
instances can be added up, and others cause undefined behavior?
Wouldn't you actually do the addition on the side where you know the
types, and not on the type-erased side?
Do we really have a real world use case that would prompt us to
implement features like operator overloading?
* * *
Before releasing more worms from the can of type-erased operators, I
must confess that I know still too little about the possible uses for
type erasure / expression problem / what you name it. What I propose is
we should look into how e.g. Haskell and Clojure programmers use their
type classes and protocols.
For one thing, I could only think of few examples where the interface
would have mutating functions. (Anybody care to throw in more examples?)
More typical use cases (that I could think of) are functions which read
the wrapped type (as const reference), and then either (1) return a
value, or (2) cause a side effect:
std::string(to_html_, self const &); // (1) "pure" function
void(print_, self const &, std::ostream &); // (2) side effect
Maybe if the whole interface is about wrapping some sort of computation
or side effect, it might make sense to have some non-const functions
using progress = interface<
std::size_t(total_progress_, self const &),
std::size_t(current_progress_, self const &),
void(run_for_, self &, std::chrono::microseconds)>;
Or maybe it's modeling a kind of a container and you can insert items into it:
using container = interface<
void(insert_, self &, std::size_t, content),
void(remove_, self &, std::size_t),
content &(at_, self &, std::size_t),
content const &(at_, self const &)>;
Pyry Jahkola
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
|
{"url":"https://lists.boost.org/Archives/boost/2012/06/194272.php","timestamp":"2024-11-07T23:09:29Z","content_type":"text/html","content_length":"14199","record_id":"<urn:uuid:afbc08ca-ec35-4f94-8e00-da4ca36be810>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00531.warc.gz"}
|
dpca {freqdom} R Documentation
Compute Dynamic Principal Components and Dynamic Karhunen-Loève Expansion
Dynamic principal component analysis (DPCA) decomposes multivariate time series into uncorrelated components. Compared to classical principal components, DPCA decomposition outputs components which
are uncorrelated in time, allowing simpler modeling of the processes and maximizing long run variance of the projection.
dpca(X, q = 30, freq = (-1000:1000/1000) * pi, Ndpc = dim(X)[2])
X a vector time series given as a (T\times d)-matix. Each row corresponds to a timepoint.
q window size for the kernel estimator, i.e. a positive integer.
freq a vector containing frequencies in [-\pi, \pi] on which the spectral density should be evaluated.
Ndpc is the number of principal component filters to compute as in dpca.filters
This convenience function applies the DPCA methodology and returns filters (dpca.filters), scores (dpca.scores), the spectral density (spectral.density), variances (dpca.var) and the Karhunen-Loève expansion (dpca.KLexpansion).
See the example for understanding usage, and help pages for details on individual functions.
A list containing
Hormann, S., Kidzinski, L., and Hallin, M. Dynamic functional principal components. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 77.2 (2015): 319-348.
Brillinger, D. Time Series (2001), SIAM, San Francisco.
Shumway, R., and Stoffer, D. Time series analysis and its applications: with R examples (2010), Springer Science & Business Media
X = rar(100,3)
# Compute DPCA with only one component
res.dpca = dpca(X, q = 5, Ndpc = 1)
# Compute PCA with only one component
res.pca = prcomp(X, center = TRUE)
res.pca$x[,-1] = 0
# Reconstruct the data
var.dpca = (1 - sum( (res.dpca$Xhat - X)**2 ) / sum(X**2))*100
var.pca = (1 - sum( (res.pca$x %*% t(res.pca$rotation) - X)**2 ) / sum(X**2))*100
cat("Variance explained by DPCA:\t",var.dpca,"%\n")
cat("Variance explained by PCA:\t",var.pca,"%\n")
version 2.0.5
|
{"url":"https://search.r-project.org/CRAN/refmans/freqdom/html/dpca.html","timestamp":"2024-11-04T07:39:31Z","content_type":"text/html","content_length":"5455","record_id":"<urn:uuid:85e57e88-33e0-4a9c-bd7e-dfe788b4ab49>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00225.warc.gz"}
|
Inequalities that Describe Patterns - Algebra | Socratic
Inequalities that Describe Patterns
Key Questions
• You need to follow the language properly.
Phrases such as 'at least' mean $\ge$ and phrases such as 'at most' or 'not more than' indicate $\le$.
'Less than' and 'more than' indicate $<$ and $>$ respectively.
Consider this problem-
You have gone to the market to buy gifts for more than 13 people. You have already bought 3 gifts. How many more do you need to buy?
See, the keyword here is 'more than'.
Let the remaining number of gifts be x. Therefore the required inequality is 3 + x > 13, i.e. x > 10.
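The same arithmetic can be checked mechanically; here is a small Python sketch (added for illustration — the helper name is made up, not part of the original answer):

```python
def needs_more_gifts(bought: int, people: int, extra: int) -> bool:
    """True when bought + extra gifts cover strictly more than `people` people."""
    return bought + extra > people

# 3 gifts already bought, "more than 13 people": the inequality is 3 + x > 13
smallest = next(n for n in range(100) if needs_more_gifts(3, 13, n))
print(smallest)  # 11 — the smallest whole number satisfying x > 10
```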
• Explanation:
An algebraic equality is when we have two statements and then say that they are equal. For instance:
$\frac{4}{2} = 2$ is an equality
$\frac{4}{2} = x$ is also an equality (and here we'd be looking for the value $x$)
An algebraic inequality is when there isn't a single value where both sides are equal. Instead, we look for a range of values that satisfy the statement. For instance:
$\frac{4}{2} < x$
We know that $x$ is any value greater than 2 (there's an infinite number of solutions).
• Definitely
Most of the time, an inequality has more than one or even infinity solutions. For example the inequality: $x > 3$.
The solutions of this inequality are "all numbers strictly greater than 3". There's an infinite amount of these numbers (e.g. 4, 5, 100, 100000, 6541564564654645 ...). The inequality has an infinite amount of solutions. The notation for this is:
$x \in \, ]3; +\infty[$ or $x \in (3; +\infty)$.
There will probably be more notations depending on which country you live in.
Parameterized 3D Heat Sink
This tutorial walks through the process of simulating a parameterized problem using Modulus. The neural networks in Modulus allow us to solve problems for multiple parameters/independent variables in
a single training. These parameters can be geometry variables, coefficients of a PDE or even boundary conditions. Once the training is complete, it is possible to run inference on several geometry/
physical parameter combinations as a post-processing step, without solving the forward problem again. You will see that such parameterization increases the computational cost only fractionally while
solving the entire desired design space.
To demonstrate this feature, this example solves the flow and heat over a 3-fin heat sink whose fin height, fin thickness, and fin length are variable, and then performs a design optimization to find the most optimal fin configuration. By the end of this tutorial, you will know how to convert any simulation into a parametric design study using Modulus's CSG module and neural network solver. In this tutorial, you will learn the following:
1. How to set up a parametric simulation in Modulus.
This tutorial is an extension of the Conjugate Heat Transfer tutorial, which discussed how to use Modulus for solving conjugate heat problems. It uses the same geometry setup and solves it in a parameterized form at an increased Reynolds number. It is therefore recommended that you refer to the Conjugate Heat Transfer tutorial for any additional details on the geometry specification and boundary conditions.
The same scripts used in the Conjugate Heat Transfer example will be used here. To make the simulation parameterized and turbulent, set the custom flags parameterized and turbulent to true in the config files.
In this tutorial the focus will be on parameterization which is independent of the physics being solved and can be applied to any class of problems covered in the User Guide.
Please refer to the geometry and boundary conditions for the 3-fin heat sink in the Conjugate Heat Transfer tutorial. We will parameterize this problem to solve for several heat sink designs in a single neural network training, modifying the heat sink's fin dimensions (thickness, length and height) to create a design space of various heat sinks. The Re for this case is now 500, and turbulence is incorporated using the Zero Equation turbulence model.
For this problem, you will vary the height (\(h\)), length (\(l\)), and thickness (\(t\)) of the central fin and the two side fins. The height, length, and thickness of the two side fins are kept the
same, and therefore, there will be a total of six geometry parameters. The ranges of variation for these geometry parameters are given in equation (208).
(208)\[\begin{split}\begin{split} h_{central\,fin} &= (0.0, 0.6),\\ h_{side\,fins} &= (0.0, 0.6),\\ l_{central\,fin} &= (0.5, 1.0), \\ l_{side\,fins} &= (0.5, 1.0), \\ t_{central\,fin} &= (0.05, 0.15), \\ t_{side\,fins} &= (0.05, 0.15) \end{split}\end{split}\]
Fig. 146 Examples of some of the 3 Fin geometries covered in the chosen design space
In this tutorial, you will use the 3D geometry module from Modulus to create the parameterized 3-fin heat sink geometry. Discrete parameterization can sometimes lead to discontinuities in the solution, making the training harder; hence this tutorial only covers parameters that are continuous. You will train the parameterized model and validate it by performing inference on a case where \(h_{central\,fin}=0.4\), \(h_{side\,fins}=0.4\), \(l_{central\,fin}=1.0\), \(l_{side\,fins}=1.0\), \(t_{central\,fin}=0.1\), and \(t_{side\,fins}=0.1\). At the end of the tutorial, the results for this combination of parameters obtained from the parameterized model are compared against those from a non-parameterized model trained on the single corresponding geometry. This highlights the usefulness of PINNs for parameterized simulations in comparison to some traditional methods.
Since the majority of the problem definition and setup was covered in Conjugate Heat Transfer, this tutorial will focus only on important elements for the parameterization.
The parameters chosen as variables act as additional inputs to the neural network; the outputs remain the same. Since the variables in this example are geometric only, no change needs to be made to how the equation nodes are defined (except for the addition of the turbulence model). In cases where the coefficients of a PDE are parameterized, the corresponding coefficient needs to be defined symbolically (i.e. using a string) in the equation node.
Note that for this example, the viscosity is set as a string in the NavierStokes constructor for the purposes of the turbulence model. The ZeroEquation equation node provides 'nu' as an output node, which acts as an input to the momentum equations in the Navier-Stokes equations.
The code for this parameterized problem is shown below. Note that parameterized and turbulent are set to true in the config file.
Parameterized flow network:
# make navier stokes equations
if cfg.custom.turbulent:
    ze = ZeroEquation(nu=0.002, dim=3, time=False, max_distance=0.5)
    ns = NavierStokes(nu=ze.equations["nu"], rho=1.0, dim=3, time=False)
    navier_stokes_nodes = ns.make_nodes() + ze.make_nodes()
else:
    ns = NavierStokes(nu=0.01, rho=1.0, dim=3, time=False)
    navier_stokes_nodes = ns.make_nodes()
normal_dot_vel = NormalDotVec()

# make network arch
if cfg.custom.parameterized:
    input_keys = [
        Key("x"), Key("y"), Key("z"),
        Key("fin_height_m"), Key("fin_height_s"),
        Key("fin_length_m"), Key("fin_length_s"),
        Key("fin_thickness_m"), Key("fin_thickness_s"),
    ]
else:
    input_keys = [Key("x"), Key("y"), Key("z")]
flow_net = FullyConnectedArch(
    input_keys=input_keys, output_keys=[Key("u"), Key("v"), Key("w"), Key("p")]
)

# make list of nodes to unroll graph on
flow_nodes = (
    navier_stokes_nodes
    + normal_dot_vel.make_nodes()
    + [flow_net.make_node(name="flow_network")]
)
geo = ThreeFin(parameterized=cfg.custom.parameterized)
# params for simulation
# fluid params
inlet_vel = 1.0
volumetric_flow = 1.0
Parameterized heat network:
# make thermal equations
ad = AdvectionDiffusion(T="theta_f", rho=1.0, D=0.02, dim=3, time=False)
dif = Diffusion(T="theta_s", D=0.0625, dim=3, time=False)
dif_interface = DiffusionInterface("theta_f", "theta_s", 1.0, 5.0, dim=3, time=False)
f_grad = GradNormal("theta_f", dim=3, time=False)
s_grad = GradNormal("theta_s", dim=3, time=False)

# make network arch
if cfg.custom.parameterized:
    input_keys = [
        Key("x"), Key("y"), Key("z"),
        Key("fin_height_m"), Key("fin_height_s"),
        Key("fin_length_m"), Key("fin_length_s"),
        Key("fin_thickness_m"), Key("fin_thickness_s"),
    ]
else:
    input_keys = [Key("x"), Key("y"), Key("z")]
flow_net = FullyConnectedArch(
    input_keys=input_keys,
    output_keys=[Key("u"), Key("v"), Key("w"), Key("p")],
)
thermal_f_net = FullyConnectedArch(
    input_keys=input_keys, output_keys=[Key("theta_f")]
)
thermal_s_net = FullyConnectedArch(
    input_keys=input_keys, output_keys=[Key("theta_s")]
)

# make list of nodes to unroll graph on
thermal_nodes = (
    ad.make_nodes()
    + dif.make_nodes()
    + dif_interface.make_nodes()
    + f_grad.make_nodes()
    + s_grad.make_nodes()
    + [flow_net.make_node(name="flow_network", optimize=False)]
    + [thermal_f_net.make_node(name="thermal_f_network")]
    + [thermal_s_net.make_node(name="thermal_s_network")]
)
This section is again very similar to the Conjugate Heat Transfer tutorial. The only difference is that the input to the parameterization argument is now a dictionary whose keys are strings for each design variable and whose values are tuples of floats/ints specifying the range of variation for those variables.
The code to set up these dictionaries for parameterized inputs and constraints can be found below.
Setting the parameter ranges (three_fin_geometry.py)
# parametric variation
fin_height_m, fin_height_s = Symbol("fin_height_m"), Symbol("fin_height_s")
fin_length_m, fin_length_s = Symbol("fin_length_m"), Symbol("fin_length_s")
fin_thickness_m, fin_thickness_s = Symbol("fin_thickness_m"), Symbol("fin_thickness_s")
height_m_range = (0.0, 0.6)
height_s_range = (0.0, 0.6)
length_m_range = (0.5, 1.0)
length_s_range = (0.5, 1.0)
thickness_m_range = (0.05, 0.15)
thickness_s_range = (0.05, 0.15)
param_ranges = {
    fin_height_m: height_m_range,
    fin_height_s: height_s_range,
    fin_length_m: length_m_range,
    fin_length_s: length_s_range,
    fin_thickness_m: thickness_m_range,
    fin_thickness_s: thickness_s_range,
}
fixed_param_ranges = {
    fin_height_m: 0.4,
    fin_height_s: 0.4,
    fin_length_m: 1.0,
    fin_length_s: 1.0,
    fin_thickness_m: 0.1,
    fin_thickness_s: 0.1,
}

# set param ranges
if parameterized:
    pr = Parameterization(param_ranges)
    self.pr = param_ranges
else:
    pr = Parameterization(fixed_param_ranges)
    self.pr = fixed_param_ranges
# channel
self.channel = Channel(
    channel_origin,  # first corner point (restored from context)
    (
        channel_origin[0] + channel_dim[0],
        channel_origin[1] + channel_dim[1],
        channel_origin[2] + channel_dim[2],
    ),
    parameterization=pr,
)
# three fin heat sink
heat_sink_base = Box(
    heat_sink_base_origin,  # base of heat sink
    (
        heat_sink_base_origin[0] + heat_sink_base_dim[0],
        heat_sink_base_origin[1] + heat_sink_base_dim[1],
        heat_sink_base_origin[2] + heat_sink_base_dim[2],
    ),
    parameterization=pr,
)
Setting the parameterization argument in the constraints: only a few BCs from the flow domain are shown here as examples, but the same settings are applied to all the other BCs.
# inlet
u_profile = inlet_vel * tanh((0.5 - Abs(y)) / 0.02) * tanh((0.5 - Abs(z)) / 0.02)
constraint_inlet = PointwiseBoundaryConstraint(
    # ... nodes, geometry, and batch size arguments elided in this excerpt ...
    outvar={"u": u_profile, "v": 0, "w": 0},
    criteria=Eq(x, channel_origin[0]),
    lambda_weighting={
        "u": 1.0,
        "v": 1.0,
        "w": 1.0,
    },  # weight zero on edges
    parameterization=geo.pr,
    batch_per_epoch=5000,
)
flow_domain.add_constraint(constraint_inlet, "inlet")

integral_continuity = IntegralBoundaryConstraint(
    # ... nodes, geometry, and batch size arguments elided in this excerpt ...
    outvar={"normal_dot_vel": volumetric_flow},
    lambda_weighting={"normal_dot_vel": 1.0},
    parameterization={**geo.pr, **{x_pos: (-1.1, 0.1)}},
    fixed_dataset=False,
)
flow_domain.add_constraint(integral_continuity, "integral_continuity")
This part is exactly the same as in the Conjugate Heat Transfer tutorial; once all the definitions are complete, you can execute the parameterized problem like any other problem.
As discussed previously, you can optimize the design once the training is complete as a post-processing step. A typical design optimization usually contains an objective function that is minimized/
maximized subject to some physical/design constraints.
For heat sink designs, usually the peak temperature that can be reached at the source chip is limited. This limit arises from the operating temperature requirements of the chip on which the heat sink
is mounted for cooling purposes. The design is then constrained by the maximum pressure drop that can be successfully provided by the cooling system that pushes the flow around the heat sink.
Mathematically this can be expressed as below:
Table 7 Optimization problem
Variable/Function Description
minimize \(Peak \text{ } Temperature\) Minimize the peak temperature at the source chip
with respect to \(h_{central fin}, h_{side fins}, l_{central fin}, l_{side fins}, t_{central fin}, t_{side fins}\) Geometric Design variables of the heat sink
subject to \(Pressure \text{ } drop < 2.5\) Limit on the pressure drop (max pressure drop that can be provided by the cooling system)
Such optimization problems can be easily handled in Modulus once you have a trained, parameterized model. While solving the parameterized simulation, you created monitors to track the peak temperature and the pressure drop for a particular combination of design variables. You follow the same process here, using the PointwiseMonitor constructor to find the values for multiple combinations of the design variables, which you can create simply by looping through the designs. Since these monitors can cover a large number of design variable combinations, it is recommended to use them only after the training is complete, for better computational efficiency. To do this, once the models are trained, you can run the flow and thermal models in 'eval' mode by specifying 'run_mode=eval' in the config files.
After the models are run in 'eval' mode, the pressure drop and peak temperature values are saved in the form of a .csv file. One can then write a simple script to sift through the various samples and pick the most optimal ones, i.e. those that minimize/maximize the objective function while meeting the required constraints (for this example, the design with the least peak temperature among those with pressure drop < 2.5):
NOTE: run three_fin_flow and Three_fin_thermal in "eval" mode
after training to get the monitor values for different designs.
# import Modulus library
from modulus.utils.io.csv_rw import dict_to_csv
from modulus.hydra import to_absolute_path
# import other libraries
import numpy as np
import os, sys
import csv
# specify the design optimization requirements
max_pressure_drop = 2.5
num_design = 10
path_flow = to_absolute_path("outputs/three_fin_flow")
path_thermal = to_absolute_path("outputs/three_fin_thermal")
invar_mapping = [
    "fin_height_m",
    "fin_height_s",
    "fin_length_m",
    "fin_length_s",
    "fin_thickness_m",
    "fin_thickness_s",
]
outvar_mapping = ["pressure_drop", "peak_temp"]
# read the monitor files, and perform a design space search
def DesignOpt(
    path_flow,
    path_thermal,
    num_design,
    max_pressure_drop,
    invar_mapping,
    outvar_mapping,
):
    path_flow += "/monitors"
    path_thermal += "/monitors"
    directory = os.path.join(os.getcwd(), path_flow)
    values, configs = [], []
    for _, _, files in os.walk(directory):
        for file in files:
            if file.startswith("back_pressure") and file.endswith(".csv"):
                value = []
                # read back pressure
                with open(os.path.join(path_flow, file), "r") as datafile:
                    data = []
                    reader = csv.reader(datafile, delimiter=",")
                    for row in reader:
                        columns = [row[1]]
                        data.append(columns)
                    last_row = float(data[-1][0])
                    value.append(last_row)
                # read front pressure
                with open(
                    os.path.join(path_flow, "front_pressure" + file[13:]), "r"
                ) as datafile:
                    reader = csv.reader(datafile, delimiter=",")
                    data = []
                    for row in reader:
                        columns = [row[1]]
                        data.append(columns)
                    last_row = float(data[-1][0])
                    value.append(last_row)
                # read temperature
                with open(
                    os.path.join(path_thermal, "peak_temp" + file[13:]), "r"
                ) as datafile:
                    data = []
                    reader = csv.reader(datafile, delimiter=",")
                    for row in reader:
                        columns = [row[1]]
                        data.append(columns)
                    last_row = float(data[-1][0])
                    value.append(last_row)
                values.append(value)
                configs.append(file[14:-4])
    # perform the design optimization
    values = np.array(
        [
            [values[i][1] - values[i][0], values[i][2] * 273.15]
            for i in range(len(values))
        ]
    )
    indices = np.where(values[:, 0] < max_pressure_drop)[0]
    values = values[indices]
    configs = [configs[i] for i in indices]
    opt_design_index = values[:, 1].argsort()[0:num_design]
    opt_design_values = values[opt_design_index]
    opt_design_configs = [configs[i] for i in opt_design_index]
    # Save to a csv file
    # NOTE: file-name parsing reconstructed here; adjust to the actual
    # naming convention of the monitor files if it differs.
    opt_design_configs = np.array(
        [
            np.array(opt_design_configs[i].split("_")).astype(float)
            for i in range(num_design)
        ]
    )
    opt_design_configs_dict = {
        key: value.reshape(-1, 1)
        for (key, value) in zip(invar_mapping, opt_design_configs.T)
    }
    opt_design_values_dict = {
        key: value.reshape(-1, 1)
        for (key, value) in zip(outvar_mapping, opt_design_values.T)
    }
    opt_design = {**opt_design_configs_dict, **opt_design_values_dict}
    dict_to_csv(opt_design, "optimal_design")
    print("Finished design optimization!")

if __name__ == "__main__":
    DesignOpt(
        path_flow,
        path_thermal,
        num_design,
        max_pressure_drop,
        invar_mapping,
        outvar_mapping,
    )
The design parameters for the optimal heat sink for this problem are: \(h_{central\,fin} = 0.4\), \(h_{side\,fins} = 0.4\), \(l_{central\,fin} = 0.83\), \(l_{side\,fins} = 1.0\), \(t_{central\,fin} = 0.15\), \(t_{side\,fins} = 0.15\). This design has a pressure drop of 2.46 and a peak temperature of 76.23 \(^{\circ} C\) (Fig. 147).
Fig. 147 Three Fin geometry after optimization
Table 8 presents the computed pressure drop and peak temperature for the OpenFOAM single-geometry run and the Modulus single-geometry and parameterized runs. The results of the parameterized model are close to those of the single-geometry model, showing its good accuracy.
Table 8 A comparison of the OpenFOAM and Modulus results

Property                            OpenFOAM Single Run   Modulus Single Run   Modulus Parameterized Run
Pressure Drop \((Pa)\)              2.195                 2.063                2.016
Peak Temperature \((^{\circ} C)\)   72.68                 76.10                77.41
By parameterizing the geometry, Modulus significantly accelerates design optimization compared to traditional solvers, which are limited to single-geometry simulations. For instance, sampling 3 values (the two end values of the range and a middle value) per design variable would result in \(3^6 = 729\) single-geometry runs. The total compute time required by OpenFOAM for this design optimization would be 4099 hrs. (on 20 processors). Modulus can achieve the same design optimization at ~17x lower computational cost. A larger number of design variables, or more values per variable, would only magnify the difference in the time taken by the two approaches.
The Modulus calculations were done using 4 NVIDIA V100 GPUs. The OpenFOAM calculations were done using 20 processors.
Fig. 148 Streamlines colored with pressure and temperature profile in the fluid for optimal three fin geometry
Here, the 3-fin heat sink was solved for arbitrary heat properties, chosen such that the coupled conjugate heat transfer solution was possible. However, such an approach causes issues when the conductivities are orders of magnitude different at the interface. We will revisit the conjugate heat transfer problem in the tutorials Heat Transfer with High Thermal Conductivity and Industrial Heat Sink to see some advanced tricks/schemes that one can use to handle the issues that arise in neural network training when real material properties are involved.
Array API specification for broadcasting semantics.
Broadcasting refers to the automatic (implicit) expansion of array dimensions to be of equal sizes without copying array data for the purpose of making arrays with different shapes have compatible
shapes for element-wise operations.
Broadcasting facilitates user ergonomics by encouraging users to avoid unnecessary copying of array data and can potentially enable more memory-efficient element-wise operations through
vectorization, reduced memory consumption, and cache locality.
Given an element-wise operation involving two compatible arrays, an array having a singleton dimension (i.e., a dimension whose size is one) is broadcast (i.e., virtually repeated) across an array
having a corresponding non-singleton dimension.
If two arrays are of unequal rank, the array having a lower rank is promoted to a higher rank by (virtually) prepending singleton dimensions until the number of dimensions matches that of the array
having a higher rank.
The results of the element-wise operation must be stored in an array having a shape determined by the following algorithm.
1. Let A and B both be arrays.
2. Let shape1 be a tuple describing the shape of array A.
3. Let shape2 be a tuple describing the shape of array B.
4. Let N1 be the number of dimensions of array A (i.e., the result of len(shape1)).
5. Let N2 be the number of dimensions of array B (i.e., the result of len(shape2)).
6. Let N be the maximum value of N1 and N2 (i.e., the result of max(N1, N2)).
7. Let shape be a temporary list of length N for storing the shape of the result array.
8. Let i be N-1.
9. Repeat, while i >= 0
1. Let n1 be N1 - N + i.
2. If n1 >= 0, let d1 be the size of dimension n1 for array A (i.e., the result of shape1[n1]); else, let d1 be 1.
3. Let n2 be N2 - N + i.
4. If n2 >= 0, let d2 be the size of dimension n2 for array B (i.e., the result of shape2[n2]); else, let d2 be 1.
5. If d1 == 1, then set the ith element of shape to d2.
6. Else, if d2 == 1, then
☆ set the ith element of shape to d1.
7. Else, if d1 == d2, then
☆ set the ith element of shape to d1.
8. Else, throw an exception.
9. Set i to i-1.
10. Let tuple(shape) be the shape of the result array.
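The ten steps above translate almost directly into code. Here is a small Python sketch of the algorithm (the function name is ours, not part of the specification):

```python
def broadcast_shapes(shape1, shape2):
    """Result shape of broadcasting two shapes, following the algorithm above."""
    n1, n2 = len(shape1), len(shape2)
    n = max(n1, n2)
    shape = [0] * n
    for i in range(n - 1, -1, -1):
        # dimension sizes, with missing leading dimensions treated as size 1
        j1, j2 = n1 - n + i, n2 - n + i
        d1 = shape1[j1] if j1 >= 0 else 1
        d2 = shape2[j2] if j2 >= 0 else 1
        if d1 == 1:
            shape[i] = d2
        elif d2 == 1 or d1 == d2:
            shape[i] = d1
        else:
            raise ValueError(f"shapes {shape1} and {shape2} do not broadcast")
    return tuple(shape)

print(broadcast_shapes((8, 1, 6, 1), (7, 1, 5)))  # (8, 7, 6, 5)
```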
The following examples demonstrate the application of the broadcasting algorithm for two compatible arrays.
A (4d array): 8 x 1 x 6 x 1
B (3d array): 7 x 1 x 5
Result (4d array): 8 x 7 x 6 x 5
A (2d array): 5 x 4
B (1d array): 1
Result (2d array): 5 x 4
A (2d array): 5 x 4
B (1d array): 4
Result (2d array): 5 x 4
A (3d array): 15 x 3 x 5
B (3d array): 15 x 1 x 5
Result (3d array): 15 x 3 x 5
A (3d array): 15 x 3 x 5
B (2d array): 3 x 5
Result (3d array): 15 x 3 x 5
A (3d array): 15 x 3 x 5
B (2d array): 3 x 1
Result (3d array): 15 x 3 x 5
The following examples demonstrate array shapes which do not broadcast.
A (1d array): 3
B (1d array): 4 # dimension does not match
A (2d array): 2 x 1
B (3d array): 8 x 4 x 3 # second dimension does not match
A (3d array): 15 x 3 x 5
B (2d array): 15 x 3 # singleton dimensions can only be prepended, not appended
In-place Semantics
As implied by the broadcasting algorithm, in-place element-wise operations (including __setitem__) must not change the shape of the in-place array as a result of broadcasting. Such operations should
only be supported in the case where the right-hand operand can broadcast to the shape of the left-hand operand, after any indexing operations are performed.
For example:
x = empty((2, 3, 4))
a = empty((1, 3, 4))
# This is OK. The shape of a, (1, 3, 4), can broadcast to the shape of x[...], (2, 3, 4)
x[...] = a
# This is not allowed. The shape of a, (1, 3, 4), can NOT broadcast to the shape of x[1, ...], (3, 4)
x[1, ...] = a
3 Card Shuffle
Integers from 1 through 100 are written on a hundred cards. The deck is properly shuffled, and 3 cards are drawn at random.
What is the probability that the cards are drawn in increasing order?
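By symmetry, each of the \(3! = 6\) orderings of three distinct cards is equally likely, so the probability is \(1/6\). A quick Monte Carlo sketch (an illustration added here, not part of the original problem page) agrees:

```python
import random

def estimate_increasing(trials: int = 200_000, seed: int = 0) -> float:
    """Estimate P(three cards drawn from 1..100 come out in increasing order)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a, b, c = rng.sample(range(1, 101), 3)  # 3 distinct cards, in draw order
        hits += a < b < c
    return hits / trials

print(estimate_increasing())  # close to 1/6 ≈ 0.1667
```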
Global Dimensions of Math (Opinion)
Every curricular area has a global dimension that can be taught in the classroom, including math. This is often the area that is most difficult for people to grasp as being global, but math helps us
understand the world—and we use the world to understand math. The world is interconnected and math shows these connections and possibilities. The earlier young learners can put these skills to
practice, the more likely we are to remain an innovation society and economy.
The body of knowledge and practice known as mathematics is derived from the contributions of thinkers throughout the ages and across the globe. It gives us a way to understand patterns, to quantify
relationships, and to predict the future.
Algebra can explain how quickly water becomes contaminated and how many people in a third-world country drinking that water might become ill on a yearly basis. A study of geometry can explain the
science behind architecture throughout the world. Statistics and probability can estimate death tolls from earthquakes, conflicts and other calamities around the world. It can also predict profits,
how ideas spread, and how previously endangered animals might repopulate.
For students to function in a global context, math content needs to help them get to global competence, which is understanding different perspectives and world conditions, recognizing that issues are
interconnected across the globe, as well as communicating and acting in appropriate ways. In math, this means reconsidering the typical content in atypical ways, and showing students how the world
consists of situations, events, and phenomena that can be sorted out using the right math tools.
Any global contexts used in math should add to an understanding of the math, as well as the world. To do that, teachers should stay focused on teaching good, sound, rigorous, and appropriate math
content and use global examples that work. For instance, learners will find little relevance in solving a word problem in Europe using kilometers instead of miles when instruments already convert the
numbers easily. It doesn’t contribute to a complex understanding of the world.
Math is often studied as a pure science, but is typically applied to other disciplines, extending well beyond physics and engineering. For instance, studying exponential growth and decay (the rate at
which things grow and die) within the context of population growth, the spread of disease, or water contamination, is meaningful. It not only gives students a real-world context in which to use the
math, but helps them understand global phenomena—they may hear about a disease spreading in India, but can’t make the connection without understanding how fast something like cholera can spread in a
dense population. In fact, adding a study of growth and decay to lower level algebra—it’s most often found in algebra II—may give more students a chance to study it in the global context than if it’s
reserved for the upper level math that not all students take.
In a similar vein, a study of statistics and probability is key to understanding many of the events of the world, and is usually reserved for students at a higher level of math, if it gets any study
in high school at all. But many world events and phenomena are unpredictable and can only be described using statistical models, so a globally focused math program needs to consider including
statistics. Probability and statistics can be used to estimate death tolls from natural disasters, such as earthquakes and tsunamis; the amount of aid that might be necessary to help in the
aftermath; and the number of people who would be displaced.
Understanding the world also means appreciating the contributions of other cultures. In algebra, students could benefit from studying numbers systems that are rooted in other cultures, such the Mayan
and Babylonian systems, a base 20 and base 60 system, respectively. They gave us elements that still work in current math systems, such as the 360 degrees in a circle, and the division of the hour
into 60 minute intervals. Including this type of content can help develop an appreciation for the contributions other cultures have made to our understanding of math.
It’s important, though, to only include examples that are relevant to the math and help students make sense of the world. In geometry, for example, Islamic tessellations—shapes arranged in an
artistic pattern—might be used as a context to develop, explore, teach, and reinforce the important geometric understandings of symmetry and transformations. Students might study the different types
of polygons that can be used to tessellate the plane (cover the space without any holes or overlapping) and even how Islamic artists approached their art. Here, the content and the context contribute
to an understanding of the other.
If students are given the right content and context for a globally infused math curriculum, they’ll be able to make global connections using math, and create a math model that reflects the complexity
and interrelatedness of global situations and events. They’ll be able to apply math strategies to solve problems and develop and explain the use of a given math concept in the global sense. And
they’ll be able to use the right math tools in the right situations, and explain why a math model they chose is relevant. More importantly, students will be able to use data to draw defensible
conclusions, and use mathematical knowledge and skills to make real-life impact.
By the time a student graduates high school, he or she should be able to use mathematical tools and procedures to explore problems and opportunities in the world, and use mathematical models to make
and defend conclusions and actions.
The examples here are just a sampling of how it could be done, and they can be used to launch content-focused conversations for math teachers. These aren’t meant to be separate courses of study,
either, but overlapping and interrelated elements that schools will have to decide to use in ways that meet their individual needs.
At the heart of any discussion on a global curriculum through math, it’s important to consider how the math helps students make sense of the world, what in a student’s experience enables them to use
the math to make contributions to the global community, and what math content students need to solve complex problems in a complex world. Then, the challenge is finding genuine, relevant, and
significant examples of global or cultural contexts that enhance, deepen, and illustrate an understanding of the math.
The global era will demand these skills of its citizens—the education system should provide students the wherewithal to be proficient in them.
See Asia Society’s rubrics and performance outcomes for mathematics to get started.
Follow Asia Society on Twitter.
Hilbert's paradox of the grand hotel, or How Hilbert created a paradox
For those of you who are already familiar with Hilbert’s paradox and want to move straight to the solution and skip my sarcastic nonsense, please click here.
Today I read about Hilbert’s paradox of the Grand Hotel (shame, only today) and I was struck by the illogical human thinking.
That’s right!
Paradoxes are nothing but illogical thinking.
Hilbert’s paradox goes like that:
Consider a hypothetical hotel with a countably infinite number of rooms, all of which are occupied. One might be tempted to think that the hotel would not be able to accommodate any newly
arriving guests, as would be the case with a finite number of rooms.[Read the rest in Wikipedia]
When for some stupid reason we create infinite sets, we create them from objects with the same properties. Right?
We can have an infinite set of apples, or an infinite set of shoes, or an infinite set of shoe pairs.
What do we have in Hilbert’s case?
We have an infinite set of hotel rooms, all of which are occupied by guests.
What we actually have in Hilbert’s paradox is an infinite set of room/guest pairs.
But according to Hilbert’s choice of words, we have hotel with a countably infinite number of rooms, all of which are occupied.
(If the mathematicians knew the importance of the word choice, they would know how difficult it would be to create a paradox.)
Hilbert’s Paradox
Now we have to accommodate one guest in that hotel.
It appears that if we choose to think of the occupied room as room + guest, we can move the guests up one room and vacate room №1 for the new guest.
„Hold on! – one would say. – All the rooms are occupied! How do you move the guests up one room?“
Well, apparently Hilbert has a good explanation: since the rooms are infinite, there is no last room to block the guests' movement up one room. So, it is possible, although it contradicts the idea that all rooms are occupied.
Speaking of „all“, don't you think that this word is inappropriate when it comes to infinity? „All“ implies a number, but infinity is not a real number.
„Not a real number“ is the first step for creating a paradox.
How many unicorns can we accommodate in a fully occupied unreal number?
The answer could be any real and unreal number, and it would be the right answer, because you can adjust the unreal parameters in any way convenient for you.
But isn’t there any way to make sense of this and put it in a logical explanation?
Can we refute the illogical statement that infinite number of occupied rooms can accommodate one and even infinite number of guests?
Sure, there is nothing easier than refuting a paradox.
The logic for a limited set of numbers tells us that we cannot create a pair in a row of room/guest pairs by adding only a guest in that row.
Simply said, we cannot accommodate a guest when all the rooms are occupied.
As Hilbert tells us, that's not the case with the infinite number of occupied rooms.
Then let’s change Hilbert’s choice of words!
Here is how our change would look:
1. empty room = bucket of water
2. guest = 1 kg of cement
3. occupied room = concrete
Let’s accommodate the cement in the bucket of water in the same way we accommodate the guests in the empty rooms.
Now we have an infinite number of concrete buckets.
What happened here?
We paired a with b and the result is c (pair)
To make it even easier, you can use an infinite number of 10 L buckets filled with 10 L of water, and try to add 10 L of water from a plastic bag to the infinite bucket/water pairs.
And something else for those who work with numbers:
Infinite number of pairs still means 50% room items and 50% guest items.
Now add 1 item to one of the sides and the percentage changes, making the room items under 50%, which leaves no room for the 1 item added in the infinity of the 50% guests. So much for the infinity,
which when put in percentage cannot take even one more item.
Still not convinced?
Look at the stars. They are only 4% of the universe. We have 96% empty space and only 4% matter. Imagine that there are 50% matter and 50% space in the infinitely large universe. Do you think that it
would stay 50/50 if we put 1% more matter in it?
You see now? Infinity is not infinity when measured in percentage.
You didn’t think of it, eh?
Imagine that we fill all that empty space in the universe with matter. It would be infinity of matter. But after we filled up the universe with matter, can we put one more electron in it?
Ahaaa! Because there is no such thing as INFINITY.
Now, back to Hilbert’s paradox.
It would be a different story if we had infinite number of rooms and infinite numbers of guests waiting to be accommodated. Before you accommodate the guests you can bring a whole lot of infinite
guests, and there will still be enough rooms for them, but once we put them in the rooms, we change the property of the infinity to infinite number of pairs.
(I wish Bertrand Arthur William Russell knew that too.)
Now, you cannot put a single guest in that hotel.
It doesn’t work that way, you say?
Oh, I see, you prefer unicorns.
So, where the paradox comes from?
It comes from the concrete, of course.
Hilbert wasn’t clever enough to see the real property of „occupied room“.
He saw it as 1+1. But it actually is 1.
Now you can tell me that the logic of mathematics allows you to do pairing and mapping and blah-blah, and all the blahs seem to be logical, but if there is a paradox, one or all of your blahs are wrong.
And by the way, mathematics is not logic, but a tool to prove logic. It is a way to frame your thinking in order to prevent nonsense slipping out of the cage.
Here is another way to solve Hilbert’s paradox which I posted in StackExchange
Three points to solve the paradox
The fully occupied infinite hotel is actually an infinite set of room/guest pairs. In a set of pairs the main property is the 50/50 percent ratio (the rooms equal the guests).
This condition sets up the three points below.
1. If we „vacate“ a room, we cannot use it for a new guest unless we break the infinity. How so? Because ∞+1=∞ rooms, and we already have an infinite number of guests on the other side of the set (the 50/50 ratio is still present). Hence the conclusion that although it seems that we have an empty room, we don't really have one, because the number on both sides is always equal: infinite.
2. Not taking into account the above, we actually make the number of hotel guests finite (if there is an empty room, it is because the guests are finite), which automatically makes the number of occupied rooms finite + 1 empty room. Now the number of rooms is greater by 1 than the number of guests and we can accommodate one guest. But in this case we don't work with infinity, do we?
3. We can ignore point 1 and use the obvious argument against point 2 (which would be wrong) that after we move the guests we end up with infinite occupied rooms and one empty room in the hotel (!) Well, here is the fallacy: the hotel is defined as a countably infinite number of rooms, all of which are occupied. Let's lose the “hotel” word, which is deceiving, and let's use “infinite set of pairs”. If we have one element which is not paired, does it belong to the set? Of course not. So, the “empty” room does not belong to the hotel because it is not a member of the set of pairs. And this is what I call the “language deception”. The word “hotel” brings the deception.
Let me explain it in a simpler way.
What happens if we move the guests down one room, instead of moving them up? The guest from room №1 leaves the room, and the room is taken by the guest from room №2, the guest from №3 goes into room №2, the guest from №4 goes into №3 and so on (it is the opposite of vacating a room). Now one guest is out of the hotel because he doesn't occupy a hotel room. Is this person part of the hotel guests? Of course not. So is the case with the empty room: it does not belong to the hotel. Instead of moving the guests up one room, let's move the rooms down one. It is essentially the same action but put in different wording. Now you can see that we actually take one room out of the hotel by moving the guests up one room.
If you still have objections on the matter, please read this short explanation I gave on Quora to stubborn-minded people like you.
To avoid paradoxes, we should follow two simple rules:
1. do not assign wrong properties
2. do not have missing properties
Here is a TED-Ed video lesson about the Hilbert’s Grand Hotel
15.02.2014 19:24
The paradox breaks in its initial definition. It says that Infinite amount of rooms are totally Full. Infinite suggests a set of numbers without an end. Full suggests a set of numbers WITH an end. So
those two words cannot actually exist together in one statement as they contradict each other and making a statement that includes both of them is like saying „The sky is on the ground in this sunlit
night“. Either the number of rooms is not infinite and they do have an end, or they are not all full 🙂 I don’t understand how that „paradox“… Прочети нататък »
15.02.2014 19:47
But then again looking at the definition of the word paradox i think that there is nothing wrong with this one as well 🙂
a seemingly absurd or contradictory statement or proposition which when investigated may prove to be well founded or true.
„the uncertainty principle leads to all sorts of paradoxes, like the particles being in two places at once“
Is it seemingly absurd and contradictory? Yes it is. Then it must be a paradox 🙂
|
{"url":"https://truden.truden.com/1509.html","timestamp":"2024-11-08T08:37:23Z","content_type":"text/html","content_length":"230086","record_id":"<urn:uuid:f44078c4-b313-4428-9f84-35f09e487c44>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00810.warc.gz"}
|
Question 2 [11 Marks]
We would like to conduct an analysis of variance at the 5% level of significance to compare the
mean delivery time for five different branches of a restaurant in the city.
We take a random sample of 7 measurements from the branch in St. Vital, 11
measurements from St. Boniface, 9 measurements from Transcona, 10 measurements from
Fort Rouge, and 8 measurements from Fort Garry.
We will assume for this test that delivery times for each branch follow a Normal distribution.
The ANOVA table (with some values missing) is given below:
Source of Variation | Sum of Squares | df | Mean Square | F

Part (f) [2 marks]
The mean delivery times for Transcona and St. Vital, as well as the standard deviations, are
given below:
St. Vital
Calculate a 95% confidence interval for the true mean delivery time for Transcona. Also,
provide an interpretation of the confidence interval.
Do this work separately on paper.
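As a cross-check for part (f), here is a minimal sketch of the usual post-ANOVA interval, assuming the standard approach of a pooled MSE with the error degrees of freedom. The actual sample mean and MSE must be read from the figures; the numbers below are placeholders.

```python
from math import sqrt

def ci_for_mean(xbar, mse, n, t_crit):
    """CI for one group mean using the pooled MSE from the ANOVA table."""
    half = t_crit * sqrt(mse / n)
    return (xbar - half, xbar + half)

# Error df here: N - k = (7+11+9+10+8) - 5 = 40, so t_(0.025, 40) ~ 2.021.
# xbar and mse are placeholders; the real values are in the figures.
low, high = ci_for_mean(xbar=30.0, mse=16.0, n=9, t_crit=2.021)
print(round(low, 2), round(high, 2))  # -> 27.31 32.69
```

The interval is interpreted as: we are 95% confident the true mean delivery time for the branch lies between the two endpoints.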
Part (g) [2 marks]
Use the information above to calculate a 95% confidence interval for the difference in the
mean delivery times for Transcona and St. Vital (μTranscona − μSt. Vital). Also, provide an
interpretation of the confidence interval.
Do this work separately on paper.

Part (a) [1 mark]
What are the hypotheses for the appropriate test of significance?
Do this work separately on paper.
Part (b) [3 marks]
Find all missing values in the table above. Show your calculations and display the final
complete ANOVA table.
Do this work separately on paper.
Part (c) [1 mark]
What is the P-value for the appropriate test of significance?
Do this work separately on paper.
Part (d) [1 mark]
Provide a fully-worded conclusion for this test.
Do this work separately on paper.
Part (e) [2 marks]
Suppose you had used the critical value method to conduct the test. What would be the
decision rule and the conclusion?
Do this work separately on paper.
Fig: 1
Fig: 2
Fig: 3
|
{"url":"https://tutorbin.com/questions-and-answers/question-2-11-marks-we-would-to-conduct-an-analysis-of-variance-at-the-5-level-of-significance-to-compare-the-mean","timestamp":"2024-11-02T21:01:41Z","content_type":"text/html","content_length":"74313","record_id":"<urn:uuid:f56bc94d-e612-4dfb-b74f-fda4321eb212>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00564.warc.gz"}
|
5 Ancient Indian mathematicians whom you don't know much about
Ancient India had one of the most diverse knowledge systems in the world at that time, if not the most. It had a long tradition of astronomy and mathematics. The guru-shishya parampara produced some of the most exceptional intellectuals the world had ever seen.
In this post, we will learn about a few of them and what made them stand apart. Please bear in mind that these are just a few of the most famous ones. We'll go into complete detail on their works in future blog posts.
Baudhayana (बौधायन)
Baudhayana's Sulbha sutras are one among many Sulbha sutras, but the most famous. Sulbha is a Sanskrit word that means "rope". Baudhayana published at least 6 sutras, including the Dharmasutra and the Sulbhasutra. The Sulbasutra is a work on geometry.
The Pythagorean theorem, as stated in the Sulbasutra centuries before Pythagoras was born (if he was ever a real person):
दीर्घचतुरश्रस्याक्ष्णया रज्जु: पार्श्र्वमानी तिर्यग् मानी च यत् पृथग् भूते कुरूतस्तदुभयं करोति ॥
A rope stretched along the length of the diagonal produces an area which the vertical and horizontal sides make together.
The value of the square root of 2 and circling the square are among the other things mentioned in this work of Baudhayana.
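For the square root of 2, the Sulbasutra's commonly cited approximation can be checked in a few lines. The formula itself is taken from general references on the Sulbasutras, not from this article, so treat it as an assumption here:

```python
# Commonly cited Sulbasutra approximation (assumed, not quoted in this post):
# sqrt(2) ~ 1 + 1/3 + 1/(3*4) - 1/(3*4*34)
approx = 1 + 1/3 + 1/(3*4) - 1/(3*4*34)
print(round(approx, 7))    # -> 1.4142157
print(round(2 ** 0.5, 7))  # -> 1.4142136
```

The approximation agrees with the true value to about five decimal places, remarkable for a rope-and-altar geometry text.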
Brahmagupta

Brahmagupta of Bhinmal was the first to give rules to compute with zero. He composed many treatises; Brāhmasphuṭasiddhānta (ब्रह्मस्फुटसिद्धांता) was the most famous among them. This was the first book to give rules on summing positive, negative and zero quantities. A few of them are:
• The sum of two positive quantities is positive
• The sum of two negative quantities is negative
• The sum of zero and a negative number is negative
• The sum of zero and a positive number is positive
• The sum of zero and zero is zero
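The sum rules above are easy to spot-check in code:

```python
# Brahmagupta's sum rules from the list above, checked on sample values:
assert 3 + 5 > 0          # positive + positive is positive
assert (-3) + (-5) < 0    # negative + negative is negative
assert 0 + (-5) < 0       # zero + negative is negative
assert 0 + 5 > 0          # zero + positive is positive
assert 0 + 0 == 0         # zero + zero is zero
print("all five sum rules hold")
```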
Podometic™ Bharatiya Mathematics, developed by Jonathan J. Crabtree, an Australian historian of mathematics, follows this very book. Podometic™ is arithmetic freely updated for how the laws of physics work. Brahmagupta's work reached Europe through Al-Khwarizmi. Leonardo Pisano, a.k.a. Fibonacci, translated Al-Khwarizmi's work, but none of them understood the concept of zero or negative numbers. Mistakes made at that time are still taught in mathematics as fact, e.g. that -1 is less than 0.
Vrah Mihira
He was one of the nine jewels of King Vikramaditya. Brihat Samhita is the most famous work composed by Vrah Mihira, in which he deals with architecture, temples, planetary motions, eclipses, timekeeping, astrology, seasons, cloud formation, rainfall, agriculture, mathematics, gemology, perfumes and many other topics. According to Vrah Mihira himself, he was merely summarizing much earlier existing literature.
Vrah Mihira improved on the sine table of Aryabhat.
Acharya Bhaskar I
He was the first to use the place value system and the current symbol for zero. His most famous works include the Aryabhatiya, Mahabhaskariya and Laghubhāskarīya. He worked on the sine table and the relation between the sine and cosine functions.
Acharya Bhaskar II
Acharya Bhaskar's most noted work is Siddhanta Shiromani (सिद्धांतशिरोमणी). His works show the influence of Brahmagupta, Sridhara, Mahavira, Padmanabha and others. His work Lilavati (his daughter's name) is the other famous one, in which he teaches math to his daughter through dialogue.
|
{"url":"https://vediconcepts.org/ancient-indian-mathematicians/","timestamp":"2024-11-07T10:24:44Z","content_type":"text/html","content_length":"95943","record_id":"<urn:uuid:735e97e1-36c2-4f85-8b9a-cf9c36e0a39f>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00494.warc.gz"}
|
How to use a mapping set in defining an equation
I have a mapping between products and the loading port from which they are sourced
Set LPort_Prod_Mapping(LPort,Product) /LPort1.A, LPort2.A, LPort3.B, LPort4.C/;
Now I have a variable X such that X is a function of Lport and Dport (destination port) ie X(Lport,Dport)
I now want to write an equation which is a function of Product and DPort instead of Lport and Dport
What I tried is something like this for which I am getting an error.
Eqn_flow(Product,DPort)$(LPort_Prod_Mapping(LPort,Product)).. X(Lport,Dport) =e= constant(Product,DPort);

The error points to $(LPort_Prod_Mapping(LPort,Product)) and says "Uncontrolled set entered as constant"!
Where am I making a mistake? I believe the mapping between Lport and Product should help me do what I am trying to do. Can anyone please help?
To view this discussion on the web visit https://groups.google.com/d/msg/gamsworld/-/ZGpATHZukKgJ.
To post to this group, send email to gamsworld@googlegroups.com.
To unsubscribe from this group, send email to gamsworld+unsubscribe@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/gamsworld?hl=en.
Hi Ranjand
Your equation is defined over the sets Product and DPort. In the $-condition you use the sets LPort and Product, which causes a problem in GAMS, as the sets used there should appear in the equation definition.
Why don't you write the equation over LPort, Product, DPort? Then you can keep the mapping like you want to. The dollar-sign condition just keeps the combinations where there is a mapping.
Eqn_flow(Lport,Product,DPort)$(LPort_Prod_Mapping(LPort,Product)).. X(Lport,Dport) =e= constant(Product,DPort);
Thanks Renger,
My apologies I should have mentioned what the constant is on the RHS.
I cannot have LPort in this because the right hand side is actually sum(Plants, Q(Product,DPort,Plants)). So I am essentially trying to do the following:
LHS: For a given Product-DPort combination calculate the total volume of the product coming from the various LPorts (Load Ports)
RHS: For the same Product-DPort combination above, the above calculated quantity now goes into different Plants.
That is why I cannot have Load port in the equation.
This is what I tried - which runs but does not give me the right answer
Sum((LPort)$(Lport_Prod_Map(LPort,Products)), X(LPort,DPort)=e=Sum((Plants),Y(Products,DPort,Plants))
and I defined the following set
Lport_Prod_Map(LPort,Products) /LPort1.A, LPort2.A, LPort3.B, LPort4.C/
Somehow the GAMS file runs fine, but when I see the final outputs, the total quantity of products A and B is not conserved, in the sense that if I know that there are x1 tons of A, x2 tons of B and x3 tons of C, I get the following output:
Total amount of A, B, C = x1 + x2 + x3
amount of A = x'1, amount of B = x'2 and amount of C = x3, and x1 + x2 = x'1 + x'2. Not really sure what the problem is.
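Since the thread ends without a resolution, here is a hedged sketch of how the balance might be written so that the =e= sits outside the Sum. The names Y, Plants and Lport_Prod_Map are taken from the posts above; whether this matches the intended model is an assumption.

```gams
* Sketch, not a verified fix: for each product/destination, total flow over
* the load ports mapped to that product balances the total flow into plants.
Eqn_flow(Products,DPort)..
    Sum(LPort$Lport_Prod_Map(LPort,Products), X(LPort,DPort))
    =e= Sum(Plants, Y(Products,DPort,Plants));
```

One thing worth checking for the conservation issue: X(LPort,DPort) carries no Product index, so if the solver is otherwise free to choose how much leaves each load port, the split between products sharing a balance can shift even while the grand total is conserved.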
|
{"url":"https://forum.gams.com/t/how-to-use-a-mapping-set-in-defining-an-equation/708","timestamp":"2024-11-03T21:37:18Z","content_type":"text/html","content_length":"27873","record_id":"<urn:uuid:488f6303-aac6-4e41-adef-2a6751848308>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00060.warc.gz"}
|
What is static electricity?
Everyone has experienced static electricity. Examples include seeing a spark in the mirror while combing your hair, or touching a door knob after walking on a rug in winter. The spark you see is static electricity "discharging". So why is it called static electricity? It's called "static" because the charges remain separated in one area rather than moving or "flowing" to another area, as is the case with electricity flowing in a wire (called current electricity).
It has been known since the ancient Greeks that things could be given a static electric "charge" (a buildup of static) simply by rubbing them, but they had no idea that the same energy could be used to generate light or power machines. It was Benjamin Franklin who helped bring electricity to the forefront. He believed electricity could be harnessed from lightning.
What exactly is static electricity?
Static electricity is basically an imbalance of electric charges within or on the surface of a material. The charge remains until it is "discharged". A static electric charge can be created whenever two surfaces contact and separate, and at least one of the surfaces has a high resistance to electric current (and is therefore an electrical insulator). The familiar spark of a static shock is, more specifically, an electrostatic discharge caused by the neutralization of charge.
Where is that charge coming from?
We know that all objects are made up of atoms and atoms are composed of protons, electrons and neutrons. The protons are positively charged, the electrons are negatively charged, and the neutrons are
neutral. Therefore, all things are made up of charges. Opposite charges attract each other (negative to positive). Like charges repel each other (positive to positive or negative to negative). Most
of the time, positive and negative charges are balanced in an object, which makes that object neutral, as is the case with molecules.
Static electricity is the result of an imbalance between negative and positive charges in an object. These charges can build up on the surface of an object until they find a way to be released or
discharged. Rubbing certain materials against one another can transfer negative charges, or electrons. For example, if you rub your shoe on the carpet, your body collects extra electrons from the
rug. The electrons cling to your body until they can be released as the case when you touch a metal door handle.
"... The phenomenon of static electricity requires a separation of positive and negative charges. When two materials are in contact, electrons may move from one material to the other, which leaves an
excess of positive charge on one material, and an equal negative charge on the other. When the materials are separated they retain this charge imbalance..."
Why does your hair stand up when removing your hat?
As you remove your hat, electrons are transferred from hat to hair. So why does your hair stand up? Objects with the same charge repel each other. As the hairs gain more electrons, they all carry the same charge, and your hair stands on end. Your hairs are simply trying to get as far away from each other as possible!
What is the Triboelectric Effect?
The triboelectric effect is a type of contact electrification in which certain materials become electrically charged after coming into contact with another different material, and are then separated.
Most everyday static electricity is triboelectric. The polarity and strength of the charges produced differ according to the materials, surface roughness, temperature, strain, and other properties.
The triboelectric effect is now considered to be related to the phenomenon of adhesion, where two materials composed of different molecules tend to stick together because of attraction between the
different molecules. Chemical adhesion occurs when the surface atoms of two separate surfaces form ionic, covalent, or hydrogen bonds under these conditions there is an exchange of electrons between
the different types of molecules, resulting in an electrostatic attraction between the molecules that holds them together.
Depending on the triboelectric properties of the materials, one material may "capture" some of the electrons from the other material. If the two materials are now separated from each other, a charge
imbalance will occur.
Examples from the triboelectric series of materials that tend to give up electrons (become positively charged):
MOST POSITIVE - dry human skin > leather > rabbit fur > glass > hair > nylon > wool > lead > silk > aluminum > paper - LEAST POSITIVE
Examples from the triboelectric series of materials that tend to gain electrons (become negatively charged):
MOST NEGATIVE - teflon > silicon > PVC > scotch tape > saran wrap > styrofoam > polyester > gold > nickel > rubber - LEAST NEGATIVE
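A toy way to use the two lists above: merge them into one ordering and treat distance in that ordering as a rough proxy for how strong a charge transfer to expect. The ordering is illustrative only, not measured data.

```python
# Illustrative merged triboelectric ordering, positive end to negative end.
series = [
    "dry human skin", "leather", "rabbit fur", "glass", "hair", "nylon",
    "wool", "lead", "silk", "aluminum", "paper",   # tend to give up electrons
    "rubber", "nickel", "gold", "polyester", "styrofoam", "saran wrap",
    "scotch tape", "PVC", "silicon", "teflon",     # tend to gain electrons
]

def separation(a, b):
    """Distance between two materials in this illustrative ordering."""
    return abs(series.index(a) - series.index(b))

# Materials at opposite ends produce the strongest expected discharge.
print(separation("dry human skin", "teflon"))  # -> 20
print(separation("gold", "nickel"))            # -> 1
```

This is also the logic behind quiz question 5 below: pairs far apart in the series discharge more strongly than near neighbors.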
How to Create Static Electricity using a Van de Graaf Generator
A Van de Graaff generator is an electrostatic generator which uses a moving belt to accumulate electric charge on a hollow metal globe on the top of an insulated column. This can create very high
electric potentials. It produces very high voltage direct current (DC) electricity at low current levels. It was invented by American physicist Robert J. Van de Graaff in 1929. (See Reference below
in Scientific American) The potential difference achieved in modern Van de Graaff generators can reach 5 megavolts. A tabletop version can produce on the order of 100,000 volts and can store enough
energy to produce a visible spark. Small Van de Graaff machines are produced for entertainment, and in physics classrooms to teach electrostatics.
Readings and References:
Possibilities of Electro-Static Generators - Nikola Tesla - Scientific American 1934
Triboelectric Charging of Common Objects
Materials that Cause Static Electricity
Test your Understanding:
1. What is the main reason for static electricity
a) Static electricity is the result of an excess of negative charges on an object
b) Static electricity is the result of an excess of positive charges on an object
c) Static electricity is the result of an imbalance between negative and positive charges in an object.
d) Static electricity is caused when objects of different materials come close to each other.
2. Which statement is not correct
a) in the triboelectric effect at least one of the surfaces has a high resistance to electric current (and is therefore an electrical insulator)
b) static electricity was discovered by Benjamin Franklin
c) lightning is a discharge of static electricity
d) Most everyday static electricity is triboelectric
3. What is Adhesion?
a) the force of attraction between water molecules
b) two materials composed of different molecules tend to stick together because of attraction between two different molecules
c) the force of attraction between any two similar molecules
d) hydrogen bonding between molecules
4. Hair stands up when you remove a wool hat because
a) Protons are transferred from hat to hair - like charges will repel
b) Electrons are transferred from hair to hat leaving an excess of protons in your hair
c) Electrons are transferred from hat to hair - like charges repel so hair fibers repel.
d) Neutrons are removed from both hat and hair leaving an excess of positive and negative charges.
5. According to the triboelectric effect, which two materials will give the greatest discharge when rubbed together?
a) dry skin and polyester
b) gold and nickel
c) teflon and saran wrap
|
{"url":"https://www.edinformatics.com/math_science/what-causes-static-electricity.html","timestamp":"2024-11-02T14:56:00Z","content_type":"text/html","content_length":"27009","record_id":"<urn:uuid:c6c65e5f-09e1-46ae-9632-bd85569a1ce5>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00712.warc.gz"}
|
Limit Calculator
Limit Calculator With Steps
Limit calculator helps you find the limit of a function with respect to a variable. This limits calculator is an online tool that assists you in calculating the value a function approaches as its input approaches some specific value.
Limit calculator with steps shows the step-by-step solution of limits along with a plot and series expansion. It employs all limit rules such as sum, product, quotient, and L'hopital's rule to
calculate the exact value.
You can evaluate limits with respect to \(\text{x, y, z, v, u, t}\) and \(w\) using this limits calculator.
That’s not it. By using this tool, you can also find,
1. Right-hand limit (+)
2. Left-hand limit (-)
3. Two-sided limit
How does the limit calculator work?
To evaluate the limit using this limit solver, follow the below steps.
• Enter the function in the given input box.
• Select the concerning variable.
• Enter the limit value.
• Choose the side of the limit. i.e., left, right, or two-sided.
• Hit the Calculate button for the result.
• You will find the answer below the tool.
• Use the Reset button to enter new values and the Keypad icon
What is a limit in Calculus?
The limit of a function is the value that f(x) gets closer to as x approaches some number. Limits can be used to define the derivatives, integrals, and continuity by finding the limit of a given
function. It is written as:
\(\lim _{x\to a}\:f\left(x\right)=L\)
If f is a real-valued function and a is a real number, then the above expression is read as,
the limit of f of x as x approaches a equals L.
How to find a limit? – With steps
Limits can be applied as numbers, constant values (π, G, k), infinity, etc. Let’s go through a few examples to learn how to calculate limits.
Example - Right-hand Limit
\(\lim _{x\to \:2^+}\frac{\left(x^2+2\right)}{\left(x-1\right)}\)
A right-hand limit means the limit of a function as it approaches from the right-hand side.
Step 1: Apply the limit x➜2 to the above function. Put the limit value in place of x.
\(\lim \:_{x\to 2^+}\frac{\left(x^2+2\right)}{\left(x-1\right)}\)
Step 2: Solve the equation to reach a result.
\(=\frac{\left(4+2\right)}{\left(2-1\right)} =\frac{6}{1} =6 \)
Step 3: Write the expression with its answer.
\(\lim \:_{x\to \:\:2^+}\frac{\left(x^2+2\right)}{\left(x-1\right)}=6\)
Example - Left-hand Limit
\(\lim _{x\to 3^-}\left(\frac{x^2-3x+4}{5-3x}\right)\)
A left-hand limit means the limit of a function as it approaches from the left-hand side.
Step 1: Place the limit value in the function.
\(\lim _{x\to 3^-}\left(\frac{x^2-3x+4}{5-3x}\right)\)
Step 2: Solve the equation further.
\(=\frac{\left(9-9+4\right)}{\left(5-9\right)} =\frac{4}{-4} =-1 \)
Step 3: Write down the function as written below.
\(\lim \:_{x\to \:3^-}\left(\frac{x^2-3x+4}{5-3x}\right)=-1\)
Example - Two-sided Limit
\( \lim _{x\to 5}\left(cos^3\left(x\right)\cdot sin\left(x\right)\right) \)
A two-sided limit exists if the limit coming from both directions (positive and negative) is the same. It is the same as a limit.
Step 1: Substitute the value of the limit in the function.
\(\lim _{x\to 5}\left(cos^3\left(x\right)\cdot sin\left(x\right)\right)\)
\(=cos^3\left(5\right)\cdot \:sin\left(5\right)\)
Step 2: Simplify the equation as we did in previous examples.
\( \lim _{x\to 5}\left(cos^3\left(x\right)\cdot sin\left(x\right)\right) \)
\( =cos^3\left(5\right)\:sin\left(5\right)\)
Step 3: The above equation can be considered as the final answer. However, if you want to solve it further, solve the trigonometric values in the equation.
\(\approx \frac{1141}{50000}\cdot \left(-\frac{23973}{25000}\right) \approx -\frac{10941}{500000} \)
\(\lim \:\:_{x\to \:\:5}\left(cos^3\left(x\right)\cdot \:\:sin\left(x\right)\right)\)
\(\approx -0.021882 \)
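These one-sided and two-sided limits can also be sanity-checked numerically by evaluating the function just off the limit point. A minimal Python sketch (the `limit_numeric` helper is our own illustration, not part of the calculator):

```python
import math

def limit_numeric(f, a, side="both", eps=1e-6):
    """Estimate lim_{x->a} f(x) by sampling the function just off a."""
    if side == "right":
        return f(a + eps)
    if side == "left":
        return f(a - eps)
    left, right = f(a - eps), f(a + eps)
    if abs(left - right) > 1e-3:
        raise ValueError("one-sided limits disagree; two-sided limit may not exist")
    return (left + right) / 2

r = limit_numeric(lambda x: (x**2 + 2) / (x - 1), 2, side="right")        # ~ 6
l = limit_numeric(lambda x: (x**2 - 3*x + 4) / (5 - 3*x), 3, side="left")  # ~ -1
t = limit_numeric(lambda x: math.cos(x)**3 * math.sin(x), 5)               # ~ -0.0219
```

This only estimates the limit; a computer algebra system such as SymPy (`sympy.limit` with `dir='+'`, `'-'`, or `'+-'`) computes it symbolically.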
Does sin x have a limit?
Sin x has no limit as x approaches infinity; the y-value keeps oscillating between 1 and −1.
What is the limit of e to infinity?
The limit of the constant e as x approaches infinity (∞) is e, since the limit of any constant is the constant itself.
What is the limit of e^x as x approaches 0?
The limit of e^x as x approaches 0 is 1.
What is the limit as x approaches the infinity of ln(x)?
The limit as x approaches the infinity of ln(x) is +∞. The limit of this natural log can be proved by reductio ad absurdum.
• If x > 1, then ln(x) > 0, so the limit must be positive.
• Since ln(x₂) − ln(x₁) = ln(x₂/x₁), if x₂ > x₁ the difference is positive, so ln(x) is always increasing.
• If lim x→∞ ln(x) = M ∈ ℝ, then ln(x) < M ⇒ x < e^M for all x; but x → ∞, so no such M exists in ℝ and the limit must be +∞.
• What is limit calculus? (Study.com)
• Limits: A graphical approach (Brightstorm calculus video)
|
{"url":"https://www.limitcalculator.online/","timestamp":"2024-11-09T03:42:23Z","content_type":"text/html","content_length":"90040","record_id":"<urn:uuid:f36fe818-f7a8-4c5b-b81c-e9e7ca076ae8>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00760.warc.gz"}
|
Chomsky generative grammar
Chomsky generative grammar $G$ is a system used for generating strings over some alphabet. It is defined by a 4-tuple:
$G = (N, \Sigma, P, S)$
1. $N$ is a finite set of non-terminal symbols. These symbols are never present in the strings of the language, but are used in the production rules to generate those strings.
2. $\Sigma$ is a finite set of terminal symbols. These symbols appear in the strings generated by the grammar and are distinct from the non-terminals, i.e., $N \cap \Sigma = \emptyset$. The symbols
in $\Sigma$ form the alphabet of the language generated by the grammar.
3. $P$ is a finite set of production rules. Each production rule is of the form: $\alpha \rightarrow \beta$ Where $\alpha$ is a string of symbols that contains at least one non-terminal (i.e., $\
alpha \in (N \cup \Sigma)^+$ and $\alpha$ contains at least one element from $N$), and $\beta$ is a string of symbols from $N \cup \Sigma$, i.e., $\beta \in (N \cup \Sigma)^*$.
4. $S$ is the start symbol, where $S \in N$. This is the symbol from which derivation or string generation starts.
A grammar specifies how to construct valid strings of the language. A string is considered part of the language if there exists some sequence of production rule applications (a derivation) that
starts with the start symbol $S$ and results in that string.
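As a toy illustration (our own example, not from the text above), take $G$ with $N = \{S\}$, $\Sigma = \{a, b\}$, $P = \{S \rightarrow aSb,\ S \rightarrow \epsilon\}$ and start symbol $S$; exhaustively applying the productions enumerates the language $\{a^n b^n\}$:

```python
from collections import deque

# Productions of the toy grammar: S -> aSb | epsilon
PRODUCTIONS = {"S": ["aSb", ""]}

def language(max_len):
    """BFS over sentential forms; collect terminal strings of length <= max_len."""
    words, queue, seen = set(), deque(["S"]), {"S"}
    while queue:
        form = queue.popleft()
        if "S" not in form:
            words.add(form)  # no non-terminals left: a string of the language
            continue
        for rhs in PRODUCTIONS["S"]:
            nxt = form.replace("S", rhs, 1)  # leftmost derivation step
            if len(nxt.replace("S", "")) <= max_len and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(words, key=len)

print(language(6))  # ['', 'ab', 'aabb', 'aaabbb']
```

This grammar is context-free (Type 2 in the hierarchy below) but its language is famously not regular.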
Chomsky hierarchy
The Chomsky hierarchy classifies grammars into four levels based on their generative power:
1. Type 0 - Recursively Enumerable (RE) Grammar
□ Production Rules: $\alpha \rightarrow \beta$ where $\alpha \in (N \cup \Sigma)^+$ contains at least one non-terminal and $\beta \in (N \cup \Sigma)^*$, with no further restrictions
□ Language: Recursively Enumerable Language
□ Recognized by: Turing Machine
2. Type 1 - Context-Sensitive Grammar (CSG)
□ Production Rules: $\alpha \rightarrow \beta$ where $\alpha, \beta \in (N \cup \Sigma)^+$ and $|\alpha| \leq |\beta|$
□ Language: Context-Sensitive Language (CSL)
□ Recognized by: Linear Bounded Automaton (LBA)
3. Type 2 - Context-Free Grammar (CFG)
□ Production Rules: $A \rightarrow \beta$ where $A \in N$ and $\beta \in (N \cup \Sigma)^*$
□ Language: Context-Free Language (CFL)
□ Recognized by: Pushdown Automaton (PDA)
4. Type 3 - Regular Grammar
□ Production Rules:
☆ $A \rightarrow aB$ or $A \rightarrow a$ or $A \rightarrow \epsilon$ for right-linear regular grammars.
☆ $A \rightarrow Ba$ or $A \rightarrow a$ or $A \rightarrow \epsilon$ for left-linear regular grammars.
□ Language: Regular Language
□ Recognized by: Finite Automaton (either Deterministic (DFA) or Nondeterministic (NFA))
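For the Type 3 case, a concrete (hypothetical) example: the right-linear grammar $S \rightarrow aB \mid \epsilon$, $B \rightarrow bS$ generates $(ab)^*$, and the corresponding finite automaton simply walks a transition table whose states double as the grammar's non-terminals:

```python
# Transition table of a DFA recognizing (ab)*.
DELTA = {("S", "a"): "B", ("B", "b"): "S"}

def accepts(word, start="S", accepting=("S",)):
    state = start
    for ch in word:
        state = DELTA.get((state, ch))
        if state is None:
            return False  # no transition defined: reject
    return state in accepting

print([w for w in ["", "ab", "abab", "aba", "ba"] if accepts(w)])  # ['', 'ab', 'abab']
```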
However, this is a classical hierarchy proposed by Chomsky in 1956 in his paper “Three Models for the Description of Language.” Since then, it has been extended to capture more nuances.
Extended Chomsky hierarchy
1. Linear grammar or Linear Context-Free grammar
□ Is a Context-Free Grammar that has at most one non-terminal in the right-hand side of each of its productions
□ Examples: $A \rightarrow aB$, $A \rightarrow Ba$, $A \rightarrow aBa$
2. Right-Linear grammar
□ Examples: $A \rightarrow aB$ or $A \rightarrow a$ or $A \rightarrow \epsilon$
3. Left-Linear grammar
□ Examples: $A \rightarrow Ba$ or $A \rightarrow a$ or $A \rightarrow \epsilon$
4. Deterministic Context Free
□ A language is deterministic context-free (DCF) iff it is accepted by a deterministic pushdown automaton (DPDA)
□ Example: “balanced” language
□ Closed under negation? Yes — deterministic context-free languages are closed under complement (unlike general CFLs)
5. (Non-deterministic) Context Free
□ A language is context-free (CF) iff it is accepted by a (non-deterministic) pushdown automaton (PDA)
□ Example: set of all palindromes $\{ ww^R | w \in \Sigma^* \}$
6. Indexed language
□ A language is indexed iff accepted by a nested stack automaton (NSA)
□ A language is indexed iff it is produced by an indexed grammar
7. Context-sensitive language
□ A language is context-sensitive iff it is accepted by a non-deterministic TM with linearly bounded memory (LBA)
□ A language is context-sensitive iff generated by a context-sensitive grammar
□ Closed under negation? Yes — context-sensitive languages are closed under complement (Immerman–Szelepcsényi theorem)
|
{"url":"https://parsing.stereobooster.com/grammar/","timestamp":"2024-11-09T15:31:24Z","content_type":"text/html","content_length":"79602","record_id":"<urn:uuid:e3a45e47-8aa9-4ab0-889d-a4d817145dcd>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00134.warc.gz"}
|
At what time between 6 o’clock and 7 o’clock do the hour hand and minute hand of the clock create an angle of 180°?
If the clock shows the time 12:05, then which of the following is the degree of the angle between hour hand and minute hand?
If a clock shows the time 12:45, What will be the time shown in the mirror image of the same clock?
At what time between 9 o’clock and 10 o’clock will the angle between the hour hand and minute hand of the clock be 105°?
Which of the statements given below is correct?
What will be the angle between the hour hand and minute hand of the clock at 5:25 pm?
At what time between 8 o’clock and 9 o’clock will the angle between the hour hand and minute hand of the clock be 175°?
If a clock shows the time 6:30, then what will be the time in the mirror image of the clock?
If a clock shows the time 4:20, then what will be the time in the water image of the same clock?
Which of the following times is correct for the clock to show a 0° angle between 10 PM and 11 PM?
A wrong clock which loses uniformly shows 3 minutes fast on Saturday at 5:00 am. If it shows 2 minutes slow on Tuesday at 3:00 am, then at what time during this period was the clock showing the correct time?
At what time between 1 o’clock and 2 o’clock do the hour hand and minute hand of a clock coincide?
Consider the following statements:
1. The hour hand and minute hand of the clock coincide at 12:36 PM in the clock
2. A 78° angle in the clock happens between 7:20 and 7:30
Which of the given statements is/are correct?
If a clock shows the time 9:45, then what will be the time in the water image of the same clock?
In which of the following situations, the hour hand and minute hand of a clock come opposite to each other?
A wrong clock which gains uniformly was showing 5 minutes slow on Tuesday at 3 pm. If the clock was showing 10 minutes faster at 9 pm on Wednesday, at what time the clock was showing correct time
during this period?
By looking in a mirror, it appears that it is 3:20 in the clock. What is the real time?
What will be the mirror image of 11:58 in the clock?
If the water image of a clock showing 10:15. The correct time is
What will be the water image of 4 hours 46 minutes?
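Most of the angle questions above reduce to one fact: the hour hand moves 0.5° per minute (30° per hour) while the minute hand moves 6° per minute. A quick sketch (`clock_angle` is our own helper name, not from the quiz):

```python
def clock_angle(hour, minute):
    """Smaller angle in degrees between the hands at hour:minute."""
    hour_hand = 30 * (hour % 12) + 0.5 * minute  # 30 deg/hour + 0.5 deg/minute
    minute_hand = 6 * minute                     # 6 deg/minute
    diff = abs(hour_hand - minute_hand)
    return min(diff, 360 - diff)

print(clock_angle(5, 25))   # 12.5 -> the 5:25 pm question above
print(clock_angle(12, 5))   # 27.5 -> the 12:05 question above
```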
|
{"url":"https://acsindiaias.com/quiz/outside/questions/187/Probable%20Question","timestamp":"2024-11-06T23:15:25Z","content_type":"text/html","content_length":"64106","record_id":"<urn:uuid:ccd4d1d9-06c5-49e5-869c-aad09743e184>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00396.warc.gz"}
|
Commanding the editor | MPS
Commanding the editor
When you are coding in MPS you will notice there are some differences between how you normally type code in text editors and how code is edited in MPS. In MPS you manipulate the AST directly as you
type your code through the projectional editor. The editor gives you an illusion of editing text, which, however, has its limits. So you are slightly limited in where you can place your caret and
what you can type on that position. As we believe, projectional editor brings huge benefits in many areas. It requires some getting used to, but once you learn a few tricks you'll leave your
plain-text-editor colleagues far behind in productivity and code quality. In general, only the items suggested by a completion menu can be entered. MPS can always decide, which elements are allowed
and which are disallowed at a certain position. Once the code you type is in the red color you know you're off the track.
Code completion (Ctrl+Space) will be your good friend, allowing you to quickly complete the statements you type. This is the default way to enter new things.
It is positioned with the current text cell and can be filtered further by typing ahead. Remember that CamelHumps are supported, so you only need to type the capital characters of long names and MPS
will guess the rest for you.
The completion menu can contain from one up to hundreds of items, sorted by groups. Hierarchical menus defined for it are flattened, so their leaf items are listed alongside the menu titles.
Open it with Ctrl+Space. Type to filter. Select an item with the arrow keys or by clicking it. Finish with Tab, Enter, or a double-click. Abort with Escape, by clicking outside the menu, or by switching to another application (for example Alt). To reduce the filtering, press the left arrow; to jump back to the beginning and remove all filtering, press Ctrl+Space a second time.
Frequently you can enhance or alter your code by means of predefined semi-automated procedures called Intentions. By pressing Alt+Enter MPS will show you a popup dialog with options applicable to
your code at the current position. Some intentions are only applicable to a selected code region, for example, to wrap code inside a try-catch block. These are called Surround With intentions and
once you select the desired block of code, press Ctrl+Alt+T to show the list of applicable intentions.
Whenever you need to see the definition of an element you are looking at, press Ctrl+B or Ctrl + mouse click to open up the element definition in the editor. To quickly navigate around editable
positions on the screen use Shift+Tab. Enter will typically insert a new element right after your current position and let you immediately edit it. The Insert key will do the same for a position
right before your current position.
When a piece of code is underlined in either red or yellow, indicating an error or a warning respectively, you can display a popup with the error message by hovering over it.
The Control/Cmd + Up/Down key combination allows you to increase/decrease block selection. It ensures you always select valid subtrees of the AST. The usual Shift + Arrow keys way of text-like selection is also possible.
To quickly find out the type of an element, press Alt+Shift+F10. Ctrl+Alt+X, P will open the selected element in the Node Explorer, allowing you to investigate the appropriate part of the AST. Alt+F7 will enable you to search for usages of a selected element. To quickly visualize the inheritance hierarchy of an element, use Ctrl+H.
The Inspector window opens after you press Alt+2. Some code and properties (for example editor styles, macros and so on) are shown and edited inside the Inspector window so it is advisable to keep
the window ready.
BaseLanguage postfix transformations
Postfix code transformations let you conveniently modify code with predefined keywords typed right behind the code that you have just typed. For example, the "if" postfix applied to an expression
wraps it with an if statement. BaseLanguage provides several postfix code transformations:
• if - wraps a boolean expression with an IfStatement, the expression becomes its condition.
• while - wraps a boolean expression with a WhileStatement, the expression becomes its condition.
• cast - wraps an expression with a CastExpression.
• paren - wraps an expression with parentheses.
• var - turns an expression into a variable declaration initialized with that expression.
• return - wraps an expression with a ReturnStatement.
• not - wraps a boolean expression with a NotExpression.
• for - wraps a collection-typed expression with a ForeachStatement that iterates over the values provided by the expression.
Most useful key shortcuts
Shortcut Action
Ctrl+Space Code completion
Ctrl+B Go To Definition
Alt+Enter Intentions
Tab Move to the next cell
Shift+Tab Move to the previous cell
Ctrl/Cmd + Up/Down Expand/Shrink the code selection
Shift + Arrow keys Select regions
Ctrl+F9 Compile project
Shift+F10 Run the current configuration
Alt+Shift+F10 Show the type of the expression under caret
Ctrl+Alt+X, P Open the expression under caret at the Node Explorer to inspect the appropriate node and its AST surroundings
Ctrl+H Show the structure (inheritance hierarchy)
Alt+Insert A generic contextual New command - will typically popup a menu with elements that can be created at the given location
Ctrl+Alt+T Surround with...
Ctrl+O Override methods
Ctrl+I Implement methods
Ctrl+/ Comment/uncomment the current node
Ctrl+Shift+/ Comment/uncomment with block comment (available in BaseLanguage only)
Ctrl+X Cut current line or selected block to buffer
Ctrl+C Copy current line or selected block to buffer
Ctrl+Alt+Shift+V Paste from buffer
Ctrl+Shift+V Paste from history (displays a popup dialog that lists all previously copied code blocks)
Ctrl+Z Undo
Ctrl+Shift+Z Re-do
Ctrl+D Duplicate current line or selected block
For more information about MPS keyboard shortcuts, refer to the Default Keymap Reference page, which is also available from the MPS Help menu.
Last modified: 11 February 2024
|
{"url":"https://www.jetbrains.com/help/mps/commanding-the-editor.html","timestamp":"2024-11-02T20:43:20Z","content_type":"text/html","content_length":"42878","record_id":"<urn:uuid:1d7732c4-0526-4a3d-a050-92e3b0ff773a>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00154.warc.gz"}
|
Gallivanting Glades
Problem G
Gallivanting Glades
Gunter lives in a small village on the outskirts of a magic forest. Every morning, after finishing his chores, he has a little bit of time left before the school bus arrives. Naturally, he likes to
use this time to go on walks in the magic forest. Gunter (and the school bus) are very precise, so he always has exactly the same amount of time for his walks. Each morning, he picks a route through
the forest that will bring him back to his house at precisely the time the school bus arrives. This makes him wonder—how many different routes through the forest could he take?
The magic forest consists of a number of magic glades connected by magic paths. However, in a magic forest things like “distance” and “time” do not have their usual meanings; each magic path takes
exactly one second to traverse in either direction, and going through a magic glade from one path to another takes no time at all. Also, one second inside the magic forest takes one femtosecond in
the real world, so Gunter may have quite a lot of time for his walk.
Stopping to rest in the magic forest can be dangerous (Gunter might fall asleep and never wake up), so the only valid routes are ones where Gunter keeps moving constantly. Other than that, there are
no restrictions on the routes Gunter can choose. For example, there is nothing wrong with traversing the same glade or path more than once, or with traversing a path and then immediately turning
around and traversing it in the opposite direction.
The first line of input contains two space-separated integers $N$ and $P$, where $N$ is the number of magic glades in the forest ($1 \leq N \leq 100$), and $P$ is the number of magic paths ($0 \leq P
\leq 10^5$).
Next follow $P$ lines describing the magic paths, where each line contains two space-separated integers giving the endpoints of the path (the glades are numbered from $0$ to $N-1$). Gunter’s house is
right next to glade $0$.
Finally, there is a line containing a single integer $T$ specifying the number of seconds in the magic forest Gunter has for his walk ($1 \leq T \leq 10^{18}$).
Output the number of distinct routes through the forest which start and end at glade $0$ and take exactly $T$ seconds for Gunter to traverse. Since this number may be very large, output it modulo $10^9 + 7$.
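Not an official solution, but the standard approach: routes of length $T$ that start and end at glade 0 are closed walks, counted by the $(0,0)$ entry of $A^T$ for the adjacency matrix $A$; with $T$ up to $10^{18}$, $A^T \bmod 10^9+7$ is computed by repeated squaring. A sketch:

```python
MOD = 10**9 + 7

def mat_mul(A, B):
    """Multiply two square matrices modulo MOD."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            if A[i][k]:
                a = A[i][k]
                for j in range(n):
                    C[i][j] = (C[i][j] + a * B[k][j]) % MOD
    return C

def count_routes(n, paths, t):
    """Closed walks of length t from glade 0: (A^t)[0][0] mod 10^9+7."""
    A = [[0] * n for _ in range(n)]
    for u, v in paths:
        A[u][v] += 1  # paths are undirected; parallel paths count separately
        A[v][u] += 1
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while t:          # binary exponentiation of A
        if t & 1:
            R = mat_mul(R, A)
        A = mat_mul(A, A)
        t >>= 1
    return R[0][0]

# Triangle 0-1-2: two closed walks of length 2 from glade 0 (0-1-0 and 0-2-0).
print(count_routes(3, [(0, 1), (1, 2), (0, 2)], 2))  # 2
```

With $N \leq 100$, each multiplication is $O(N^3)$ and there are $O(\log T)$ of them, which fits the stated bounds.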
Sample Inputs 1–3 and their corresponding Sample Outputs accompany the original problem statement.
|
{"url":"https://ccsc19.kattis.com/contests/ccsc19/problems/ccsc19.gallivantingglades","timestamp":"2024-11-12T10:55:46Z","content_type":"text/html","content_length":"29597","record_id":"<urn:uuid:51a03f06-7517-466c-bb23-d0ab12d37079>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00428.warc.gz"}
|
Does Chat GPT Think, Is AI Research Physics, And Are We In A Universe Based On Voxels?
The answers to the title questions are No, Yes, and why not. What I have been doing instead of blogging lately has mostly been AI training as a freelancer for a company called Outlier.AI. I can't say
what we work on or who we work for. Name a company or other organization known to have an interest in AI research and Outlier has likely provided services for them.
It is interesting and engaging work that pays better than teaching actual human beings.
As one who works to train models that can be the next models or products you may hear about from various companies and organizations these questions hit differently. As I type this a precursor of
LLM's corrects spelling and grammar and tries to predict the next word. No one considers this to be unusual at all. Yet I can recall a time in the 1990's when typing in MS word and using spell checks
was considered academically dishonest cheating. Good spelling of ordinary useful words was considered a valuable skill, like being good at arithmetic. Now spell check is a fact of life. LLM's will at
a minimum be like that. We'll all get used to giving a rough draft to a GPT and getting back something polished. We'll all feed our CV and a job description to an LLM and have it spit out a cover
letter that connects our achievements to their specified needs... all so that an LLM on that end can decide if we match.
What Chat GPT "Thinks" of itself.
Amongst all of this: do LLMs think? From the "mouth" of Chat GPT themself. I asked it "Sure let me ask you does ChatGPT4o reason? IF I asked you to tell me the distance between the points (-5,7) and
(7,14) why would you tell me what you do? I know you'd use the distance formula because it would give you the right answer. How do you know what you know, compared to how a human does? "
Great question! When you ask me, ChatGPT-4, to calculate the distance between two points, I "know" how to respond based on patterns in the data I was trained on, not through conscious reasoning
like a human. Here’s how I arrive at an answer and why it might seem like reasoning, even though it’s different from human thought.
1. How I Calculate the Distance Between Points:
When you ask me to find the distance between the points (-5,7) and (7,14), I recognize this as a problem that can be solved using the distance formula:
\(d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}\)
I’ve been trained on a large dataset, which includes examples of this type of problem and their correct solutions. So I "know" that when you mention two points, the next appropriate step is to
apply this formula. I am essentially pattern-matching your query to similar problems in my training data and predicting the most statistically likely correct answer based on how I’ve seen
problems like this handled.
2. What’s Happening Behind the Scenes:
Unlike a human, I don’t “understand” the problem the way you do. I don’t visualize the points, conceptualize geometry, or experience an "aha" moment when I use the distance formula. What I do is:
□ Recognize a pattern: You’ve given me two points, which fits the pattern of a distance problem.
□ Retrieve the solution method: From my training, I associate this pattern with the distance formula, which is just the most statistically probable next step.
□ Calculate: I perform the necessary arithmetic, plug values into the formula, and give you the correct answer.
3. Human Reasoning vs. My Pattern Matching:
A human goes through a different process when solving this problem. Here's what that might look like:
□ Understanding the problem: A human knows what points on a plane represent and understands the concept of distance intuitively.
□ Recalling the formula: A human retrieves the distance formula from memory, knowing it was derived from the Pythagorean theorem (a deep conceptual connection that I don’t grasp).
□ Visualizing: Humans may visualize the points and imagine the distance, reinforcing the reasoning process.
□ Applying logic: Humans reason through each step, knowing why the formula works, based on their understanding of Euclidean geometry.
In contrast, I don’t "reason" through the problem like this. My responses are based purely on learned statistical associations. I don’t have a mental model of the problem or the physical world;
I’m just very good at recognizing the patterns that produce correct answers based on past data.
4. Is This Reasoning?
Reasoning involves logical steps to reach conclusions, often requiring understanding and interpretation of principles or relationships. When I generate an answer, it may appear like reasoning
because I provide accurate results, but fundamentally, I am predicting the next best word, number, or formula based on patterns in my training data.
To draw a parallel:
□ Human reasoning: You reason by understanding the principles and concepts, such as the Pythagorean theorem’s connection to distance in a Cartesian plane.
□ My pattern matching: I "know" to use the distance formula not because I understand geometry, but because it’s statistically the most likely way to respond when two points are mentioned in
this format.
Here is a public link to the whole conversation
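For what it's worth, the arithmetic in that exchange checks out; in Python:

```python
import math

def distance(p, q):
    """Euclidean distance between two points in the plane."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

d = distance((-5, 7), (7, 14))
print(d)  # sqrt(12**2 + 7**2) = sqrt(193) ≈ 13.8924
```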
I feel this illustrates a good valid use for an LLM. A brainstorming tool. It would probably also make a good thorough literature review easier, now that these sort of tools have access to the open
internet for searching.
The Nobel Prize.
This brings me to the issue of the Nobel Prize in Physics for 2024 going to very foundational AI research. This research went into the physical and statistical work that would lead to neural
networks. I am ok with this idea. The real problem with AI work as it is done now is the hardware is not very efficient. Someday someone will get a Nobel for figuring out how to run AI models on
something which uses much less power and generates much less heat. That too will be physics.
Chat GPTs Variation of the Simulation Hypothesis.
Last but not least, something to think about. I asked Chat GPT to propose a theory of everything based on the premise that it "knows" all published physics data. The concept it came up with sounds like what an LLM that has likely been used to create code for computer simulations would come up with: Space-Time Voxels. I had it write up its idea in LaTeX and put it on Overleaf. It is curious.
The idea is that at the Planck scale, whatever exists down there effectively acts like a tiny computer that computes its own state, like a little Turing machine driven by the states of its nearest neighbors. The laws of physics we know and find are then either a large-scale emergent approximation of what goes on in these things or, sometimes, the "code" by which these "voxels" operate. In a way, to a computer, everything is a computer. I don't think the "simulation" hypothesis is true, but it's getting harder and harder to argue against.
What do I think of this theory? The main problem is that there has to be a physical ... thing doing the calculations.
It is that thing that concerns us. Any theory that makes the whole universe discrete needs to propose and explain what this most fundamental thing is. As near as I can say, perhaps there is just a generalized energy field that is in various states, and those different states interact in ways that we call the fundamental forces. All just one smooth, continuous "thing", in different states.
|
{"url":"https://www.science20.com/hontas_farmer/does_chat_gpt_think_is_ai_research_physics_and_are_we_in_a_universe_based_on_voxels-257194","timestamp":"2024-11-07T12:47:44Z","content_type":"text/html","content_length":"43050","record_id":"<urn:uuid:f153558d-0226-4bf8-9d98-85bddf61a22e>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00817.warc.gz"}
|
Linear algebra homework help
A car heading south and an identical car heading north at the same speed have opposite momenta. But if instead, you wrote "All of the students should have received an email about spring break
protocols," see how much clearer the message is? If you are still contemplating getting linear algebra assignment helpbecause you are sure about the topics we cover, it's time to remove your doubts
once and for all. It's a linear algebra homework, I'd appreciate it if you could upload the answers to python as I can copy and paste them. Further, you can check our sample repository to evaluate
the linear algebra assignment samples to verify the quality and accuracy of the assignments. Students aspiring to getlinear algebra assignment help can directly avail of the necessary support by
following three quick steps. Additionally, the experts work closely with the students explaining them all the steps of writing assignments which enhance their subject knowledge. Here at
Statanalytica, we are helping the students to improve their performance in the class.
The Ultimate Guide to Writing Impressive Academic Essays- Linear algebra homework help
In addition to the essays, Johns Hopkins University gives the most weight to these supplemental essays for Johns Hopkins are examples. Full, updated list of the University of Chicago Pritzker
secondary essay prompts and helpful tips on how to answer secondary essays. Although content is the most important aspect of a nursing school entrance essay, you will want to pay some attention to
your spelling, punctuation, and grammar too. The tutors will also help you understand the primary factors so that you develop confidence in the subject and can solve the equations on your own. This
time, we challenge you to write a captivating story about a day in the lives of Mr. and Mrs. Since there is tons of information to find and go through, more time is taken to do the research. During
World War I, the United States was bustling and the amount of production occurring was extremely high due to the war effort. A reckless indifference to the value of human life (also known as
"depraved heart" murder); or 4. Therefore, it is the point that corresponds to the maximum or minimum cost of the entire operation. To maintain accountability, it is relevant for the.
The algebra homework helper website our team has designed is known for its sleek and intuitive interface. Matrix Algebra Assignment Help Matlab Help, Matlab Assignment & Homework Help, Matlab Tutor
Matlab is a tool, which is basically used in the linear. Homework assistance in Linear algebra, Math tutoring online. A free linear algebra textbook and online resource written by. Ans: Linear
algebra is an important branch of mathematics that primarily focuses on studying vector spaces and mappings and lines and planes requiring linear transformations. Basic linear algebra is a branch of
mathematics that focuses on solving systems of linear equations using matrix operations. Most students struggle with linear algebra assignments and need help to solve problems accurately. You have
direct access to a Python expert dealing with your homework assignment or project. It'll help you complete the texts without changing their meaning. What can people do to maintain their culture and
traditions? Keep in mind, though, that research papers showcase new ideas and analysis. There are many fictional characters who live in unusual houses, such as the old woman who lived in a shoe.
Elevate Your Essays with Professional Writing Techniques- I need help with my english homework
Squirrels have treated me differently since I wrote an ode to squirrels: They give me the nod, those little fiends. In addition, we focus on the needs of our customers to make sure they keep coming
back. To write academically, one needs to know about the audience and the purpose of writing. But even when it cannot claim this benefit, this way of thinking keeps us alert to the genuine needs of
the future. Seek our English homework assignment help to stay ahead of the rest Can You Do My English Homework For Me? Subheadings have the main intent of grabbing the readers' attention and making
them pause for a moment.
Homework help for uop
We explain how colleges set word limits and how much they matter to ensure your college essay is essay, we suggest about 250-500 words. It was brought about when German philosophers denounced that
"Jewish spirit is alien to Germandom" ("Antisemitism") which states that a Jew is non-German. It is not simply a statement of your topic. The accumulation of plastic products in huge amounts in the
Earth's environment is called plastic pollution. But the same essay-writing criteria applies as in other types of essays: homework help for uop. Professionals can teach the children the ways of right
handling with bullies through role-plays or other techniques (Wallace, 2012).
Help on balance sheet accountants homework
The minimum requirement for a good letter is someone who taught a class in which you did well. Wade is court case of 1973 in which the Supreme Court ruled that a woman has a constitutional right to
an abortion during the first six months of. A balance sheet, also referred to as Statement of Financial Position, is a sheet that exhibits the accountants and liabilities of a homework enterprise
prepared at a particular. With an extensive experience of helping several thousand students, with their balance sheet homework, across globe the expertise of our accounting experts spans domains like
trial balance, balance sheets and other accounting assignment problems. The sacraments are then formed from the Church, nonetheless are introduced for the people and the Church - help on balance
sheet accountants homework. It has also been seen to create disagreement within the family when the children make wish for all those advertised goods that they see on television that are beyond the
spending capacity of parents. Why would colleges allow unverified information to determine admission decisions and, in fact, move even more so in that direction by implementing test-optional
policies? Government action required: The Fourth Amendment search and seizure protections apply only against actions by the government.
Matlab Toolbox for Submodular Function Optimization
By Andreas Krause (krausea@gmail.com).
Slides, videos and detailed references available at http://www.submodularity.org
Tested in MATLAB 7.0.1 (R14), 7.2.0 (R2006a), 7.4.0 (R2007a, MAC).
This toolbox provides functions for optimizing submodular set functions, i.e., functions that take a subset A of a finite ground set V to the real numbers, satisfying
$$F(A)+F(B)\geq F(A\cup B)+F(A\cap B)$$
It also presents several examples of applying submodular function optimization to important machine learning problems, such as clustering, inference in probabilistic models and experimental
design. There is a demo script: sfo_tutorial.m
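As a quick check of the defining inequality (a hypothetical Python sketch for illustration, separate from the MATLAB toolbox itself), the cut function of a small graph can be verified to be submodular by exhaustive enumeration:

```python
from itertools import combinations

# A small undirected graph on the ground set V = {0, 1, 2, 3}.
EDGES = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]
V = set(range(4))

def cut(A):
    """Cut function: number of edges crossing between A and its complement."""
    return sum(1 for u, v in EDGES if (u in A) != (v in A))

def is_submodular(F, ground):
    """Exhaustively verify F(A) + F(B) >= F(A | B) + F(A & B) for all A, B."""
    subsets = [set(c) for r in range(len(ground) + 1)
               for c in combinations(ground, r)]
    return all(F(A) + F(B) >= F(A | B) + F(A & B)
               for A in subsets for B in subsets)

print(is_submodular(cut, V))  # cut functions are submodular -> True
```

The exhaustive check is only feasible for tiny ground sets (here 2^4 subsets), but it is a convenient sanity test when implementing a new function object.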
Some information on conventions:
All algorithms will use function objects (see sfo_tutorial.m for examples). For example, to measure variance reduction in a Gaussian model, call
F = sfo_fn_varred(sigma,V)
where sigma is the covariance matrix and V is the ground set, e.g., 1:size(sigma,1). The algorithms also take an index set V, and A must be a subset of V.
Implemented algorithms:
1) Minimization:
□ sfo_min_norm_point: Fujishige's minimum-norm-point algorithm for minimizing general submodular functions
□ sfo_queyranne: Queyranne's algorithm for minimizing symmetric submodular functions
□ sfo_sssp: Submodular-supermodular procedure of Narasimhan & Bilmes for minimizing the difference of two submodular functions
□ sfo_s_t_min_cut: For solving min F(A) s.t. s in A, t not in A
□ sfo_minbound: Return an online bound on the minimum solution
□ sfo_greedy_splitting: Greedy splitting algorithm for clustering of Zhao et al
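For ground sets of a handful of elements, the minimizers such routines compute can be cross-checked by brute force (a hypothetical Python sketch for testing purposes, not one of the toolbox's algorithms):

```python
from itertools import combinations

def brute_force_min(F, ground):
    """Return (min value, minimizing subset) of F over all 2^n subsets."""
    ground = list(ground)
    best_val, best_set = float("inf"), set()
    for r in range(len(ground) + 1):
        for combo in combinations(ground, r):
            val = F(set(combo))
            if val < best_val:
                best_val, best_set = val, set(combo)
    return best_val, best_set

# The cut function of the path graph 0-1-2 is minimized, with value 0,
# by the empty set (and also by the full ground set).
edges = [(0, 1), (1, 2)]
cut = lambda A: sum(1 for u, v in edges if (u in A) != (v in A))
print(brute_force_min(cut, {0, 1, 2}))  # -> (0, set())
```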
2) Maximization:
□ sfo_polyhedrongreedy: For solving an LP over the submodular polytope
□ sfo_greedy_lazy: The greedy algorithm for constrained maximization / coverage using lazy evaluations
□ sfo_greedy_welfare: The greedy algorithm for solving allocation problems
□ sfo_cover: Greedy coverage algorithm using lazy evaluations
□ sfo_celf: The CELF algorithm of Leskovec et al. for budgeted submodular maximization
□ sfo_saturate: The SATURATE algorithm of Krause et al. for robust optimization of submodular functions
□ sfo_max_dca_lazy: The Data Correcting algorithm of Goldengorin et al. for maximizing general (not necessarily nondecreasing) submodular functions
□ sfo_maxbound: Return an online bound on the maximum solution
□ sfo_pspiel: pSPIEL algorithm for trading off information and communication cost
□ sfo_pspiel_orienteering: pSPIEL algorithm for submodular orienteering
□ sfo_balance: eSPASS algorithm for simultaneous placement and balanced scheduling
3) Miscellaneous
□ sfo_lovaszext: Computes the Lovasz extension for a submodular function
□ sfo_mi_cluster: Example clustering algorithm using both maximization and minimization
□ sfo_pspiel_get_path: Convert a tree into a path using the MST heuristic algorithm
□ sfo_pspiel_get_cost: Compute the Steiner cost of a tree / path
4) Submodular functions:
□ sfo_fn_cutfun: Cut function
□ sfo_fn_detect: Outbreak detection / facility location
□ sfo_fn_entropy: Entropy of Gaussian random variables
□ sfo_fn_mi: Gaussian mutual information
□ sfo_fn_varred: Variance reduction (truncatable, for use in SATURATE)
□ sfo_fn_example: Two-element submodular function example from tutorial slides
□ sfo_fn_iwata: Iwata's test function for testing minimization code
□ sfo_fn_ising: Energy function for Ising model for image denoising
□ sfo_fn_residual: For defining residual submodular functions
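The lazy-evaluation idea used by sfo_greedy_lazy and sfo_celf can be sketched as follows (a hypothetical Python rendering assuming a monotone submodular F; the toolbox's MATLAB implementation differs in detail). Because submodularity makes marginal gains non-increasing as the selected set grows, a stale gain stored in a max-heap is an upper bound, so an element only needs re-evaluation when it reaches the top:

```python
import heapq

def lazy_greedy(F, ground, k):
    """Greedily pick k elements approximately maximizing a monotone
    submodular set function F, re-evaluating marginal gains lazily."""
    selected = set()
    f_sel = F(set())
    # Max-heap via negated gains: entries are (neg_gain, element, round_stamp).
    heap = [(-(F({e}) - f_sel), e, 0) for e in ground]
    heapq.heapify(heap)
    rnd = 0
    while heap and len(selected) < k:
        neg_gain, e, stamp = heapq.heappop(heap)
        if stamp == rnd:
            # Gain was computed against the current set: greedy pick is valid.
            selected.add(e)
            f_sel = F(selected)
            rnd += 1
        else:
            # Stale gain: recompute against the current set and re-insert.
            gain = F(selected | {e}) - f_sel
            heapq.heappush(heap, (-gain, e, rnd))
    return selected

# Example: a coverage function over sets of items.
cover = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1}}
F = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0
print(sorted(lazy_greedy(F, cover.keys(), 2)))  # -> ['a', 'c']
```

The lazy variant returns the same set as the plain greedy algorithm but typically with far fewer function evaluations, which is the point of the CELF speedup.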
Changes to previous version:
Initial Announcement on mloss.org.
Dynamic Macroeconomics
by Alogoskoufis
ISBN: 9780262043014 | Copyright 2019
An advanced treatment of modern macroeconomics, presented through a sequence of dynamic equilibrium models, with discussion of the implications for monetary and fiscal policy.
This textbook offers an advanced treatment of modern macroeconomics, presented through a sequence of dynamic general equilibrium models based on intertemporal optimization on the part of economic
agents. The book treats macroeconomics as applied and policy-oriented general equilibrium analysis, examining a number of models, each of which is suitable for investigating specific issues but may
be unsuitable for others.
After presenting a brief survey of the evolution of macroeconomics and the key facts about long-run economic growth and aggregate fluctuations, the book introduces the main elements of the
intertemporal approach through a series of two-period competitive general equilibrium models—the simplest possible intertemporal models. This sets the stage for the remainder of the book, which
presents models of economic growth, aggregate fluctuations, and monetary and fiscal policy. The text focuses on a full analysis of a limited number of key intertemporal models, which are stripped
down to essentials so that students can focus on the dynamic properties of the models. Exercises encourage students to try their hands at solving versions of the dynamic models that define modern
macroeconomics. Appendixes review the main mathematical techniques needed to analyze optimizing dynamic macroeconomic models. The book is suitable for advanced undergraduate and graduate students who
have some knowledge of economic theory and mathematics for economists.
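The intertemporal approach the book builds on can be illustrated with the simplest case it starts from (a standard derivation sketched here as an illustration, not an excerpt from the text): a two-period household chooses consumption $c_1, c_2$ to maximize $u(c_1)+\beta u(c_2)$ subject to the lifetime budget constraint

$$c_1+\frac{c_2}{1+r}=y_1+\frac{y_2}{1+r}$$

and the first-order conditions combine into the consumption Euler equation

$$u'(c_1)=\beta(1+r)\,u'(c_2)$$

which equates the marginal utility cost of saving one more unit today to its discounted marginal utility benefit tomorrow; this is the equation whose implications chapter 2 works out.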
Contents (pg. vii)
List of Figures and Table (pg. xxi)
Preface (pg. xxvii)
1. Introduction (pg. 1)
1.1 The Nature and Evolution of Macroeconomics (pg. 2)
1.1.1 Pre-Keynesian Macroeconomics (pg. 2)
1.1.2 Classical and Keynesian Macroeconomics (pg. 4)
1.1.3 Microeconomic Foundations of Macroeconomics (pg. 6)
1.1.4 Deterministic and Stochastic Dynamic General Equilibrium Models (pg. 7)
1.2 Key Facts about Long-Run Economic Growth (pg. 11)
1.2.1 Cross-Country Differences in Per Capita Output and Income (pg. 11)
1.2.2 Evolution of Per Capita Output and Income over Time (pg. 13)
1.2.3 Economic Growth and Convergence since 1820 (pg. 14)
1.3 Key Facts about Aggregate Fluctuations (pg. 18)
1.3.1 Frequency, Severity, and Duration of Recessions (pg. 18)
1.3.2 Unemployment in Booms and Recessions (pg. 20)
1.3.3 Trends and Fluctuations in the Price Level and Inflation (pg. 22)
1.3.4 Monetary Policy and Government Debt (pg. 26)
1.3.5 Monetary Policy and Inflation in the Postwar Period (pg. 31)
1.4 Conclusion (pg. 32)
2. The Intertemporal Approach (pg. 33)
2.1 Models, Variables, and Functions (pg. 34)
2.2 General Equilibrium in a One-Period Competitive Model (pg. 36)
2.2.1 Endowments, Preferences, and the Optimal Behavior of Households (pg. 36)
2.2.2 The Production Function and the Profit-Maximizing Behavior of Firms (pg. 38)
2.2.3 The Cobb-Douglas Production Function (pg. 40)
2.2.4 General Equilibrium in the One-Period Model (pg. 40)
2.3 Savings and Investment in a Two-Period Competitive Model (pg. 42)
2.3.1 The Representative Household in a Two-Period Model (pg. 42)
2.3.2 Implications of the Euler Equation for Consumption (pg. 45)
2.3.3 The Case of a Constant Elasticity of Intertemporal Substitution (pg. 46)
2.3.4 Firms, Technology, and the Optimal Output Path (pg. 48)
2.3.5 General Equilibrium in the Two-Period Model (pg. 49)
2.3.6 Diagrammatic Exposition of the Intertemporal Equilibrium (pg. 53)
2.3.7 Implications for Growth and Business Cycle Theory (pg. 55)
2.4 Consumption and Labor Supply in a One-Period Competitive Model (pg. 56)
2.4.1 The Optimal Choice of Consumption and Labor Supply (pg. 56)
2.4.2 Income and Substitution Effects on Labor Supply (pg. 58)
2.4.3 The Frisch Elasticity of Labor Supply (pg. 60)
2.4.4 The Production Function and the Optimal Decisions of Firms (pg. 61)
2.4.5 General Equilibrium and the Determination of Output and Employment (pg. 61)
2.5 Consumption and Labor Supply in a Two-Period Competitive Model (pg. 62)
2.5.1 Optimal Consumption and Labor Supply in a Two-Period Model (pg. 63)
2.5.2 Intertemporal Substitution in Consumption and Labor Supply (pg. 64)
2.5.3 Optimal Production Decisions of Firms (pg. 65)
2.5.4 General Equilibrium and the Determination of Output and Employment (pg. 66)
2.5.5 Implications for Business Cycle Theory (pg. 68)
2.6 Money, Prices, and Inflation in a Two-Period Competitive Model (pg. 69)
2.6.1 The Representative Household and the Demand for Money (pg. 69)
2.6.2 The Classical Dichotomy and the Neutrality of Money (pg. 71)
2.6.3 The Two-Period Competitive Model and Classical Monetary Theory (pg. 74)
2.7 Fiscal Policy in a Two-Period Competitive Model (pg. 74)
2.7.1 Government Expenditure and Taxes in a One-Period Economy (pg. 74)
2.7.2 Income Taxes and Labor Supply (pg. 76)
2.7.3 Government Expenditure, Taxes, and Debt in a Two-Period Economy (pg. 77)
2.7.4 Ricardian Equivalence between Tax and Debt Finance (pg. 78)
2.7.5 Income Taxation and Aggregate Savings and Investment (pg. 81)
2.7.6 Implications for Fiscal Policy and Government Debt (pg. 81)
2.8 The Treatment of Time and the Intertemporal Approach (pg. 82)
2.9 Conclusion (pg. 84)
3. Savings, Investment, and Economic Growth (pg. 85)
3.1 The Solow Growth Model (pg. 87)
3.1.1 The Neoclassical Production Function (pg. 87)
3.1.2 The Cobb-Douglas Production Function (pg. 89)
3.1.3 Population Growth and Technical Progress (pg. 90)
3.1.4 Savings, Capital Accumulation, and Economic Growth (pg. 90)
3.1.5 The Balanced Growth Path and the Convergence Process (pg. 92)
3.1.6 The Rate of Growth of Capital and Output (pg. 93)
3.1.7 Significance of the Inada Conditions (pg. 95)
3.2 Competitive Markets, the Real Interest Rate, and Real Wages (pg. 96)
3.3 The Savings Rate and the Golden Rule (pg. 97)
3.3.1 The Savings Rate and the Balanced Growth Path (pg. 98)
3.3.2 The Savings Rate, the Golden Rule, and Dynamic Inefficiency (pg. 99)
3.3.3 The Elasticity of Steady State Output with Respect to the Savings Rate (pg. 101)
3.4 Total Factor Productivity and Population Growth (pg. 102)
3.4.1 Dynamic Effects of Total Factor Productivity in the Solow Model (pg. 103)
3.4.2 Dynamic Effects of Population Growth in the Solow Model (pg. 104)
3.5 Speed of Convergence toward the Balanced Growth Path (pg. 104)
3.6 The Process of Economic Growth and the Solow Model (pg. 106)
3.6.1 The Kaldor Stylized Facts of Economic Growth (pg. 107)
3.6.2 Differences in Per Capita Output and Income between Developed and Less Developed Economies (pg. 108)
3.6.3 Conditional Convergence (pg. 110)
3.7 Convergence with a Cobb-Douglas Production Function (pg. 110)
3.8 Dynamic Simulations of a Calibrated Solow Model (pg. 111)
3.8.1 The Solow Model in Discrete Time (pg. 112)
3.8.2 The Calibrated Solow Model (pg. 113)
3.8.3 Dynamic Simulations of the Model (pg. 114)
3.9 Conclusion (pg. 117)
4. The Representative Household Model of Optimal Growth (pg. 119)
4.1 The Optimal Intertemporal Path of Consumption (pg. 121)
4.2 The Ramsey Model of Economic Growth (pg. 124)
4.2.1 The Production Function (pg. 124)
4.2.2 The Utility Function of the Representative Household (pg. 125)
4.2.3 The Accumulation of Capital and the Optimality of the Decentralized Competitive Equilibrium (pg. 126)
4.2.4 Conditions for Utility Maximization by the Representative Household (pg. 128)
4.2.5 The Euler Equation for Consumption (pg. 129)
4.2.6 The Intertemporal Budget Constraint of the Representative Household (pg. 130)
4.2.7 The Transversality Condition with an Infinite Time Horizon (pg. 132)
4.2.8 The Consumption Function of the Representative Household with an Infinite Horizon (pg. 133)
4.3 Dynamic Adjustment and the Balanced Growth Path (pg. 135)
4.3.1 Dynamic Adjustment toward the Balanced Growth Path (pg. 135)
4.3.2 The Balanced Growth Path and the Modified Golden Rule (pg. 138)
4.3.3 Effects of a Permanent Increase in the Pure Rate of Time Preference (pg. 139)
4.3.4 Effects of a Permanent Increase in Total Factor Productivity (pg. 141)
4.3.5 Effects of a Permanent Increase in the Rate of Growth of Population (pg. 142)
4.4 Properties of the Adjustment Path and the Speed of Convergence (pg. 143)
4.5 Dynamic Simulations of a Calibrated Ramsey Model (pg. 146)
4.5.1 The Ramsey Model in Discrete Time (pg. 146)
4.5.2 The Calibrated Ramsey Model (pg. 148)
4.5.3 Dynamic Simulations of the Model (pg. 149)
4.6 Conclusion (pg. 152)
5. Overlapping Generations Models of Growth (pg. 153)
5.1 The Diamond Model (pg. 154)
5.1.1 Definitions (pg. 155)
5.1.2 The Production Function (pg. 155)
5.1.3 The Intertemporal Utility Function of Households (pg. 155)
5.1.4 Markets and the Behavior of Households (pg. 156)
5.1.5 Capital Accumulation and the Dynamic Adjustment of the Economy (pg. 157)
5.1.6 A Simplified Diamond Model with Logarithmic Preferences and Cobb-Douglas Technology (pg. 159)
5.1.7 The Speed of Adjustment in the Simplified Diamond Model (pg. 163)
5.1.8 Welfare Implications of the Diamond Model and the Possibility of Dynamic Inefficiency (pg. 164)
5.1.9 Dynamic Simulations of a Calibrated Diamond Model (pg. 165)
5.2 The Blanchard-Weil Model (pg. 169)
5.2.1 Definitions (pg. 169)
5.2.2 The Production Function (pg. 170)
5.2.3 The Intertemporal Utility Function of Households and Household Consumption (pg. 170)
5.2.4 Aggregation across Generations (pg. 172)
5.2.5 The Model in Terms of Efficiency Units of Labor (pg. 173)
5.2.6 The Balanced Growth Path and the Adjustment Path (pg. 173)
5.3 Dynamic Simulations of a Calibrated Blanchard-Weil Model (pg. 177)
5.3.1 The Blanchard-Weil Model in Discrete Time (pg. 178)
5.3.2 Dynamic Simulations of the Model (pg. 178)
5.4 Conclusion (pg. 182)
6. Fiscal Policy and Economic Growth (pg. 183)
6.1 The Government Budget Constraint (pg. 185)
6.1.1 Government Deficits, Debt, and Solvency (pg. 185)
6.2 Ricardian Equivalence and the Ramsey Model (pg. 187)
6.2.1 Ricardian Equivalence between Government Debt and Taxes (pg. 187)
6.2.2 Government Expenditure, Taxes, and Debt in the Ramsey Model (pg. 188)
6.3 Dynamic Effects of Fiscal Policy in the Blanchard-Weil Model (pg. 191)
6.3.1 The Blanchard-Weil Model with Government Expenditure and Debt (pg. 191)
6.3.2 Government Debt, Taxes, and Redistribution across Generations (pg. 192)
6.3.3 Dynamic Simulations of Fiscal Policy in a Calibrated Blanchard-Weil Model (pg. 194)
6.4 Dynamic Effects of Distortionary Taxation (pg. 201)
6.4.1 Distortionary and Nondistortionary Taxes (pg. 201)
6.4.2 Dynamic Effects of Capital Income and Business Gross Profits Taxation (pg. 203)
6.4.3 Dynamic Simulations of Increases in Capital Income and Business Gross Profits Taxation (pg. 205)
6.5 Conclusion (pg. 206)
7. Money, Inflation, and Economic Growth (pg. 209)
7.1 Private Consumption and Money Demand in a Representative Household Model (pg. 211)
7.1.1 Money in the Utility Function of Households (pg. 211)
7.1.2 Nominal and Real Interest Rates and the Opportunity Cost of Real Money Balances (pg. 212)
7.1.3 First-Order Conditions for an Optimum (pg. 212)
7.1.4 The Money Demand Function (pg. 213)
7.1.5 Growth Rate of the Money Supply and Inflation (pg. 214)
7.1.6 The Euler Equation for Consumption (pg. 215)
7.2 Aggregate Capital Accumulation in a Ramsey Model with Money (pg. 216)
7.2.1 The Production Function, the Real Interest Rate, and the Real Wage (pg. 216)
7.2.2 The Inflation Tax and the Accumulation of Capital (pg. 216)
7.3 Effects of the Growth Rate of the Money Supply in the Ramsey Monetary Model (pg. 218)
7.3.1 The Balanced Growth Path in the Ramsey Model with Money (pg. 219)
7.3.2 The Superneutrality of Money and Inflation (pg. 220)
7.3.3 The Welfare Costs of Inflation in a Ramsey Model (pg. 221)
7.4 Effects of Monetary Growth in an OLG Model (pg. 222)
7.4.1 The Blanchard-Weil Model with Money (pg. 223)
7.4.2 Real Effects of the Growth Rate of the Money Supply (pg. 224)
7.4.3 A Dynamic Simulation of the Effects of a Rise in the Growth Rate of the Money Supply in a Calibrated Blanchard-Weil Model (pg. 227)
7.5 Conclusion (pg. 229)
8. Externalities, Human Capital, and Technical Progress (pg. 231)
8.1 Externalities from Capital Accumulation and Economic Growth (pg. 232)
8.1.1 Definitions (pg. 233)
8.1.2 The Production Function (pg. 233)
8.1.3 Externalities from the Accumulation of Capital (pg. 234)
8.1.4 Determination of the Real Interest Rate and the Real Wage (pg. 237)
8.1.5 The Savings Rate and the Endogenous Growth Rate (pg. 238)
8.1.6 Externalities and Endogenous Growth in the Ramsey Model (pg. 239)
8.1.7 The Suboptimality of the Competitive Equilibrium with Externalities Due to Capital Accumulation (pg. 241)
8.1.8 Externalities and Endogenous Growth in the Blanchard-Weil Model (pg. 242)
8.1.9 Fiscal Policy and Endogenous Growth (pg. 245)
8.1.10 Convergence in Exogenous and Endogenous AK Growth Models (pg. 247)
8.2 Investment in Human Capital and Economic Growth (pg. 248)
8.2.1 The Extended Solow Model and the Share of Spending on Education and Training (pg. 249)
8.2.2 The Balanced Growth Path in the Extended Solow Model (pg. 250)
8.2.3 Endogenous Growth in the Extended Solow Model (pg. 251)
8.2.4 The Jones Model of Human Capital Accumulation (pg. 252)
8.2.5 The Lucas Model of Human Capital Accumulation and Endogenous Growth (pg. 253)
8.2.6 A Detailed Analysis of the Lucas Model (pg. 254)
8.3 Ideas, Innovations, and Technical Progress (pg. 258)
8.3.1 Key Features of Ideas and Innovations (pg. 258)
8.3.2 Key Elements of an Ideas-and-Innovations Growth Model (pg. 259)
8.3.3 Endogenous Determination of the Rate of Technical Progress (pg. 261)
8.3.4 The Balanced Growth Path with Endogenous Technical Progress (pg. 261)
8.4 Unified Growth Theory and the Transition from Stagnation to Growth (pg. 262)
8.5 Institutions and Long-Run Growth (pg. 263)
8.6 The New Stylized Facts of Economic Growth (pg. 265)
8.7 Conclusion (pg. 266)
9. Dynamic Stochastic Models under Rational Expectations (pg. 269)
9.1 A Stochastic Expectational Model of a Competitive Market (pg. 271)
9.1.1 Absence of Uncertainty and Perfect Foresight (pg. 272)
9.1.2 Uncertainty and Adaptive Expectations (pg. 273)
9.1.3 The Rational Expectations Hypothesis (pg. 275)
9.2 Rational Expectations for Linear Autoregressive Processes (pg. 277)
9.3 First-Order Linear Expectational Models (pg. 279)
9.3.1 The Method of Repeated Substitutions (pg. 279)
9.3.2 The Method of Factorization (pg. 281)
9.3.3 The Method of Undetermined Coefficients (pg. 282)
9.3.4 Two Additional Economic Examples (pg. 282)
9.3.5 Alternative Assumptions about the Evolution of Exogenous Variables (pg. 284)
9.3.6 The Expectational Competitive Market Model Revisited (pg. 286)
9.4 Second-Order Linear Expectational Models (pg. 286)
9.4.1 The Method of Factorization (pg. 287)
9.4.2 The Method of Undetermined Coefficients (pg. 288)
9.4.3 An Economic Example of a Second-Order System (pg. 290)
9.5 Multivariate Linear Models with Rational Expectations (pg. 291)
9.5.1 The Blanchard-Kahn Method (pg. 291)
9.5.2 Other Solution Methods (pg. 293)
9.5.3 A Second-Order Example of the Blanchard-Kahn Method (pg. 293)
9.6 Rational Expectations and Learning (pg. 295)
9.7 Conclusion (pg. 295)
10. Consumption and Portfolio Choice under Uncertainty (pg. 297)
10.1 Consumption and Portfolio Choice (pg. 298)
10.1.1 The Random Walk Model of Consumption (pg. 301)
10.1.2 The Consumption Capital Asset Pricing Model (pg. 302)
10.2 Full Analysis of Consumption and Portfolio Choice (pg. 303)
10.2.1 The Case of Logarithmic Preferences (pg. 303)
10.2.2 Quadratic Preferences and Certainty Equivalence (pg. 305)
10.2.3 The Permanent-Income Hypothesis with Quadratic Preferences (pg. 306)
10.2.4 The Consumption CAPM with Quadratic Preferences (pg. 307)
10.2.5 The Efficient Markets Hypothesis (pg. 308)
10.3 Precautionary Savings and Borrowing Constraints (pg. 310)
10.4 Conclusion (pg. 311)
11. Investment and the Cost of Capital (pg. 313)
11.1 Optimal Investment with Convex Adjustment Costs (pg. 315)
11.1.1 The Choice of Optimal Investment (pg. 315)
11.1.2 The Case of Zero Adjustment Costs (pg. 317)
11.1.3 The Investment Function with Convex Adjustment Costs (pg. 317)
11.1.4 The Determinants of q (pg. 317)
11.1.5 Dynamic Adjustment of q and the Capital Stock K (pg. 318)
11.2 Optimal Investment under Uncertainty (pg. 320)
11.2.1 The Value of a Firm under Uncertainty (pg. 321)
11.2.2 The Lucas-Prescott Model of Investment under Uncertainty (pg. 323)
11.2.3 Rational Expectations Equilibrium and Aggregate Investment in the Lucas-Prescott Model (pg. 325)
11.3 Conclusion (pg. 327)
12. Money, Interest, and Prices (pg. 329)
12.1 The Functions of Money (pg. 331)
12.2 The Supply of Money and Central Banks (pg. 332)
12.2.1 Central Banks and Their Functions (pg. 332)
12.2.2 Central Banks and the Money Supply (pg. 333)
12.3 The Demand for Money (pg. 336)
12.4 Nominal Interest Rates and Short-Run Equilibrium in the Money Market (pg. 339)
12.5 The Long-Run Neutrality of Money (pg. 343)
12.5.1 Monetary Growth, Inflation, and Nominal Interest Rates in the Long Run (pg. 345)
12.5.2 The Welfare Cost of Inflation (pg. 346)
12.5.3 The Long-Run Neutrality of Money and Monetary Reforms (pg. 346)
12.6 Money and the Price Level in Dynamic General Equilibrium Models (pg. 347)
12.6.1 The Samuelson OLG Model (pg. 347)
12.6.2 Money in the Utility Function of a Representative Household (pg. 351)
12.6.3 Cash in Advance in a Representative Household Model (pg. 353)
12.6.4 Cash in Advance in an OLG Model (pg. 355)
12.7 Nominal and Real Interest Rates and the Money Supply (pg. 357)
12.7.1 Money in the Utility Function of a Representative Household (pg. 357)
12.7.2 Cash in Advance in a Representative Household Model (pg. 359)
12.7.3 Cash in Advance in an OLG Model (pg. 359)
12.7.4 The Liquidity Effect in Representative Household Models (pg. 360)
12.8 Interest Rate Pegging and Price Level Indeterminacy (pg. 361)
12.8.1 Interest Rate Pegging and Price Level Indeterminacy in Representative Household Models (pg. 361)
12.8.2 The Wicksell Solution to the Problem of Price Level Indeterminacy (pg. 362)
12.8.3 The Fiscal Theory of the Price Level (pg. 363)
12.8.4 The Pigou Effect and Price Level Determinacy in OLG Models (pg. 364)
12.9 Money Growth, Seigniorage, and Inflation (pg. 364)
12.9.1 Relations between Monetary Growth, Seigniorage, and Inflation (pg. 365)
12.9.2 The Seigniorage Laffer Curve (pg. 367)
12.9.3 The Demand for Seigniorage and Equilibrium with High Inflation (pg. 368)
12.9.4 The Transition to Hyperinflation (pg. 368)
12.9.5 How Can High Inflation and Hyperinflation Be Tackled? (pg. 371)
12.10 Conclusion (pg. 372)
13. The Stochastic Growth Model of Aggregate Fluctuations (pg. 375)
13.1 The Stochastic Growth Model (pg. 376)
13.1.1 Extending the Ramsey Model to Account for Aggregate Fluctuations (pg. 377)
13.1.2 The Representative Firm (pg. 377)
13.1.3 The Representative Household (pg. 378)
13.1.4 Exogenous Population Growth, Efficiency of Labor, and Government Expenditure (pg. 378)
13.1.5 Labor Supply of the Representative Household (pg. 380)
13.1.6 Intertemporal Substitution in Labor Supply (pg. 381)
13.1.7 Uncertainty and the Behavior of the Representative Household (pg. 382)
13.2 A Simplified Version of the Stochastic Growth Model (pg. 383)
13.2.1 Fluctuations of Output in the Simplified Stochastic Growth Model (pg. 384)
13.2.2 The Simplified Stochastic Growth Model and the Evidence on Aggregate Fluctuations (pg. 385)
13.3 A Log-Linear Approximation to the General Stochastic Growth Model (pg. 386)
13.3.1 The Steady State (pg. 387)
13.3.2 Log-Linearizing the Model around the Steady State (pg. 388)
13.4 Solving the Log-Linear Stochastic Growth Model (pg. 390)
13.4.1 Aggregate Fluctuations around the Steady State (pg. 391)
13.4.2 A Dynamic Simulation of the Log-Linear Stochastic Growth Model (pg. 391)
13.5 Conclusion (pg. 393)
14. Perfectly Competitive Models with Flexible Prices (pg. 395)
14.1 A Perfectly Competitive Model without Capital (pg. 396)
14.1.1 The Representative Household (pg. 396)
14.1.2 The Representative Firm (pg. 397)
14.1.3 General Equilibrium (pg. 398)
14.2 Monetary Factors in a Perfectly Competitive Model (pg. 400)
14.2.1 An Exogenous Path for the Money Supply (pg. 400)
14.2.2 An Exogenous Path for the Nominal Interest Rate (pg. 401)
14.2.3 An Inflation-Based Nominal Interest Rate Rule (pg. 401)
14.2.4 Optimal Monetary Policy (pg. 402)
14.3 Imperfect Information and the Nonneutrality of Money (pg. 403)
14.3.1 Competitive Equilibrium under Imperfect Information about the Price Level (pg. 403)
14.3.2 The Determination of Output and Employment (pg. 406)
14.3.3 The Real Effects of Monetary Shocks in a Rational Expectations Equilibrium (pg. 407)
14.3.4 Optimal Monetary Policy in the Lucas Model (pg. 410)
14.3.5 The New Classical Model and the Great Depression (pg. 410)
14.3.6 Models of Informational Frictions and Rational Inattention (pg. 411)
14.4 Conclusion (pg. 411)
15. Keynesian Models and the Phillips Curve (pg. 413)
15.1 The Original Keynesian Models (pg. 415)
15.1.1 The Keynesian Cross (pg. 416)
15.1.2 The IS-LM Model (pg. 419)
15.1.3 The AD-AS Model (pg. 422)
15.1.4 The Impact of Aggregate Demand Policies (pg. 424)
15.2 The Samuelson Multiplier Accelerator Model (pg. 427)
15.3 The Theory of Discretionary Monetary and Fiscal Policy (pg. 429)
15.3.1 The Tinbergen-Theil Theory of Discretionary Aggregate Demand Policies (pg. 431)
15.3.2 Monetary and Fiscal Policy with a Full Employment Target (pg. 431)
15.3.3 Monetary and Fiscal Policy with a Full Employment Target and a Price Level Target (pg. 432)
15.4 The Phillips Curve and Inflationary Expectations (pg. 435)
15.4.1 The Phillips Curve and the Trade-off between Inflation and Unemployment (pg. 435)
15.4.2 Instability of the Phillips Curve and Inflationary Expectations (pg. 437)
15.5 The Natural Rate of Unemployment and Aggregate Demand Policies (pg. 439)
15.5.1 The Path of Inflation and Unemployment under Adaptive Expectations (pg. 440)
15.5.2 Rules versus Discretion in Aggregate Demand Policy (pg. 444)
15.5.3 Inflation and Unemployment under Rational Expectations (pg. 446)
15.6 Conclusion (pg. 447)
16. A Model of Imperfect Competition and Staggered Pricing (pg. 449)
16.1 An Imperfectly Competitive Model of Aggregate Fluctuations (pg. 451)
16.1.1 The Representative Household (pg. 452)
16.1.2 The Representative Firm and Optimal Pricing (pg. 454)
16.1.3 Full Price Flexibility and the Natural Rate (pg. 455)
16.1.4 Inefficiency of the Natural Rate (pg. 456)
16.2 Staggered Price Adjustment and Aggregate Fluctuations (pg. 457)
16.2.1 Optimal Pricing with Staggered Price Adjustment (pg. 459)
16.2.2 Equilibrium in the Market for Goods and Services and the New Keynesian IS Curve (pg. 461)
16.2.3 Labor Market Equilibrium and the New Keynesian Phillips Curve (pg. 462)
16.2.4 The Imperfectly Competitive Model with Staggered Pricing and the Taylor Rule (pg. 463)
16.2.5 Real and Monetary Shocks and Aggregate Fluctuations (pg. 464)
16.2.6 The Divine Coincidence and Optimal Monetary Policy in the New Keynesian Model with Staggered Pricing (pg. 468)
16.2.7 A Dynamic Simulation of the Model (pg. 469)
16.3 The Rotemberg Model of Convex Costs of Price Adjustment (pg. 470)
16.4 Conclusion (pg. 473)
17. A Model of Unemployment and Nominal Wage Contracts (pg. 475)
17.1 Alternative Views of the Labor Market and Equilibrium Unemployment (pg. 477)
17.2 Households and Optimal Consumption and Money Demand (pg. 478)
17.3 Firms and Optimal Pricing and Production (pg. 481)
17.4 Wage Setting and Employment in a Model with Insiders and Outsiders (pg. 483)
17.4.1 Wage Determination, Unemployment Persistence, and the Phillips Curve (pg. 485)
17.4.2 The Relation between Output and Unemployment Persistence (pg. 487)
17.4.3 The Phillips Curve in Terms of Deviations of Output from Its Natural Rate (pg. 488)
17.5 The Implications of Staggered Pricing (pg. 489)
17.5.1 Optimal Pricing with Staggered Price Adjustment (pg. 490)
17.5.2 Inflation and Unit Labor Costs under Staggered Pricing (pg. 491)
17.6 An Extended New Keynesian Phillips Curve: Combining Staggered Pricing with Periodic Nominal Wage Contracts (pg. 492)
17.7 Inflation and Aggregate Fluctuations under a Taylor Rule (pg. 494)
17.7.1 New Neoclassical Synthesis IS-LM Functions (pg. 494)
17.7.2 The Natural and Equilibrium Real Interest Rate (pg. 494)
17.7.3 Equilibrium Fluctuations with Exogenous Preference and Productivity Shocks (pg. 495)
17.7.4 Does Staggered Pricing Matter for Inflation Persistence? (pg. 500)
17.7.5 Inflation Stabilization and the Divine Coincidence (pg. 501)
17.8 The Optimal Taylor Rule (pg. 502)
17.8.1 Optimal Inflation Policy (pg. 502)
17.9 A Dynamic Simulation of the Effects of Monetary and Real Shocks (pg. 504)
17.10 Conclusion (pg. 506)
18. Matching Frictions and Equilibrium Unemployment (pg. 509)
18.1 The Matching Function (pg. 510)
18.1.1 The Probability of Filling a Vacancy and Labor Market Tightness (pg. 511)
18.1.2 The Probability of the Unemployed Finding a Job (pg. 511)
18.2 Flows into and out of Employment, Equilibrium Unemployment, and the Beveridge Curve (pg. 512)
18.3 Firms and the Creation of Vacancies (pg. 513)
18.3.1 The Present Value of Net Expected Profits from an Existing Job (pg. 514)
18.3.2 The Present Value of Net Expected Profits from a Vacancy and the Creation of Vacancies (pg. 515)
18.3.3 Free Entry and the Job Creation Condition (pg. 516)
18.4 The Behavior of Unemployed Job Seekers (pg. 517)
18.4.1 The Permanent Income of an Unemployed Job Seeker (pg. 518)
18.4.2 The Permanent Income of an Employed Worker (pg. 518)
18.4.3 Comparing the Permanent Income of the Employed and the Unemployed (pg. 518)
18.5 Wage Bargaining and the Wage Equation (pg. 519)
18.6 Wage Determination and Equilibrium Unemployment (pg. 521)
18.7 Determinants of Equilibrium Unemployment, Real Wages, and Labor Market Tightness (pg. 523)
18.7.1 An Increase in Labor Productivity (pg. 523)
18.7.2 An Increase in Unemployment Benefits (pg. 525)
18.7.3 An Increase in the Real Interest Rate (pg. 527)
18.7.4 An Increase in the Probability of Job Destruction (pg. 527)
18.8 Dynamic Adjustment to the Steady State (pg. 527)
18.8.1 The Dynamic Adjustment of Unemployment and Vacancies (pg. 530)
18.8.2 Numerical Simulations of the Model (pg. 532)
18.9 Matching Models and Nominal Rigidities (pg. 534)
18.10 Conclusion (pg. 535)
19. The Macroeconomic Implications of Financial Frictions (pg. 539)
19.1 The Role of Finance and Financial Markets (pg. 539)
19.1.1 Financial Frictions and Financial Intermediation (pg. 541)
19.1.2 The Risks of Financial Intermediation, Leverage, and the External Finance Premium (pg. 542)
19.1.3 The Links between the Financial Sector and Real Activity in the Presence of Frictions (pg. 543)
19.2 Financial Frictions in a New Keynesian Model with Staggered Pricing (pg. 544)
19.3 Financial Frictions in a Model with Unemployment Persistence and Nominal Wage Contracts (pg. 546)
19.4 Conclusion (pg. 548)
20. The Role of Monetary Policy (pg. 551)
20.1 Rules versus Discretion in Monetary Policy (pg. 552)
20.2 Rules, Discretion, and Credibility in a New Keynesian Model (pg. 554)
20.2.1 The Social Welfare Loss from Inflation and Unemployment (pg. 555)
20.2.2 Monetary Policy under Discretion: The Problem of Credibility (pg. 556)
20.2.3 Monetary Policy under a Fixed Inflation Rule (pg. 559)
20.2.4 Central Bank Constitutions (pg. 559)
20.2.5 Reputation as a Solution to the Problem of Inflationary Bias (pg. 560)
20.3 Optimal Monetary Policy in the Presence of Stochastic Shocks (pg. 562)
20.4 The Mechanics of Monetary Policy (pg. 564)
20.4.1 Financial Markets and Open Market Operations (pg. 565)
20.4.2 The Term Structure of Interest Rates (pg. 566)
20.5 Optimal Monetary Policy and the Taylor Rule (pg. 567)
20.6 Monetary Policy Shocks and the Optimal Policy Rule (pg. 569)
20.7 Monetary Policy, Financial Frictions, and the Zero Lower Bound on Interest Rates (pg. 571)
20.7.1 The Liquidity Trap (pg. 572)
20.7.2 Monetary Policy at the Zero Lower Bound (pg. 573)
20.7.3 The Zero Lower Bound and Unconventional Monetary Policy (pg. 574)
20.8 Conclusion (pg. 576)
21. Fiscal Policy and Government Debt (pg. 579)
21.1 Tax Smoothing and Government Debt Accumulation (pg. 581)
21.1.1 The Barro Tax-Smoothing Model (pg. 581)
21.1.2 Steady State Implications of Tax Smoothing (pg. 583)
21.2 Keynesian Stabilization Policy, Automatic Stabilizers, and Fiscal Implications of the Zero Lower Bound (pg. 585)
21.3 Optimal Dynamic Ramsey Taxation (pg. 586)
21.4 Fiscal Policy and Politics (pg. 589)
21.4.1 Distributional Considerations and Politics (pg. 589)
21.4.2 Electoral Factors and Partisan Differences (pg. 590)
21.5 The Burden of High Government Deficits and Debt (pg. 592)
21.6 A Model of Government Debt Crises (pg. 593)
21.6.1 The Calvo Model (pg. 593)
21.6.2 Multiple Equilibria and Self-Fulfilling Prophecies (pg. 598)
21.7 Conclusion (pg. 600)
22. Bubbles, Multiple Equilibria, and Sunspots (pg. 601)
22.1 Bubbles in Linear Rational Expectations Models (pg. 602)
22.1.1 Bubbles versus Fundamentals (pg. 603)
22.1.2 Deterministic versus Stochastic Bubbles (pg. 604)
22.1.3 Bubbles as Self-Fulfilling Prophecies in Inherently Unstable Models (pg. 605)
22.1.4 Higher-Order Linear Models (pg. 607)
22.2 Bubbles in Models of Stock and Money Markets (pg. 607)
22.2.1 Stock Market Bubbles (pg. 607)
22.2.2 Money Market Bubbles, the Price Level, and Inflation (pg. 609)
22.3 Ruling Out Unstable Bubbles (pg. 611)
22.4 Indeterminacy, Self-Fulfilling Prophecies, and Sunspots (pg. 612)
22.4.1 The Samuelson OLG Model with Money, Revisited (pg. 614)
22.4.2 Other Models of Indeterminacy and Sunspots in Macroeconomics (pg. 619)
22.5 Conclusion (pg. 620)
23. The Interaction of Events and Ideas in Dynamic Macroeconomics (pg. 623)
23.1 The Financial Crisis and Recent Developments in Dynamic Macroeconomics (pg. 624)
23.2 The Interaction of Events and Ideas and the Role of Empirical Macroeconomics (pg. 626)
23.3 Policy Evaluation and DSGE Models (pg. 628)
23.4 Conclusion (pg. 629)
Appendixes (pg. 631)
A. Variables, Functions, and Optimization (pg. 633)
A.1 Models, Variables, and Functions (pg. 633)
A.2 Mathematical Optimization under Constraints (pg. 644)
A.3 Some Useful Functional Forms (pg. 651)
B. Linear Models and Linear Algebra (pg. 659)
B.1 Linear Models (pg. 659)
B.2 Elements of Linear Algebra (pg. 660)
B.3 An Example with Two Endogenous Variables (pg. 663)
C. Ordinary Differential Equations (pg. 667)
C.1 Definitions (pg. 667)
C.2 First-Order Linear Differential Equations (pg. 669)
C.3 Second-Order Linear Differential Equations (pg. 673)
C.4 A Pair of First-Order Linear Differential Equations (pg. 675)
C.5 A System of n First-Order Linear Differential Equations (pg. 678)
D. Difference Equations (pg. 683)
D.1 Lag Operators and Difference Equations (pg. 683)
D.2 First-Order Linear Difference Equations (pg. 686)
D.3 Second-Order Linear Difference Equations (pg. 687)
D.4 A Pair of First-Order Linear Difference Equations (pg. 689)
D.5 A System of n First-Order Linear Difference Equations (pg. 690)
E. Methods of Intertemporal Optimization (pg. 693)
E.1 The Form of Dynamic Optimization Problems (pg. 693)
E.2 The Method of Optimal Control (pg. 694)
E.3 The Optimal Control Method in Continuous Time (pg. 696)
E.4 Dynamic Programming and the Bellman Equation (pg. 697)
E.5 An Example Based on Optimal Savings in Continuous Time (pg. 700)
F. Random Variables and Stochastic Processes (pg. 703)
F.1 Probability (pg. 703)
F.2 Random Variables and Probability Distributions (pg. 704)
F.3 Stochastic Processes (pg. 715)
F.4 Univariate Linear Stochastic Processes in Discrete Time (pg. 716)
F.5 Vector Stochastic Processes and Vector Autoregressions (pg. 720)
References (pg. 723)
Index (pg. 743)
Topology-aware Parallel Data Processing: Models, Algorithms and Systems at Scale (Conference Paper)

Abstract
The analysis of massive datasets requires a large number of processors. Prior research has largely assumed that tracking the actual data distribution and the underlying network structure of a
cluster, which we collectively refer to as the topology, comes with a high cost and has little practical benefit. As a result, theoretical models, algorithms and systems often assume a uniform
topology; however this assumption rarely holds in practice. This necessitates an end-to-end investigation of how one can model, design and deploy topology-aware algorithms for fundamental data
processing tasks at large scale. To achieve this goal, we first develop a theoretical parallel model that can jointly capture the cost of computation and communication. Using this model, we explore
algorithms with theoretical guarantees for three basic tasks: aggregation, join, and sorting. Finally, we consider the practical aspects of implementing topology-aware algorithms at scale, and show
that they have the potential to be orders of magnitude faster than their topology-oblivious counterparts.
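As an illustrative sketch only (not one of the paper's actual algorithms), the benefit of topology-awareness for a basic task like aggregation can be pictured as reducing within each rack before crossing the slower inter-rack links; the function below and its names are hypothetical:

```python
def topology_aware_sum(racks):
    """racks: one list of values per rack of the cluster.
    Reduce locally first, so that only a single partial result
    per rack has to traverse the slower inter-rack links."""
    partials = [sum(rack) for rack in racks]  # cheap intra-rack reductions
    return sum(partials)                      # only a few values cross the core
```

A topology-oblivious plan would instead ship every value across the network; the result is the same, but the communication cost is not.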
Conjugation of the English verb
Irregular verb: prove - proved - proven
French translation: prouver - s'avérer
Present | Present continuous | Preterite | Preterite continuous
I prove I am proving I proved I was proving
you prove you are proving you proved you were proving
he proves he is proving he proved he was proving
we prove we are proving we proved we were proving
you prove you are proving you proved you were proving
they prove they are proving they proved they were proving
Present perfect | Present perfect continuous | Pluperfect | Pluperfect continuous
I have proven I have been proving I had proven I had been proving
you have proven you have been proving you had proven you had been proving
he has proven he has been proving he had proven he had been proving
we have proven we have been proving we had proven we had been proving
you have proven you have been proving you had proven you had been proving
they have proven they have been proving they had proven they had been proving
Future | Future continuous | Future perfect | Future perfect continuous
I will prove I will be proving I will have proven I will have been proving
you will prove you will be proving you will have proven you will have been proving
he will prove he will be proving he will have proven he will have been proving
we will prove we will be proving we will have proven we will have been proving
you will prove you will be proving you will have proven you will have been proving
they will prove they will be proving they will have proven they will have been proving
Conditional present | Conditional present continuous | Conditional past | Conditional past continuous
I would prove I would be proving I would have proven I would have been proving
you would prove you would be proving you would have proven you would have been proving
he would prove he would be proving he would have proven he would have been proving
we would prove we would be proving we would have proven we would have been proving
you would prove you would be proving you would have proven you would have been proving
they would prove they would be proving they would have proven they would have been proving
Infinitive | Imperative
to prove prove
Let's prove
Present participle | Past participle
proving proven
summarizePeaks in sjessa/chromswitch: An R package to detect chromatin state switches from epigenomic data
Given peaks for a set of samples in a query region, construct a sample-by-feature matrix where each row is a vector of summary statistics computed from peaks in the region.
localpeaks LocalPeaks object
mark String specifying the name of the mark for which the LocalPeaks object is given
cols Character vector of column names on which to compute summary statistics
fraction Logical: compute the fraction of the region overlapped by peaks?
n Logical: compute the number of peaks in the region?
A matrix where rows are samples and columns are features
samples <- c("E068", "E071", "E074", "E101", "E102", "E110")
bedfiles <- system.file("extdata", paste0(samples, ".H3K4me3.bed"), package = "chromswitch")
metadata <- data.frame(Sample = samples, H3K4me3 = bedfiles, stringsAsFactors = FALSE)
lpk <- retrievePeaks(H3K4me3, metadata = metadata,
                     region = GRanges(seqnames = "chr19",
                                      ranges = IRanges(start = 54924104, end = 54929104)))
summarizePeaks(lpk, mark = "H3K4me3", cols = c("qValue", "signalValue"))
Algebra and Trigonometry Credit Exam
The following is a list of competencies addressed on the exam.
The student will be able to:
1. Classify real numbers as rational, irrational, integer, and/or non-integer values.
2. Identify real number properties such as commutativity, associativity, distributive
law, identities, and inverses.
3. Graph points and intervals of points on the real number line.
4. Simplify real number expressions using order of operations.
5. Simplify exponential expressions using rules for exponents.
6. Express numbers in scientific notation.
7. Simplify radical expressions using rules for radicals.
8. Identify the terms and degree of a polynomial.
9. Add, subtract, and multiply polynomials.
10. Factor algebraic expressions using such techniques as common factoring,
factoring by grouping, difference of squares, difference of cubes, sum of cubes,
and trial and error techniques for trinomials.
11. Simplify, multiply, divide, add, and subtract rational algebraic expressions.
12. Solve linear equations.
13. Solve quadratic equations using such techniques as factoring, completing the
square, and quadratic formula.
14. Construct and use a linear model to solve an application problem.
15. Construct and use a quadratic model to solve an application problem.
16. Solve linear, quadratic, rational, and absolute value inequalities.
17. Plot points in the Cartesian plane.
18. Find the distance between two points in the plane.
19. Find the midpoint of a line segment.
20. Find the x- and y- intercepts of the graph of an equation.
21. Write the equation of a circle in standard form and identify its center and radius.
22. Determine the symmetry of a graph.
23. Determine the slope of a line passing through two points.
24. Use the point-slope formula to find the equation of a line.
25. Use the slope-intercept form of a line to determine its slope and y-intercept.
26. Determine if lines are parallel or perpendicular using their slope.
27. Determine whether an expression represents a function using various techniques,
including the Vertical Line Test.
28. Determine the domain and range of various functions.
29. Graph a variety of functions, including linear, parabolic, cubic, square root,
piecewise, absolute value, and greatest integer.
30. Construct and use models to relate quantities that vary directly, inversely, and/or jointly.
31. Identify common transformations of functions such as vertical shifting, horizontal
shifting, reflecting, vertical stretching and shrinking, and horizontal stretching and shrinking.
32. Use the concept of transformation to sketch functions.
33. Classify a function as even or odd.
34. Find the sum, difference, product, and quotient of functions and identify their domain.
35. Form the composition of functions and identify their domain.
36. Determine the inverse of a function, if appropriate.
37. Divide polynomials using techniques such as long division and synthetic division.
38. Find the zeros of a polynomial function.
39. Add, subtract, multiply, and divide complex numbers and write the results in
standard form.
40. Find the domain of a rational function, sketch its graph, and determine vertical
and horizontal asymptotes.
41. Determine the domain and range of an exponential function and sketch its graph.
42. Use an exponential model to solve an application problem.
43. Determine the domain and range of a logarithmic function and sketch its graph.
44. Use a logarithmic model to solve an application problem.
45. Use properties of logarithms to expand and evaluate expressions.
46. Solve exponential and logarithmic equations algebraically.
47. Convert between radian and degree mode.
48. Given the measure of an angle in standard position, determine its corresponding
reference and coterminal angles.
49. Solve right triangles using the six trigonometric ratios.
50. Apply the arc length formula of a circle.
51. Apply the area formula for a sector of a circle.
52. Apply the arc length and area formulas (i.e. angular and linear velocity, etc.)
53. Evaluate the six trigonometric functions (circular functions).
54. Determine the period, phase shift, and amplitude (when applicable) of trigonometric functions.
55. Graph trigonometric functions.
56. Determine the domain and range of trigonometric functions.
57. Graph inverse trigonometric functions.
58. Determine the domain and range of inverse trigonometric functions.
59. Evaluate inverse trigonometric functions and compositions of trigonometric functions.
60. Represent real-world situations (applications) using trigonometric functions.
61. Prove trigonometric identities.
62. Determine zeros of trigonometric functions.
An excellent source to review before attempting the exam is Schaum’s Outline of
Precalculus by Fred Safier (1998, McGraw-Hill).
The format of the exam is multiple-choice. Here are some sample questions. These
sample questions are designed to give you an idea of the format of the exam. In no way
is this an exhaustive list of the topics covered on the exam.
1. Factor the expression 6x^2 + 13x + 6
The correct answer is D.
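The answer choices did not survive extraction, but the factorization behind the key can be worked by grouping:

```latex
6x^2 + 13x + 6 = 6x^2 + 4x + 9x + 6 = 2x(3x + 2) + 3(3x + 2) = (2x + 3)(3x + 2)
```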
2. Find the solution set for the nonlinear inequality x^2 + x − 20 > 0
The correct answer is B.
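The answer choices did not survive extraction, but the solution set follows from factoring the quadratic:

```latex
x^2 + x - 20 > 0 \iff (x + 5)(x - 4) > 0 \iff x < -5 \ \text{or}\ x > 4
```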
3. Evaluate the expression
a. 1
b. 2
c. 3
d. -3
The correct answer is A.
4. Determine the domain of the function
The correct answer is D.
5. If sec t = -2 and tan t > 0, then the csc t is
The correct answer is C.
6. The period of
The correct answer is A.
Margin Loans: Leverage Your Investments
What is margin?
In Webtransfer, margin refers to a participant’s own funds used for issuing loans, enabling them to obtain margin credits to enhance their credit capacity and potential profit. These funds provide an
additional credit limit, increasing potential income from lending.
How Margin Loans Work:
• Interest Rates: Margin loan interest rates are floating and determined by supply and demand. The initial rate at the time of launch on August 17, 2024, is 0.15% per day.
• Loan Amounts: Loan amounts are also variable, up to 300% of the value of your credit certificate (three times the amount of your credit certificate or existing loans).
• Loan Terms: The minimum loan term is 10 days, and the maximum is 30 days.
• Minimum and Maximum Loan Amounts: The minimum credit certificate value for a margin loan in USDT is 50 USDT, and the maximum margin loan amount is 3000 USDT.
Important Considerations:
• Late Payments: If you fail to repay your margin loan on time, you will be charged interest for the entire loan period at the rate of your credit certificate. This can result in losses, so it’s
crucial to repay loans promptly. After repayment, you can take out a new loan or pay the interest and extend the margin loan up to three times.
• Profit Maximization: Margin loans will soon be available for funds earned from bonus loans, further increasing your earning potential.
Why Use Margin Loans?
Let’s illustrate the benefits with two scenarios:
Scenario 1: Using Only Your Own Funds:
Imagine you have issued a loan of 100 USDT at 0.25% interest for 30 days.
Your profit after deducting the 25% insurance pool contribution would be 5.625 USDT.
• 100 USDT x 0.25% x 30 days = 7.50 USDT (profit)
• 7.50 USDT – 1.875 USDT (25% insurance pool contribution) = 5.625 USDT (net profit)
This represents a 5.625% net profit on your initial investment of 100 USDT.
Scenario 2: Using Margin:
Now, let’s say you obtain a 300% margin loan and issue a loan of 400 USDT under the same conditions. Your profit after deducting the insurance pool contribution (25%) and margin interest (0.15% per
day) would be:
• 100 USDT x 0.25% x 30 days = 7.50 USDT (profit from your initial 100 USDT)
• 300 USDT x 0.25% x 30 days = 22.50 USDT (profit from the 300 USDT margin loan)
• 7.50 USDT + 22.50 USDT = 30.00 USDT (total profit)
• 30.00 USDT – 7.50 USDT (25% insurance pool) = 22.50 USDT (after insurance pool contribution)
• 22.50 USDT – 13.50 USDT (margin interest) = 9.00 USDT profit on your initial 100 USDT.
Your profit increased by 60%, reaching 9% on your initial 100 USDT investment, compared to 5.625% without using a margin loan.
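The arithmetic in both scenarios can be reproduced with a short script (a sketch; the function and parameter names below are ours, not part of the Webtransfer platform):

```python
def net_profit(own, margin, loan_rate, days, pool_share, margin_rate):
    """Net lender profit: gross interest on all loans issued, minus the
    insurance-pool contribution, minus interest owed on the margin loan."""
    gross = (own + margin) * loan_rate * days        # interest earned on all issued loans
    after_pool = gross * (1 - pool_share)            # 25% goes to the insurance pool
    margin_interest = margin * margin_rate * days    # cost of the borrowed margin
    return after_pool - margin_interest

scenario_1 = net_profit(100, 0, 0.0025, 30, 0.25, 0.0015)    # 5.625 USDT, own funds only
scenario_2 = net_profit(100, 300, 0.0025, 30, 0.25, 0.0015)  # 9.0 USDT, with 300% margin
```

The ratio of the two results, 9.0 / 5.625 = 1.6, is the 60% profit increase cited above.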
Risks and Mitigation:
While margin loans can significantly increase your earnings, it’s important to use them cautiously.
• Insurance Coverage: Remember that only the principal amount of your own loans is covered by the insurance pool, along with 10% annual interest (0.027397% per day). Margin loans are not covered by
the insurance pool. According to our rules, loans issued using margin credits are excluded from insurance coverage in the event of non-repayment. However, in practice, such loans are granted only
with the collateral of borrowers’ credit certificates.
• Risk Management: To minimize risk, consider issuing smaller loans and varying loan terms.
Margin loans are best suited for experienced users who understand the risks involved. Carefully assess your risk tolerance and make informed decisions.
Webtransfer is not responsible for any losses incurred due to the use of margin loans.
Archives for August 2020 | A Drop in the Digital Ocean
08/20/20 06:50 PM
[updated 5/28/2024 to add picture; 7/6/2024 to add links]
This afternoon my sister informed me that our brother Tom (second of four, two years younger than me) passed away. There are very few additional details at this time.
This picture of Tom with Mom was taken on 12/28/2000. Mom passed on April 16, 2019.
Tom and Scientology: here.
Is mind an emergent property of matter or is matter an emergent property of mind?
According to Douglas Hofstadter^1:
What is a self, and how can a self come out of the stuff that is as selfless as a stone or a puddle? What is an "I" and why are such things found (at least so far) only in association with, as
poet Russell Edson once wonderfully phrased it, "teetering bulbs of dread and dream" .... The self, such as it is, arises solely because of a special type of swirly, tangled pattern among the
meaningless symbols. ... there are still quite a few philosophers, scientists, and so forth who believe that patterns of symbols per se ... never have meaning on their own, but that meaning
instead, in some most mysterious manner, springs only from the organic chemistry, or perhaps the quantum mechanics, of processes that take place in carbon-based biological brains. ... I have no
patience with this parochial, bio-chauvinist view...
According to the Bible^2:
In the beginning was the Word, and the Word was with God, and the Word was God. ... And God said, "Let there be light"; and there was light.
I believe that computability theory, in particular, the lambda calculus, can shed some light on this problem.
In 1936, three distinct formal approaches to computability were proposed: Turing’s Turing machines, Kleene’s recursive function theory (based on Hilbert’s work from 1925) and Church’s λ calculus.
Each is well defined in terms of a simple set of primitive operations and a simple set of rules for structuring operations; most important, each has a proof theory.
All the above approaches have been shown formally to be equivalent to each other and also to generalized von Neumann machines – digital computers. This implies that a result from one system will
have equivalent results in equivalent systems and that any system may be used to model any other system. In particular, any results will apply to digital computer languages and any of these
systems may be used to describe computer languages. Contrariwise, computer languages may be used to describe and hence implement any of these systems. Church hypothesized that all descriptions of
computability are equivalent. While Church’s thesis cannot be proved formally, every subsequent description of computability has been proved to be equivalent to existing descriptions.
It should be without controversy that if a computer can do something then a human can also do the same thing, at least in theory. In practice, the computer may have more effective storage and be much
faster in taking steps than a human. I could calculate one million digits of 𝜋, or find the first ten thousand prime numbers, but I have better things to do with my time. It is with controversy that
a human can do things that a computer, in theory, cannot do.^4 In any case, we don't need to establish this latter equivalence to see something important.
The lambda calculus is typically presented in two parts. Lambda expressions and the lambda expression evaluator:
One way to understand this is that lambda expressions represent software and the lambda evaluator represents hardware. This is a common view, as our computers (hardware) run programs (software). But
this distinction between software and hardware, while economical and convenient, is an arbitrary distinction which hides a deep truth.
Looking first at 𝜆 expressions, they are defined by two kinds of objects. The first set of five arbitrary symbols: 𝜆 . ( ) and space represent simple behaviors. It isn't necessary at this level of
detail to fully specify what those behaviors are, but they represent the "swirly, tangled" patterns posited by Hofstadter. The next set of symbols are meaningless. They represent arbitrary objects,
called atoms. Here, they are characters on a screen. They can just as well be actual atoms: hydrogen, oxygen, nitrogen, and so on.
The only requirement for atoms is that they can be "strung together" to make more objects, here called names (naming is hard).
With these components, a lambda expression is defined as:
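The definition itself appears to have been lost in extraction; the standard grammar (as given in Michaelson's book, reproduced here as a sketch) is:

```
<expression>  ::= <name> | <function> | <application>
<function>    ::= λ<name>.<expression>
<application> ::= (<expression> <expression>)
```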
Note that a lambda expression is recursive, that is, a lambda expression can contain a lambda expression which can contain a lambda expression, .... This will become important in a future post when
we consider the impact of infinity on worldviews.
With this simple notation, we can write any computer program. Nobody in their right mind would want to, because this notation is so tedious to use. But by careful arrangement of these symbols we can
get memory, meaning, truth, programs that play chess, prove theorems, distinguish between cats and dogs.
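As a small illustration in Python (whose `lambda` is a direct descendant of this notation), truth values and numbers can be conjured from nothing but function application. The names below are the conventional Church encodings, not something from the post itself:

```python
# Church encodings: "meaning" built purely from careful
# arrangement of otherwise meaningless function symbols.
TRUE  = lambda a: lambda b: a
FALSE = lambda a: lambda b: b
AND   = lambda p: lambda q: p(q)(FALSE)
NOT   = lambda p: p(FALSE)(TRUE)

# Church numerals: the number n is "apply f n times".
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))

def to_bool(p):
    """Decode a Church boolean into a Python bool."""
    return p(True)(False)

def to_int(n):
    """Decode a Church numeral into a Python int."""
    return n(lambda k: k + 1)(0)
```

For instance, `to_int(SUCC(SUCC(ZERO)))` decodes to 2: the numeral is nothing but the behavior of applying a function twice.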
Given this definition of lambda expressions, and the cursory explanation of the lambda expression evaluator (again, see [3] for details), the first key insight is that the lambda expression evaluator
can be written as lambda expressions. Everything is software, description, word. This includes the rules for computation, the rules for physics, and perhaps even the rules for creating the universe.
But the second key insight is that the lambda evaluator can be expressed purely as hardware. Paul Graham shows how to implement a Lisp evaluator (which is based on lambda expressions) in Lisp. And
since this evaluator runs on a computer, and computers are logic gates, then lambda expressions are all hardware. With the right wiring, not only can lambda expressions be evaluated, they can be
generated. We can (and do) argue about how the wiring in the human brain came to be the way that it is, but that doesn't obscure the fact that the program is the wiring, the wiring is the program.
That we can modify our wiring/programming, and therefore our programming/wiring, keeps life interesting.
Therefore, it seems that materialism and idealism remain in a stalemate as to which is more fundamental. It might be that dualism is true, but I think that by considering infinity that dualism can be
ruled out as an option, as I hope to show in a future post.
[1] Gödel, Escher, Bach: an Eternal Golden Braid, Twentieth-anniversary Edition; Douglas R. Hofstadter; pg. P-2 & P-3
[2] The Bible, New Revised Standard Version, John 1:1, Genesis 1:3
[3] An Introduction to Functional Programming Through Lambda Calculus, Greg Michaelson
[4] This would require a behavior we cannot observe; a behavior we can't describe; or a behavior we can't duplicate. If we can't observe it, how do we know it's a behavior? A behavior that we can't
describe would mean that nature is not self-describing. That seems impossible given the flexibility of description, but who knows? There might be behaviors we can't duplicate, but that would mean
that nature behaves inside human brains like it can behave nowhere else. But there just aren't examples of local violation of general covariance, except by special pleading.
Update 9/30/20
In The Emperor's New Mind, Roger Penrose muses:
How can concrete reality become abstract and mathematical? This is perhaps the other side of the coin to the question of how abstract mathematical concepts can achieve an almost concrete reality
in Plato’s world. Perhaps, in some sense, the two worlds are actually the same?
Note the unexamined bias. Why not ask, "how can the abstract and mathematical become concrete?" In any case, they can't be the same, since infinity is different in both.
The Cube: Its Relatives, Geodesics, Billiards, and Generalisations
EasyChair Preprint 306
12 pages • Date: June 26, 2018
Starting with a cube and its symmetry group one can get related polyhedrons via corner and edge trimming and dualizing processes or via adding congruent polyhedrons to its faces. These processes
deliver a subset of Archimedean solids and their duals, but also star-shaped solids. Thereby, polyhedrons with congruent faces are of special interest, as these faces could be used as tiles in
mosaics, either in a Euclidean or at least in a non-Euclidean plane. Obviously this approach can be applied when taking a regular tetrahedron or a regular pentagon-dodecahedron as start figure. But a
hypercube in R^n (an “n-cube”), too, suits as start object and gives rise to interesting polytopes. The cube’s geodesics and (inner) billiards, especially the closed ones, are already well-known (see
references). Hereby, a ray’s incoming angle equals its outcoming angle. There are many practical applications of reflections in a cube’s corner, as e.g. the cat’s eye and retroreflectors or
reflectors guiding ships through bridges. Geodesics on a cube can be interpreted as billiards in the circumscribed rhombi-dodecahedron. This gives a hint, how to treat geodesics on arbitrary
polyhedrons. Generalising reflections to refractions means that one has to apply Snellius’ refraction law saying that the sine-ratio of incoming and outcoming angles is constant. Application of this
law (or a convenient modification of it) to geodesics on a polyhedron will result in trace polygons, which might be called “quasi-geodesics”. The concept “pseudo-geodesic”, coined for curves c on
smooth surfaces Φ, is defined by the property of c that its osculating planes enclose a constant angle with the normals n of Φ. Again, this concept can be modified for polyhedrons, too. We look for
these three types of traces of rays in and on a 3-cube and a 4-cube.
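Snell's law as invoked in the abstract can be stated in a few lines (a generic sketch, not the paper's construction; `n_ratio` stands for the constant sine-ratio):

```python
import math

def refract_angle(theta_in, n_ratio):
    """Snell's law: sin(theta_in) / sin(theta_out) = n_ratio.
    Returns theta_out in radians, or None when the ray cannot
    pass (total internal reflection)."""
    s = math.sin(theta_in) / n_ratio
    if abs(s) > 1:
        return None  # no refracted ray; it reflects instead
    return math.asin(s)
```

With `n_ratio = 1` the ray passes undeflected, which is the degenerate case where a "quasi-geodesic" reduces to an ordinary geodesic trace.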
Keyphrases: Billiard polygon, Deltahedron, Geodesic polygon, Rhombi-dodecahedron, cube, polyhedron
Links: https://easychair.org/publications/preprint/n1cS
Oscillator Basics
Last modified by Microchip on 2023/11/09 08:59
The basic condition of oscillation is positive feedback, shown in the accompanying figure. Here (A) represents the feed-forward amplifier or loop gain; (β) represents the feedback network; and (Σ)
represents the means of generating the feedback signal. With the right circuit configuration and conditions, there will be a sustained periodic oscillation with a well-defined frequency at the
output. For oscillation to start, the voltage gain around the loop needs to be greater than 1. To maintain sustained oscillation, the closed-loop gain must then settle to exactly 1. With its
high-performance parameters, an op-amp is a good candidate for constructing the oscillator circuit. Note that although op-amp oscillators offer decent temperature coefficients, power efficiency,
and a wide range of operating frequencies, quartz, ceramic resonators, and RC relaxation-type clock oscillators remain the common choices for providing microcontroller clock signals. In summary,
there are two conditions that must be met to sustain oscillation:
• The phase shift around the feedback loop must be effectively 0°.
• The closed loop gain (Acl) must equal 1 (unity).
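These two conditions (the Barkhausen criterion, A·β = 1 at zero net phase shift) can be checked numerically for a concrete feedback network. The sketch below uses a Wien RC network purely as an illustration; it is not taken from this page, and the component values are arbitrary:

```python
import math

R, C = 10e3, 16e-9                       # illustrative component values

def wien_beta(f):
    """Feedback fraction beta(jw) of a Wien network: a series R-C branch
    feeding a parallel R-C branch, the classic beta network of a
    Wien-bridge oscillator."""
    w = 2 * math.pi * f
    zs = R + 1 / (1j * w * C)            # series branch impedance
    zp = R / (1 + 1j * w * R * C)        # parallel branch impedance
    return zp / (zs + zp)

f0 = 1 / (2 * math.pi * R * C)           # predicted oscillation frequency (~995 Hz)
b = wien_beta(f0)
# At f0 the network attenuates by exactly 1/3 with zero phase shift, so an
# amplifier gain A = 3 makes the loop gain A*beta = 1 (unity) at 0 degrees.
print(round(abs(b), 4))                  # 0.3333
```

At any other frequency either |β| drops below 1/3 or the phase is nonzero, which is why the loop picks out f0.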
Feedback oscillators can be used to generate sinusoidal waves. Each internal stage of the oscillator affects the frequency response and adds to the overall phase lag between the input and the output.
The figure above shows an op-amp used in a positive feedback configuration as a comparator, as discussed in the comparator page. The transfer function of the oscillator, i.e. Vout / Vin, is shown below.
With the right amount of feedback attenuation and phase shift, a well-defined oscillation frequency can be achieved. The figure below demonstrates the concept of producing oscillation with positive
feedback and phase shift. A sustained oscillation is initiated by any noise picked up in the system, including the power supply transients.
If the product of the loop gain (A) and feedback (β) equals 1, then the closed loop gain is 1, and the phase shift around the loop is 0 degrees. The figure below shows the conditions for oscillation.
Math, Grade 7, Getting Started, Introduction To Ratio Tables
Classroom Norms
Classroom norms help us do our best work and meet our learning goals.
Discuss the following with your classmates.
• What do you need to do your best work?
• In what ways can you work with your classmates to make the class an effective group?
• What classroom norms, or rules, will allow each member of the class the opportunity to grow?
Here are ideas for classroom norms:
• Ask questions and think creatively.
• Talk about your ideas and questions with your classmates.
• Use mathematics vocabulary when you talk or write about math.
• Make connections between what you know and what you are learning.
• If you do not know how to solve a problem, write down what you do know.
• Think about what you are learning from a task, instead of just trying to finish it quickly.
• Think of a mistake as a chance to learn, not as something to hide.
• Check your work and think about what caused you to make a mistake. Learn how to correct your work.
• Work together with your classmates so that everyone learns.
• Be prepared to explain your thinking about your approach and mathematics.
Introduction to quantum optics
INTRODUCTION TO QUANTUM OPTICS From Light Quanta to Quantum Teleportation
The purpose of this book is to provide a physical understanding of what photons are and of their properties and applications. Special emphasis is made in the text on photon pairs produced in
spontaneous parametric down-conversion, which exhibit intrinsically quantum mechanical correlations known as entanglement, and which extend over manifestly macroscopic distances. Such photon pairs
are well suited to the physical realization of Einstein–Podolsky–Rosen-type experiments, and also make possible such exciting techniques as quantum cryptography and teleportation. In addition,
non-classical properties of light, such as photon antibunching and squeezing, as well as quantum phase measurement and optical tomography, are discussed. The author describes relevant experiments and
elucidates the physical ideas behind them. This book will be of interest to undergraduates and graduate students studying optics, and to any physicist with an interest in the mysteries of the photon
and exciting modern work in quantum cryptography and teleportation. H A R R Y P A U L obtained a Ph.D. in Physics at Friedrich Schiller University, Jena, in 1958. Until 1991 he was a scientific
coworker at the Academy of Sciences at Berlin. Afterwards he headed the newly created research group Nonclassical Light at the Max Planck Society. In 1993 he was appointed Professor of Theoretical
Physics at Humboldt University, Berlin. He retired in 1996. Harry Paul has made important theoretical contributions to quantum optics. In particular, he extended the conventional interference theory
based on the concept of any photon interfering only with itself to show also that different, independently produced photons can be made to interfere in special circumstances. He was also the first to
propose a feasible measuring scheme for the quantum phase of a (monochromatic) radiation field. It relies on amplification with the help of a quantum amplifier and led him to introduce a realistic
phase operator. Harry Paul is the author of textbooks on laser theory and non-linear optics, and he is editor of the encyclopedia Lexikon der Optik.
INTRODUCTION TO QUANTUM OPTICS From Light Quanta to Quantum Teleportation HARRY PAUL
Translated from German by IGOR JEX
cambridge university press Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo Cambridge University Press The Edinburgh Building, Cambridge cb2 2ru, UK Published in the United
States of America by Cambridge University Press, New York www.cambridge.org Information on this title: www.cambridge.org/9780521835633 © German edition: B. G. Teubner GmbH, Stuttgart/Leipzig/
Wiesbaden 1999 English translation: Cambridge University Press 2004 This publication is in copyright. Subject to statutory exception and to the provision of relevant collective licensing agreements,
no reproduction of any part may take place without the written permission of Cambridge University Press. First published in print format 2004.
isbn-13 978-0-511-19475-7 eBook (EBL); isbn-10 0-511-19475-7 eBook (EBL)
isbn-13 978-0-521-83563-3 hardback; isbn-10 0-521-83563-1 hardback
Cambridge University Press has no responsibility for the persistence or accuracy of urls for external or third-party internet websites referred to in this publication, and does not guarantee that any
content on such websites is, or will remain, accurate or appropriate.
To Herbert Walther, with heartfelt gratitude
Contents
Preface
1 Introduction
2 Historical milestones
2.1 Light waves à la Huygens
2.2 Newton’s light particles
2.3 Young’s interference experiment
2.4 Einstein’s hypothesis of light quanta
3 Basics of the classical description of light
3.1 The electromagnetic field and its energy
3.2 Intensity and interference
3.3 Emission of radiation
3.4 Spectral decomposition
4 Quantum mechanical understanding of light
4.1 Quantum mechanical uncertainty
4.2 Quantization of electromagnetic energy
4.3 Fluctuations of the electromagnetic field
4.4 Coherent states of the radiation field
5 Light detectors
5.1 Light absorption
5.2 Photoelectric detection of light
5.3 The photoeffect and the quantum nature of light
6 Spontaneous emission
6.1 Particle properties of radiation
6.2 The wave aspect
6.3 Paradoxes relating to the emission process
6.4 Complementarity
6.5 Quantum mechanical description
6.6 Quantum beats
6.7 Parametric fluorescence
6.8 Photons in “pure culture”
6.9 Properties of photons
7 Interference
7.1 Beamsplitting
7.2 Self-interference of photons
7.3 Delayed choice experiments
7.4 Interference of independent photons
7.5 Which way?
7.6 Intensity correlations
7.7 Photon deformation
8 Photon statistics
8.1 Measuring the diameter of stars
8.2 Photon bunching
8.3 Random photon distribution
8.4 Photon antibunching
9 Squeezed light
9.1 Quadrature components of light
9.2 Generation
9.3 Homodyne detection
10 Measuring distribution functions
10.1 The quantum phase of light
10.2 Realistic phase measurement
10.3 State reconstruction from measured data
11 Optical Einstein–Podolsky–Rosen experiments
11.1 Polarization entangled photon pairs
11.2 The Einstein–Podolsky–Rosen paradox
11.3 Hidden variables theories
11.4 Experimental results
11.5 Faster-than-light information transmission?
11.6 The Franson experiment
12 Quantum cryptography
12.1 Fundamentals of cryptography
12.2 Eavesdropping and quantum theory
13 Quantum teleportation
13.1 Transmission of a polarization state
13.2 Transmission of a single-mode wave function
14 Summarizing what we know about the photon
15 Appendix. Mathematical description
15.1 Quantization of a single-mode field
15.2 Definition and properties of coherent states
15.3 The Weisskopf–Wigner solution for spontaneous emission
15.4 Theory of beamsplitting and optical mixing
15.5 Quantum theory of interference
15.6 Theory of balanced homodyne detection
References
Index
All the 50 years of conscious pondering did not bring me nearer to the answer to the question “What are light quanta?”. Nowadays every rascal believes he knows it; however, he is mistaken. Albert Einstein (1951, in a letter to M. Besso)
The rapid technological development initiated by the invention of the laser, on the one hand, and the perfection attained in the fabrication of photodetectors, on the other hand, gave birth to a new
physical discipline known as quantum optics. A variety of exciting experiments suggested by ingenious quantum theorists were performed that showed specific quantum features of light. What we can
learn from those experiments about the miraculous constituents of light, the photons, is a central question in this book. Remarkably, the famous paradox of Einstein, Podolsky and Rosen became a
subject of actual experiments too. Here photon pairs produced in entangled states are the actors. The book gives an account of important achievements in quantum optics. My primary goal was to
contribute to a physical understanding of the observed phenomena that often defy the intuition we acquired from our experience with classical physics. So, unlike conventional textbooks, the book
contains much more explaining text than formulas. (Elements of the mathematical description can be found in the Appendix.) The translation gave me a welcome opportunity to update the book. In
particular, chapters on the Franson experiment and on quantum teleportation have been included. I expect the reader to have some knowledge of classical electrodynamics, especially classical optics,
and to be familiar with the basic concepts of quantum theory.
I am very grateful to my colleague Igor Jex from the Technical University of Prague, who was not discouraged from translating my sometimes rather intricate German text. (Interested readers may like
to consult Mark Twain’s “The Awful German Language” in Your Personal Mark Twain (Berlin, Seven Seas Publishers, 1960).) Harry Paul (Berlin, September 2003)
1 Introduction
And the Lord saw that the light was good. Genesis 1:4
Most probably all people, even though they belong to different cultures, would agree on the extraordinary role that light – the gift of the Sun-god – plays in nature and in their own existence.
Optical impressions mediated by light enable us to form our views of the surrounding world and to adapt to it. The warming power of the sun’s rays is a phenomenon experienced in ancient times and
still appreciated today. We now know that the sun’s radiation is the energy source for the life cycles on Earth. Indeed, it is photosynthesis in plants, a complicated chemical reaction mediated by
chlorophyll, that forms the basis for organic life. In photosynthesis carbon dioxide and water are transformed into carbohydrates and oxygen with the help of light. Our main energy resources, coal,
oil and gas, are basically nothing other than stored solar energy. Finally, we should not forget how strongly seeing things influences our concepts of and the ways in which we pursue science. We can
only speculate whether the current state of science could have been achieved without sight, without our ability to comprehend complicated equations, or to recognize structures at one glance and
illustrate them graphically, and record them in written form. The most amazing properties, some of which are completely alien to our common experiences with solid bodies, can be ascribed to light: it
is weightless; it is able to traverse enormous distances of space with incredible speed (Descartes thought that light spreads out instantaneously); without being visible itself, it creates, in our
minds, via our eyes, a world of colors and forms, thus “reflecting” the outside world. Due to these facts it comes as no surprise that optical effects confronted our knowledge-seeking mind with more
difficult problems than those of moving material objects. Over several hundred years a bitter war was fought between two parties. One group, relying on Newton’s authority, postulated
the existence of elementary constituents of light. The other, inspired by the ideas of Huygens, fought for light as a wave phenomenon. It seemed that the question was ultimately settled in favor of
the wave alternative by Maxwell’s theory, which conceived light as a special form of the electromagnetic phenomena. All optical phenomena could be related without great difficulty and to a high
degree of accuracy to special solutions of the basic equations of classical electrodynamics, the Maxwell equations. However, not more than 40 years passed and light phenomena revealed another
surprise. The first originated in studies of black-body radiation (radiation emitted from a cavity with walls held at a constant temperature). The measured spectral properties of this radiation could
not be theoretically understood. The discrepancy led Max Planck to a theory which brought about a painful break with classical physics. Planck solved the problem by introducing as an ad hoc
hypothesis the quantization of energy of oscillators interacting with the radiation field. On the other hand, special features of the photoelectric effect (or photoeffect) led Einstein to the insight
that they are most easily explained by the “light quantum hypothesis”. Based on an ingenious thermodynamic argument Einstein created the concept of a light field formed from energy quanta hν
localized in space (h is Planck’s constant and ν is the light frequency). The newly created model was fully confirmed in all its quantitative predictions by studies of the photoeffect that followed,
but there was also no doubt that many optical phenomena like interference and diffraction can be explained only as wave phenomena. The old question, Is light formed from particles or waves?, was
revived on a new, higher level. Even though painful for many physicists, the question could not be resolved one way or the other. Scientists had to accept the idea that light quanta, or photons as
they were later called, are objects more complicated than a particle or a wave. The photon resembles a Janus head: depending on the experimental conditions it behaves either like a particle or as a
wave. We will face this particle–wave dualism several times in the following chapters when we analyze different experiments in our quest to elucidate the essence of the photon. Before this, let us
take a short stroll through the history of optics.
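Einstein's light quanta carry energy E = hν = hc/λ, as introduced above. As a quick numerical sketch (the constants are the standard exact SI values; the helper function is ours, not the book's):

```python
H = 6.62607015e-34       # Planck constant h, J*s (exact SI value)
C_LIGHT = 2.99792458e8   # speed of light c, m/s (exact SI value)

def photon_energy(wavelength_m):
    """Energy E = h*nu = h*c/lambda of a single light quantum, in joules."""
    return H * C_LIGHT / wavelength_m

e_green = photon_energy(550e-9)   # green light, lambda = 550 nm
print(f"{e_green:.3e} J")         # 3.612e-19 J, i.e. roughly 2.25 eV
```

The tiny value explains why the granularity of light went unnoticed for so long: an ordinary lamp emits on the order of 10^19 such quanta per second.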
2 Historical milestones
2.1 Light waves à la Huygens While the geometers derive their theorems from secure and unchallengeable principles, here the principles prove true through the deductions one draws from them.
Christian Huygens (Traité de la Lumière)
Christian Huygens (1629–1695) is rightfully considered to be the founder of the wave theory of light. The fundamental principle enabling us to understand the propagation of light bears his name. It
has found its way into textbooks together with the descriptions of reflection and refraction which are based on it. However, when we make the effort and read Huygens’ Treatise of Light (Huygens,
1690) we find to our surprise that his wave concept differs considerably from ours. When we speak of a wave we mean a motion periodic in space and time: at each position the displacement (think about
a water wave, for instance) realizes a harmonic oscillation with a certain frequency ν, and an instantaneous picture of the whole wave shows a continuous sequence of hills and valleys. However, this
periodicity property which seems to us to be a characteristic of a wave is completely absent in Huygens’ wave concept. His waves do not have either a frequency or a wavelength! Huygens’ concept of
wave generation is that of a (point-like) source which is, at the same time, the wave center inducing, through “collisions” that “do not succeed one another at regular intervals,” a “tremor” of the
ether particles. The given reason for wave propagation is that ether particles thus excited “cannot but transfer this tremor to the particles in their surrounding” (Roditschew and Frankfurt, 1977, p.
31). Therefore, when Huygens speaks of a wave, he means an excitation of the ether caused by a single perturbation in the wave centrum, i.e. a single wavefront spreading with the velocity of light.
The plots drawn by Huygens showing wavefronts in an equidistant sequence have to be 3
Fig. 2.1. Propagation of a spherical wave according to the Huygens principle.
understood such that it is the same wavefront at different times, and the regularity of the plot is caused exclusively by having chosen identical time differences. In fact, what was correctly
described in this way is white light – sunlight for instance. The time dependence of the excitation – or more precisely speaking the electric field strength component with respect to an arbitrarily
chosen direction – is not predictable but completely random (stochastic). On the other hand, it is also clear that such a theory is not able to explain typical wave phenomena such as interference or
diffraction where the wavelength plays an important role. It required Newton and his ingenious insight that natural light is composed of light with different colors to come nearer to an understanding
of these effects. This should not hinder us, however, from honoring Huygens’ great “model idea” known as the Huygens principle according to which each point either in the ether or in a transparent
medium reached by a wave, more precisely a wavefront, becomes itself the origin of a new elementary wave, as illustrated in Fig. 2.1 for the example of a spherical wave. The wavefront at a later time
is obtained as the envelope of all elementary waves emitted at the same, earlier, moment of time. However, Huygens could not answer the question of why a backwards running wave is generated only at
the boundary of two different media and not also in a homogeneous medium (including the ether). In fact, a satisfactory answer could not be given until Augustin Fresnel complemented the principle
with the principle of interference – we use today the term Huygens–Fresnel principle – the strength of which is demonstrated when we treat theoretically the problems of diffraction. By the way, the
answer to the above question is simple: the backwards running waves “interfere away.” But let us return to Huygens! Using the assumption that light propagates in two different media with different
velocities, we can easily explain reflection and
Fig. 2.2. Birefringence after Huygens. o = ordinary beam; e = extraordinary beam; the arrows indicate the beam direction.
refraction of light using the Huygens principle. The explanation of the strange effects of birefringence (the splitting of the incident beam into an ordinary and an extraordinary beam) can be viewed
as an extraordinary success of the principle. It was based on the ingenious guess that the surfaces of propagation for the extraordinary beam are not spherical surfaces but the surfaces of a
rotational ellipsoid (see Fig. 2.2), an insight that is fully verified by modern crystallography. However, Huygens had great difficulty understanding the experiments he performed with two crystals of
calcite. We will discuss this problem in more detail in the next section.
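The envelope construction of the Huygens principle can be mimicked numerically: sample secondary sources on a circular wavefront, let each emit an elementary wavelet of radius c·Δt, and take the outer envelope of all wavelets. This toy sketch (ours, not the book's) confirms that the front simply advances by c·Δt:

```python
import math

def huygens_front(r0, c_dt, n_src=180, n_pts=180):
    """One step of the envelope construction: every point of a circular
    wavefront of radius r0 acts as a secondary source emitting an elementary
    wavelet of radius c*dt; the new front is the outer envelope of all
    wavelets, estimated here as the farthest sampled wavelet point."""
    r_new = 0.0
    for i in range(n_src):
        a = 2 * math.pi * i / n_src
        sx, sy = r0 * math.cos(a), r0 * math.sin(a)   # secondary source
        for j in range(n_pts):
            b = 2 * math.pi * j / n_pts
            px = sx + c_dt * math.cos(b)              # point on its wavelet
            py = sy + c_dt * math.sin(b)
            r_new = max(r_new, math.hypot(px, py))
    return r_new

print(round(huygens_front(1.0, 0.25), 6))  # 1.25: the front advanced by c*dt
```

The same sampling also shows Huygens's problem: wavelet points reaching *inward* (toward radius r0 − c·Δt) exist too, and nothing in the bare construction cancels the backwards-running front; that cancellation requires Fresnel's interference principle.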
2.2 Newton’s light particles As in mathematics, so in natural philosophy, the investigation of difficult things by the method of analysis, ought ever to precede the method of composition. This
analysis consists in making experiments and observations, and in drawing general conclusions from them by induction, and admitting of no objections against the conclusions, but such as are taken from
experiments, or other certain truths. Isaac Newton (Opticks, 3rd Book)
Isaac Newton (1643–1727) was the founder of the particle theory of light. Even though the light particles postulated by Newton do not have anything in common with what we now call photons, it is
still exciting to trace back the considerations which led such a sharp mind to the conclusion that light of certain color is composed of identical, elementary particles. As an abiding supporter of
the inductive method as the method of natural sciences, Newton was guided by a simple experience: the straight line propagation of light “rays,” recognizable on the sharp
contours of shadows of (nontransparent) objects placed into the beam path. This effect seemed to Newton to be easily explained by assuming that the light source emits tiny “bullets” propagating along
straight lines until they interact with material objects. He believed a wave process to be incompatible with straight line propagation of the excitation. Water waves showed a completely different
type of behavior: they obviously run around an obstacle! Since the breakthroughs of Young and Fresnel it has been known that Newton’s conclusion was premature. What happens when a wavefront hits an
obstacle depends crucially on the ratio between the size of the obstacle and the wavelength. When the ratio is very large, the wave character is not noticeable; in the limit of very small wavelength
the propagation of light can be described by straight lines, i.e. light rays. On the other hand, wave properties become predominant when the dimensions of the obstacle are of the order of the
wavelength, as in the above example of water waves. Newton himself observed with impressive precision experimental phenomena where the outer edge of a body (for instance the cutting edge of a razor)
deflects the light “rays” in its proximity a little from their original straight direction so that no ideally sharp shadows are observable. He did not take these phenomena, now called the diffraction
of light, to be hints of a wave-like character of light; instead, he considered the bending as the result of a force applied onto the particles (in his opinion caused by the density of the ether
increasing with increasing distance from the object), an idea which was completely in accord with the well established concepts of mechanics. Newton’s belief that light had a particle nature should
be judged from the perspective of the seventeenth century atomism, which was, at the time, a deeply rooted concept. “True” physics – in contrast to scholasticism which categorized light and color
phenomena into the class of “forms and qualities” – was imaginable only as a mechanical motion of particles under the influence of external forces. The most important argument expounded by Newton
against the wave theory of light advanced by Christian Huygens was, however, a very odd observation made and reported by his great opponent (who even honestly admitted to have “found no satisfactory
explanation for it.”) What was it? It is well known that a light beam is split by a calcite crystal into an ordinary beam and an extraordinary beam and – provided it is incident orthogonally to the
rhombohedral plane – the latter beam is shifted to the side. Then the two beams lie in one plane, the so-called principal intersection of the incident beam. Huygens arranged vertically two calcite
crystals with different orientations and let a light beam impinge from above. He made the following observation: usually both the ordinary and the extraordinary beam leaving the first crystal were
split again in the second crystal into two beams, an ordinary and an extraordinary
one. Only when the two crystals were oriented either so that the intersections were mutually parallel or mutually orthogonal did just two beams emerge from the second crystal. Whereas in the first
case the ordinary beam remained ordinary in the second crystal (the same naturally applied also to the extraordinary beam), in the second case, in contrast, the ordinary beam of the first crystal was
converted into the extraordinary beam of the second crystal, and correspondingly the extraordinary beam of the first crystal was converted into the ordinary. These last two observations surprised
Huygens. He wrote (Roditschew and Frankfurt, 1977, p. 43): “It is now amazing why the beams coming from the air and hitting the lower crystal do not split just as the first beam.” In the framework of
the wave theory of light – note that we are dealing with a scalar theory similar to the theory of sound, where the oscillations are characterized by alternating expansions and compressions of the
medium; at that time no one thought of a possible transverse oscillation! – we face a real dilemma: a wave, using modern terminology, is rotationally symmetric with respect to its propagation
direction, or, as Newton formulated it, “Pressions or motions, propagated from a shining body through an uniform medium, must be on all sides alike” (Newton, 1730; Roditschew and Frankfurt, 1977, p.
81). There does not seem to be a reason why the ordinary beam leaving the first crystal, for example, should in any way “take notice” of the orientation of the second crystal. Newton saw a way of
explaining the effect using his particle model of light. The rotation symmetry can be broken by assuming the particles not to be rotationally symmetric but rather to have a kind of “orientation
mark.” The simplest possible picture is the following: the light particles are not balls but cubes with physically distinguishable sides, and the experiment suggests that opposite sides should be
considered equivalent. Newton himself is not explicit about the form of the particles, whether they are cubic or block-like, and is satisfied by ascribing to the light particle four sides, the
opposites of which are physically equivalent. We thus deal with two pairs of sides, and Newton called one of these pairs the “sides of unusual refraction.” The orientation of the side pairs with
respect to the principal intersection of the calcite crystal determined in Newton’s opinion, the future of the light particle when it enters the crystal in the following sense: depending on whether
one of the sides of the unusual refraction or one of the other sides is turned towards the “coast of unusual refraction” (this means the orientation of the side is orthogonal to the principal
intersection plane), the particle undergoes an extraordinary or an ordinary refraction. Newton emphasizes that this property of a light particle is present from the beginning and is not changed by
the refraction in the first crystal. The particles remain the same also, and do not alter their orientation in space. In detail, the observations of Huygens can now be explained as follows (Fig. 2.3
(a)): the original beam is a mixture of particles oriented one or the other
Fig. 2.3. Passage of a beam through two calcite crystals rotated by 90°. (a) Newton’s interpretation; (b) modern description. (The arrows indicate the direction of the electric field strength.)
way with respect to the principal cut plane of the first crystal. The first crystal induces a separation of the particles, depending on their orientation, into an ordinary beam and an extraordinary
beam. When the crystals are oriented with their principal intersection planes in parallel, the orientations of the particles with respect to the two crystals are identical. The ordinary beam of the
first crystal is also the ordinary beam of the second, and the same applies to the extraordinary beam. However, when the principal intersections of the crystals are mutually orthogonal, the
orientation of the particles leaving the first crystal with respect to the second changes so that the ordinary beam becomes the extraordinary beam, and vice versa. With this penetrating
interpretation of Huygens’s experiment, Newton in fact succeeded in phenomenologically describing the polarization properties of light. Even the name “polarization” was coined by Newton – a fact that is
almost forgotten now. (He saw an analogy to the two poles of a magnet.) Today it is well known that the direction to which the “sides of unusual refraction” postulated by Newton points is physically
nothing else than the direction of the electric field strength (see Fig. 2.3(b)). Even though Newton’s arguments in favor of a particle nature of light no longer convince us, and the modern concept
of photons is supported by completely different experimental facts which Newton could not even divine with his atomic light concept, this ingenious researcher raised an issue which is topical even
now. Newton analyzed the simple process of simultaneous reflection and refraction which is observable when a light beam is incident on the surface of a transparent medium. The particle picture
describes the process in such a way that a certain percentage of the incident particles is reflected while the rest enters the medium
as the refracted beam. In the spirit of deterministic mechanics founded by him, Newton asks what causes a randomly chosen particle to do the one or the other. In fact, the problem is much more acute
for us than for Newton because we are now able to perform experiments with individual particles, i.e. photons. While quantum theory sees blind chance at work, Newton postulated the cause of the
different behavior to be “fits of easy reflection” and “fits of easy transmission” into which the particles are probably already placed during the process of their emission. These “fits” show a
remarkable similarity to the “hidden variables” of the twentieth century which were advocated (as it turned out, unsuccessfully) to overcome the indeterminism of the quantum mechanical description.
We would not be justified, however, in considering Newton, one of the founders of classical optics, to be a blind advocate of the particle theory of light. On the contrary, he was well aware that
various observations are understandable only with the aid of a kind of wave concept. He formulated his thoughts in the form of queries with which he supplemented later editions of his Opticks
(Newton, 1730; Roditschew and Frankfurt, 1977, p. 45), and all of them were characteristically formulated in the grammatical form of negations. It seemed to him that along with the light particles,
waves propagating in the “ether” also take part in the game. Is it not so, he asks, that light particles emitted by a source must excite oscillations of the ether when they hit a refracting or
reflecting surface, similar to stones being thrown into water? This idea helped him to understand the colors of thin layers which he studied very carefully, mainly in the form of rings formed on soap
bubbles (which were named after him). Altogether we find so many hints for a wave nature of light in Newton’s Opticks that Thomas Young could cite Newton as “king’s evidence” for the wave theory in
his lecture “On the theory of light and colours” (Young, 1802; Roditschew and Frankfurt, 1977, p. 153). Even though Young definitely missed the point, it would be just to see Newton as one of the
forerunners advocating the dualistic concept that light has a particle as well as a wave aspect (even though he gave emphasis to the first one). From this point of view, Newton seems to us much more
modern and nearer in mentality to us than to most of the representatives of nineteenth century physics who were absolutely convinced of the validity of the wave picture.
2.3 Young’s interference experiment The theory of light and colours, though it did not occupy a large portion of my time, I conceived to be of more importance than all that I have ever done, or ever
shall do besides. Thomas Young
Historical milestones

Fig. 2.4. Young's interference experiment. [Diagram: a light source illuminates an interference screen with two holes; the resulting intensity distribution appears on an observation screen.]
Interference phenomena are viewed as the most convincing “proof” of the wave nature of light. The pioneering work in this field was carried out by Thomas Young (1773–1829) and, independently, by
Augustin Fresnel (1788–1827). It was Young, despite his work going practically unnoticed by his contemporaries, who performed the first interference experiment which found its way into all the
textbooks. The principle of the experiment is the spatial superposition of two (we would say coherent) light beams. This is achieved by using an almost point-like monochromatic source and allowing
its light to fall onto an opaque screen with two small holes or slits (Fig. 2.4). The two holes themselves become secondary sources of radiation, but because they are excited by the same source they
do not radiate independently. We can now position an observation screen at a convenient distance from and parallel to the first screen and we will observe, near the point where the normal
(constructed precisely at the center of the straight line connecting the two holes) intersects the observation plane, a system of mutually parallel bright and dark stripes (orthogonal to the
aforementioned connecting line), the so-called interference fringes. The distance (which is always the same) between neighboring stripes depends primarily on the distance between the holes – it is
larger when the holes are closer together – and secondly on the color of the light; when we work with sunlight we obtain colored stripes which become superposed at a certain distance from the center
of the interference pattern, and the eye gets the impression of a surface uniformly illuminated by white light. The surprising feature of this observation (let us consider again a monochromatic
primary source) is the existence of positions that appear dark even though they are simultaneously illuminated by both holes. When one of the holes is blocked, the interference fringes vanish and the
observation screen looks uniformly
2.3 Young’s interference experiment
bright. So, adding light from another hole actually decreases the intensity of the light at some points (so creating the interference fringes). To stress this point, under certain conditions an
equation of the form “light + light = darkness” must hold. However, such a statement is naturally completely incompatible with a particle picture of light as put forward by Newton: when particles are
added to those already present, the brightness can only increase. Otherwise the particles would have to “annihilate” each other, which sounds rather odd and contradicts our experience that crossed
beams of light mutually penetrate without influencing each other. Thus, the particle picture seems to have been reduced to absurdity. However, the interference effects are easily explained by the
wave picture: when two wave trains (of equal intensity) overlap in certain regions of space, it is certainly possible that the instantaneous “displacement” of one of the waves, compared with the
other, is equal in amplitude but opposite in direction (i.e. they are out of phase). In such a case, as is easily demonstrated by water waves, the two displacements balance one another and the medium
remains at rest. On the other hand, at positions where the waves oscillate “in phase” (their displacements have the same amplitudes and direction) the waves are maximally amplified. Obviously there
is a continuous transition between these two limiting cases. Based on these concepts, Young was able to give a quantitatively correct description of his interference experiment (Young, 1807): The
middle . . . is always light, and the bright stripes at each side are at such distances, that the light coming to them from one of the apertures must have passed through a longer space than that
which comes from the other, by an interval which is equal to the breadth1 of one, two, three or more, of the supposed undulations, while the intervening dark spaces correspond to a difference of half
a supposed undulation, of one and a half, of two and a half, or more.
With this interpretation of his observations Young determined the wavelength to be approximately 1/36 000 inch for red light and approximately 1/60 000 inch for violet light, values that agree quite
well with known data. It should be emphasized that the explanation of the interference using the wave concept is independent of the concrete physical mechanism of wave phenomena. In fact, in Young’s
and Fresnel’s days, the physical nature of the oscillatory process taking place was not clear; using the words of the Marquis of Salisbury, scientists were “in search for the nominative of the verb
to undulate” (to move like a wave). The quest was successfully completed only by the discovery of the electromagnetic nature of light by James Clerk Maxwell (1831–1879).

1 We would say wavelength.
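Young's inch-based wavelength figures are easy to check with a simple unit conversion. A small sketch (the visible-range bounds used in the assertions are rough modern figures of my own, not values from the text):

```python
# Young's wavelengths, reported as fractions of an inch, converted to
# nanometres. The visible-range bounds in the assertions are rough
# modern figures, not values from the text.
INCH_IN_NM = 25.4e6   # 1 inch = 25.4 mm = 2.54e7 nm

red_nm = INCH_IN_NM / 36_000      # Young: 1/36 000 inch for red light
violet_nm = INCH_IN_NM / 60_000   # Young: 1/60 000 inch for violet light

print(f"red:    {red_nm:.0f} nm")    # ~706 nm
print(f"violet: {violet_nm:.0f} nm") # ~423 nm

assert 620 <= red_nm <= 750       # today's nominal red range
assert 380 <= violet_nm <= 450    # today's nominal violet range
```

Both values indeed fall inside the modern visible ranges, which is the sense in which they "agree quite well with known data."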
2.4 Einstein's hypothesis of light quanta

A fundamental change of our concepts of the essence and the constitution of light is indispensable.
Albert Einstein (1909)
A pioneering experimental investigation of the photoelectric effect led P. Lenard to surprising results (Lenard, 1902), results that were incompatible with the picture of an electromagnetic wave
interacting with an electron bound in some way in a metal. Three years later, Einstein (1905) proposed an interpretation of these experimental facts which openly contradicted the classical light
concept. This took the form of a hypothesis on the microscopic structure of light – cautiously declared as “a heuristic point of view concerning the generation and transformation of light”, the far
reaching consequences of which became obvious only much later in connection with the birth of quantum mechanics. A thermodynamic approach to the problem of black body radiation led Einstein to the
conjecture that light, at least as far as its energy content is concerned, cannot be viewed as if it fills the space continuously; instead it should have a “grainy” structure. Let us discuss Lenard’s
observations made when he illuminated a metallic surface (in vacuum) with light and analyzed the electrons released into free space. First of all, he found that the velocity distribution of the
electrons is independent of the intensity of the incident light.2 However, the effect was frequency dependent: when a mica or glass plate (absorbing the ultraviolet light component) was inserted in
front of the metallic surface, no electrons could be detected. Also, the number of electrons emitted per unit time was proportional to the light intensity, and this held even for very small
intensities. In particular, no threshold effect (the onset of electron emission for a certain minimum intensity, an effect almost certainly expected by Lenard) could be observed. The observation that
was least understood was the independence of the kinetic energy of the electrons from the intensity of the light. The expectation was that an electron in the metal should perform, under the influence
of an electromagnetic wave, such as light, a kind of resonance oscillation. (Lenard had already discovered that only the ultraviolet part of the spectrum of the used light was of relevance.) During
each oscillation the electron absorbs a small fraction of the energy of the wave till the accumulated energy exceeds the potential energy.

2 Lenard performed the measurement in such a way that electrons were collected on a metallic disc which was parallel to the illuminated metallic surface. A voltage applied between the two surfaces decelerated the electrons which were emitted in all possible directions from the surface. By varying the applied voltage, it became possible to measure the velocity distribution of the electrons just after their exit from the metallic surface. (More precisely speaking, it is the velocity orthogonal to the surface of the metal that is measured.)

Then the electron
will leave the “potential well” with a certain kinetic energy, and it follows that the energy surplus must have been delivered during the last half or the last full resonance oscillation. Due to
this, one should expect that the kinetic energy will be proportional to the intensity of light – in complete contradiction with experience. In addition, for the light intensity used the observed exit
velocity was much larger than that predicted by the described model. Under these circumstances, Lenard was forced to look for alternative physical mechanisms of electron emission. He wrote (Lenard,
1902): Therefore it requires the assumption of more complicated conditions for the motion of the inner parts of the body, but in addition also the, possibly provisional, idea seems to be nearer that
the initial velocity of the emitted quanta3 stems not at all from the light energy but from violent motions already being present within the atoms before the illumination so that the resonance
motions play only the role of initiation.
Let us now turn to Einstein's reflections on the subject! This sharp-witted thinker started – similarly to Planck – from thermodynamic considerations. After clarifying that the theoretical
foundations of thermodynamics and electrodynamics as applied to the problem of black body radiation fail at small wavelengths and low temperatures of the radiation (and therefore also for low energy
densities), he concentrated on the case where Wien’s law is still applicable. First he derived an expression for the entropy of a monochromatic radiation field; he was particularly interested in its
dependence on the volume of the radiation field. Using this and the fundamental Boltzmann relation S = k log W (with k being Boltzmann’s constant), where S is the entropy and W is, up to a constant
factor, the probability, Einstein derived the probability that the radiation field of frequency ν enclosed in a “box” with randomly reflecting walls – due to fluctuations present even in the
equilibrium state – completely concentrates into a partial volume V0 of the original box volume V . The expression found in this way was formally identical to that known from the kinetic theory of
gases, giving the probability that N molecules limited in their freedom of motion to a volume V will be randomly found in a smaller volume V0 . The ratio between the total energy of the field and the
value hν (h being Planck’s action constant) played the role of the particle number of the electromagnetic field. Einstein consequently came to the following important conclusion: “monochromatic
radiation of low density (within the validity range of Wien's radiation formula) behaves with respect to the theory of heat as if it consisted of independent energy quanta of magnitude hν.4”

3 Lenard meant electrons.
4 In fact, Einstein, who started from Wien's law, not Planck's law, did not use Planck's constant but wrote instead Rβ/N, where R is the universal gas constant, N is Avogadro's number and β is the constant in the exponent of Wien's law.

“If this is indeed the case”,
Einstein continued, “it is natural to investigate whether also the laws of generation and transformation of light are of such a kind as if the light would consist of such energy quanta.” He imagined
that these energy quanta, today called light quanta or photons, are “localized in space points” and “move without being split and can be absorbed or generated only as a whole.” Einstein continued his
argument by saying that the new concept of light is appropriate for the understanding of a number of peculiarities found when studying photoluminescence (Stokes’s rule), the photoelectric effect and
ionization of gases using ultraviolet radiation. Concerning the photoelectric effect we have to adopt the following picture: an incident light quantum immediately transfers all its energy to an
electron in the metal. The energy is partly used to release the electron from the metal, i.e. to perform the “exit work” A, while the remaining part is retained by the electron in the form of kinetic
energy. Mathematically this relation can be written as

\[ h\nu = \tfrac{1}{2} m v^2 + A, \tag{2.1} \]
where m is the mass of the electron and v is the velocity of the released electron. Because the number of elementary processes will be proportional to the number of incident light quanta, we expect a
linear increase of the number of released electrons per second with the light intensity, in agreement with Lenard’s observation. An important consequence of Equation (2.1) is that the kinetic energy
of the electrons – for a given material – depends on the frequency, but not on the intensity, of the incident light, and a minimum frequency, the so-called threshold frequency νt (determined by the
material constant A) must be exceeded to initiate the process. (The predicted linear increase of the kinetic energy with frequency for ν > νt later allowed the possibility of a very precise and
practical method for the measurement of h; more precisely, because the kinetic energy is measured using a compensating electric field, the ratio of Planck’s constant and the elementary electric
charge e could be determined.) Einstein finally convinced himself that Equation (2.1) gives the correct (i.e. that found by Lenard) order of magnitude for the kinetic energy – or, on recalculating,
for the voltage necessary to decelerate the electrons – when he inserted for the frequency the value of the ultraviolet boundary of the spectrum of the sun (and neglected in first approximation the
value of A). Herewith the observations of Lenard could be considered as “explained.” Einstein, however, was much more modest in his formulation: “Our concept and the properties of the light electric
effect observed by Mr. Lenard, as far as I can see, are not in contradiction.”
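Equation (2.1) is easy to evaluate numerically. The sketch below assumes an illustrative exit work of 2.3 eV (a typical alkali-metal figure, not a value quoted by Lenard or Einstein) and ultraviolet light of 300 nm wavelength:

```python
# Numerical sketch of the photoelectric relation h*nu = (1/2) m v^2 + A.
# The exit work ("work function") of 2.3 eV is an assumed, typical
# alkali-metal figure, not a value from the text.
H_PLANCK = 6.626e-34    # Planck's constant, J s
Q_E = 1.602e-19         # elementary charge, C
C_LIGHT = 2.998e8       # speed of light, m/s

A_eV = 2.3                                  # assumed exit work, eV
nu_threshold = A_eV * Q_E / H_PLANCK        # below this: no photoelectrons

lam = 300e-9                                # ultraviolet light, 300 nm
E_photon_eV = H_PLANCK * (C_LIGHT / lam) / Q_E
E_kin_eV = E_photon_eV - A_eV               # electron kinetic energy, eV
V_stop = E_kin_eV                           # stopping voltage, numerically equal in V

print(f"threshold frequency: {nu_threshold:.3e} Hz")
print(f"photon energy at 300 nm: {E_photon_eV:.2f} eV")
print(f"kinetic energy: {E_kin_eV:.2f} eV -> stopping voltage {V_stop:.2f} V")
```

The stopping voltage is exactly the decelerating voltage in Lenard's arrangement that just suppresses the photocurrent, which is why the measurement yields h/e rather than h itself.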
It underlines the admirable physical intuition of Einstein that he was also thinking about, as we would say, many-photon processes. In his opinion, deviations from the observed Stokes’ rule for
fluorescence could be found when “the number of energy quanta per unit volume being simultaneously converted is so large that an energy quantum of the light generated can obtain its energy from
several generating quanta.” Thus, Einstein was the first to consider the possibility of non-linear optics, which, through the development of powerful light sources in the form of lasers, became
reality and, with its wealth of effects, became an important discipline in modern physics. Only in a following paper did Einstein (1906) establish a relationship between the light quantum hypothesis
and Planck’s theory of black body radiation. He underlined that the physical essence of Planck’s hypothesis can be distilled into the following statement: the energy of a material oscillator with
eigenfrequency ν0 interacting with the radiation field can take on only discrete values which are integer multiples of hν0 ; it changes through absorption and emission stepwise in integer multiples
of hν0 . While the conservative Planck made every effort to reconcile this phenomenon with classical physics – later he confessed (Planck, 1943) that “through several years I tried again and again to
build the quantum of action somehow into the system of classical physics” – Einstein took Planck’s hypothesis physically seriously. The hypothesis eventually proved to be the first step on the road
towards a revolutionary rethinking in physics, which was finalized by the birth of quantum mechanics. Even though the light quantum hypothesis – in the form of Equation (2.1) – was later confirmed
experimentally by the careful measurements of Millikan (1916) (completely against his own expectations!), the question of its compatibility with many optical experiments (such as interference or
diffraction), comprehensible only with the concept of waves continuously filling the space, remained open. Einstein was well aware of this and saw the only way out (as he wrote in his paper of 1905)
“in relating the optical observations to time averaged values but not to instantaneous values”. However, today this argument is no longer convincing: it has been experimentally confirmed that
interference is possible even in situations when at each instant just one photon can be found in the whole apparatus (for instance in a Michelson interferometer) and the photon has to “interfere with
itself,” as concisely formulated by Dirac (1958). It seems that we cannot avoid also assigning wave properties to individual photons, and Einstein’s light quanta hypothesis leads ultimately to a
dualistic picture of light. The following chapters illustrate the ways in which the pioneering work of Einstein was deepened and broadened due to impressive experimental, technical
as well as theoretical progress. Before that, let us recall some of the fundamentals of the theory of light based on classical electrodynamics. Classical pictures will guide us through the study of
optical phenomena in the case of microscopic elementary processes, and they are, in the end, the criterion for what appears to us “intuitive” and hence “understandable” – or, on the contrary,
“paradoxical” or “inconceivable”.
3 Basics of the classical description of light
3.1 The electromagnetic field and its energy

The conclusion by Maxwell, based on theoretical considerations, that light is, by its character, an electromagnetic process, is surely a milestone in the
history of optics. By formulating the equations bearing his name, Maxwell laid the foundations for the apparently precise description of all optical phenomena. The classical picture of light is
characterized by the concept of the electromagnetic field. At each point of space, characterized by a vector r, and for each time instant t, we have to imagine vectors describing both the electric
and the magnetic field. The time evolution of the field distribution is described by coupled linear partial differential equations: the Maxwell equations. The electric field strength has a direct
physical meaning: if an electrically charged body is placed into the field, it will experience a force given by the product of its charge Q and the electric field strength E. (To eliminate a possible
distortion of the measured value by the field generated by the probe body itself, its charge should be chosen to be sufficiently small.) Analogously, the magnetic field strength H, more precisely the
magnetic induction B = µH (where µ is the permeability), describes the mechanical force acting on a magnetic pole (which is thought of as isolated). Also, the field has an energy content, or, more
correctly (because in a precise field theory we can think only about energy being distributed continuously in space), a spatial energy density. The dependence on the field strength can be found in a
purely formal way: starting from the Maxwell equations and applying a few mathematical operations, we find the following equation, known as the Poynting theorem (see, for example, Sommerfeld (1949)):

\[ \frac{\partial}{\partial t}\left(\frac{\varepsilon}{2}\mathbf{E}^2 + \frac{\mu}{2}\mathbf{H}^2\right) + \mathbf{E}\mathbf{J} + \operatorname{div}\mathbf{S} = 0, \tag{3.1} \]
where J is the electric current density and the Poynting vector, S, is the abbreviation for the vector product of the electric and magnetic field strengths:

\[ \mathbf{S} = \mathbf{E} \times \mathbf{H}. \tag{3.2} \]
To keep the analysis simple, we have assumed a homogeneous and isotropic medium with a dielectric constant ε and a permeability µ. Integrating Equation (3.1) over an arbitrary volume V, we find (using Gauss's theorem) the relation

\[ \frac{\partial}{\partial t}\int_V \left(\frac{\varepsilon}{2}\mathbf{E}^2 + \frac{\mu}{2}\mathbf{H}^2\right) \mathrm{d}^3r + \int_V \mathbf{E}\mathbf{J}\,\mathrm{d}^3r + \oint_O S_n\,\mathrm{d}f = 0, \tag{3.3} \]

where O is the volume's surface, df is an element of the surface and S_n is the normal component of
the Poynting vector S. The easiest term to interpret in Equation (3.1) is EJ. It is the work done by the electric field per unit time on an electric current (related to a unit of volume), and is
usually realized as heat. Due to this it is natural to interpret Equation (3.3) as an energy balance in the following sense: the rate of change of the electromagnetic energy stored in volume V is
caused by the work performed on the electric currents present and by the inflow (or outflow) of the energy from the considered volume. This means that the quantity

\[ u = \frac{\varepsilon}{2}\mathbf{E}^2 + \frac{\mu}{2}\mathbf{H}^2 \tag{3.4} \]
can be interpreted as the density of the electromagnetic field energy (split into an electric and a magnetic part) – in analogy to the deformation energy of an elastic medium – while the Poynting
vector represents the energy flow density. The picture we have formed is that the electromagnetic energy is deposited – in the form of a continuous spatial distribution – in the field. In addition,
there is also an energy flow (which plays a fundamental role primarily for radiation processes). It seems that the mathematical description of the energetic properties in the electromagnetic field is
uniquely rooted in the theory. In fact this holds only for the energy density. Equation (3.2) for the energy flow is, in contrast, not unique. From Equation (3.1) it is easy to see that the Poynting
vector can be supplemented by an arbitrary divergence-free vector field, i.e. a pure vortex field, without changing the energy balance equation, Equation (3.1). This non-uniqueness in the description
of the energy flow caused discussions as to which is the most “sensible” ansatz for the flow, and these did not subside until recent times. The discussion showed that the question is far from trivial
because, depending on the situation, one or the other expression seems better suited to “visualize” the situation. We do not want to go deeper into the problem; we just find it noteworthy that the
theory does not give us a unique picture of the energy flow. In fact, we can
find completely different representations of the energy flow which are physically equivalent. What the Poynting vector actually demands from our visualization can be easily illustrated by the example
of two orthogonal (“crossed”) static fields: an electric field and a magnetic field. According to Equation (3.2), energy flows continuously through space but cannot be “caught” because the outflow
and inflow from an arbitrary volume element are the same. An interesting aspect of the continuous distribution of energy in space – a concept found not only in Maxwell’s theory, but inherent in any
classical field theory – is the possibility of arbitrarily diluting the energy. For instance, we can send light into space using a searchlight which will have an angular aperture due to diffraction.
Hence, the energy contained in a fixed volume will become smaller and smaller as the light propagates further, and there is no limit for the process of dilution. This is in sharp contrast to what is
observable for massive particles. For instance, an electron beam is also diluted, i.e. the mean particle density decreases in a similar way to the electromagnetic energy density, but an
all-or-nothing principle applies to the measurement: either we find some particles in a given volume or we do not find any. With increasing dilution, the latter events become more frequent, and in these
cases the volume is absolutely empty. This discrepancy is eliminated if we consider the phenomenon in terms of Einstein’s light quanta hypothesis, and therefore also in terms of the quantum theory of
light, which ascribes particle as well as wave properties to light.
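As a quick consistency check of the energy density u = (ε/2)E² + (µ/2)H² and the Poynting vector S = E × H, one can verify numerically that for a plane wave in vacuum the energy streams at exactly the speed of light, |S| = cu. A minimal sketch (the field amplitude is arbitrary, not a value from the text):

```python
# Consistency check: for a plane wave in vacuum, E and H are orthogonal
# with H = sqrt(eps0/mu0) * E, and the energy flow |S| = E*H should
# equal c * u, i.e. the energy streams at the speed of light.
# The amplitude below is an arbitrary illustrative choice.
import math

EPS0 = 8.854e-12          # vacuum dielectric constant, F/m
MU0 = 4e-7 * math.pi      # vacuum permeability, H/m
c = 1.0 / math.sqrt(EPS0 * MU0)   # speed of light from Maxwell's relation

E = 100.0                         # electric field amplitude, V/m (arbitrary)
H = E * math.sqrt(EPS0 / MU0)     # plane-wave magnetic field strength, A/m

u = 0.5 * EPS0 * E**2 + 0.5 * MU0 * H**2   # energy density u
S = E * H                                  # |E x H| for orthogonal fields

assert math.isclose(S, c * u, rel_tol=1e-9)
print(f"u = {u:.3e} J/m^3, |S| = {S:.3e} W/m^2, |S|/u = {S/u:.4e} m/s")
```

Note that the electric and magnetic halves of u are equal for a plane wave, which is why the ratio |S|/u comes out as exactly 1/√(ε₀µ₀) = c.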
3.2 Intensity and interference

It is interesting to ask what the optically measurable physical quantities are when we view the electromagnetic field from the perspective of optics (this includes
visual observations). These quantities obviously cannot be the electric or the magnetic field strengths themselves because they are rapidly varying in time, and no observer would be able to follow
these high frequency oscillations – the duration of an oscillation is about \(10^{-15}\) s. What really happens during the registration of optical phenomena – in a photographic plate or in the eye –
is, at least primarily, the release of an electron from an atomic structure through the action of light. Such a photoelectric effect, as described in more detail in Section 5.2, is described by the
time averaged value of the square of the electric field strength1 at the position of the particular detecting atom, i.e. by the variable called the intensity:

\[ I(\mathbf{r}, t) = \frac{1}{2}\,\frac{1}{T} \int_{t-T/2}^{t+T/2} \mathbf{E}^2(\mathbf{r}, t')\,\mathrm{d}t'. \tag{3.5} \]

1 Very often the intensity is identified, apart from a normalization factor, with the absolute value of the time averaged Poynting vector (compare with Born and Wolf (1964)). This definition is equivalent to Equation (3.5) for running waves, but it fails in the case of standing waves. Then the time averaged Poynting vector vanishes; nevertheless, a photographic plate is blackened at the positions of the maxima of the electric field strength, an effect known from Lippmann color photography.
The averaging extends over a duration of at least several light periods, and the factor of 1/2 was introduced for convenience to avoid the presence of factors of 2 (see Equation (3.10)). To calculate the average it is convenient to decompose the electric field strength, generally written in the form of a Fourier integral,

\[ \mathbf{E}(t) = \int_{-\infty}^{\infty} \mathbf{f}(\nu)\, e^{-2\pi \mathrm{i}\nu t}\,\mathrm{d}\nu, \tag{3.6} \]

into the positive and the negative frequency parts,

\[ \mathbf{E}(t) = \mathbf{E}^{(+)}(t) + \mathbf{E}^{(-)}(t), \tag{3.7} \]

where

\[ \mathbf{E}^{(+)}(t) = \int_{0}^{\infty} \mathbf{f}(\nu)\, e^{-2\pi \mathrm{i}\nu t}\,\mathrm{d}\nu \tag{3.8} \]

and

\[ \mathbf{E}^{(-)}(t) = \int_{-\infty}^{0} \mathbf{f}(\nu)\, e^{-2\pi \mathrm{i}\nu t}\,\mathrm{d}\nu = \int_{0}^{\infty} \mathbf{f}^{*}(\nu)\, e^{2\pi \mathrm{i}\nu t}\,\mathrm{d}\nu = \mathbf{E}^{(+)*}(t), \tag{3.9} \]

and we note that f(−ν) = f*(ν) because E is real. When the distribution of frequencies is very narrow compared with the central frequency – we refer to this as quasi-monochromatic light – Equation (3.5) for the intensity reduces to the simple form

\[ I(\mathbf{r}, t) = \mathbf{E}^{(-)}(\mathbf{r}, t)\,\mathbf{E}^{(+)}(\mathbf{r}, t). \tag{3.10} \]
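The reduction of the half time average of E² to E⁽⁻⁾E⁽⁺⁾ can be checked numerically for a strictly monochromatic field. A small sketch (amplitude and frequency are arbitrary illustrative choices):

```python
# Check that (1/2) * time average of E^2 equals E(-)E(+) for a
# monochromatic field E(t) = A cos(omega t): here E(+) = (A/2) e^{-i omega t},
# so E(-)E(+) = A^2/4. Midpoint-rule integration over 50 full periods.
# Amplitude and frequency are arbitrary illustrative choices.
import math

A = 2.0                        # field amplitude (arbitrary units)
omega = 2 * math.pi * 5e14     # an optical angular frequency, rad/s
T = 50 * 2 * math.pi / omega   # averaging window: 50 periods

N = 100_000                    # integration steps
dt = T / N
avg = 0.5 / T * sum(
    A**2 * math.cos(omega * (k + 0.5) * dt)**2 * dt for k in range(N)
)

expected = A**2 / 4            # E(-)E(+) for this field
assert math.isclose(avg, expected, rel_tol=1e-6)
print(f"(1/2)<E^2> = {avg:.6f}, E(-)E(+) = {expected:.6f}")
```

The fast-oscillating terms E⁽⁺⁾² and E⁽⁻⁾² average to zero over the window, which is exactly why only the cross term E⁽⁻⁾E⁽⁺⁾ survives in Equation (3.10).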
There is a fundamental difference between detection methods in acoustics and optics. Sound waves cause mechanical objects (acoustic resonators, the ear-drum) to oscillate, whereas the detection of
optical signals is realized through a process non-linear in the electric field strength; a kind of rectification takes place – the conversion of an alternating electric field of extraordinarily high
frequency into a time-constant or slowly varying photocurrent. This is the reason for the fundamental
difference in our aesthetic abilities of perception of tone on the one hand and of colors on the other. A phenomenon that is conditioned mainly by the wave nature of light is the interference effect.
From a formal point of view it is possible to say that the reason for the interference is the linearity of Maxwell’s equations, implying that the sum of two solutions is again a solution. Physically
speaking, this superposition principle means the following: for two incident waves, the electric (magnetic) field strength in the area of their overlap is given by the sum of the electric (magnetic)
field strengths of the two waves. Let us consider the simple case of interference of two linearly polarized (in the same direction) plane waves with slightly different frequencies and propagation
directions; for the positive frequency part of the electric field strength of the total field we have

\[ \mathbf{E}^{(+)}(\mathbf{r}, t) = A_1 \mathbf{e}\, \exp\{\mathrm{i}(\mathbf{k}_1\mathbf{r} - \omega_1 t + \varphi_1)\} + A_2 \mathbf{e}\, \exp\{\mathrm{i}(\mathbf{k}_2\mathbf{r} - \omega_2 t + \varphi_2)\}, \tag{3.11} \]

where A_j are (real) amplitudes, e is a unit vector indicating the polarization direction, k_j is the wave number vector, ω_j is the angular frequency and φ_j is the (constant) phase of the partial wave j (= 1, 2). Equation (3.11) can be rewritten as

\[ \mathbf{E}^{(+)}(\mathbf{r}, t) = A_1 \mathbf{e}\, \exp\{\mathrm{i}(\mathbf{k}_1\mathbf{r} - \omega_1 t + \varphi_1)\}\bigl[1 + \alpha \exp\{\mathrm{i}(\Delta\mathbf{k}\,\mathbf{r} - \Delta\omega\, t + \Delta\varphi)\}\bigr], \tag{3.12} \]

where we have used the abbreviations α = A2/A1, Δk = k2 − k1, Δω = ω2 − ω1 and Δφ = φ2 − φ1. Equation (3.12) describes a wave process differing from an ideal plane wave in that the amplitude is modulated spatially and temporally. As already noted in the introduction, the electric field strength is not directly observable in the optical domain. What is in fact possible to see, photograph or register in other ways is the intensity (Equation (3.10)), which can be expressed using Equation (3.11) as

\[ I(\mathbf{r}, t) = A_1^2 + A_2^2 + 2 A_1 A_2 \cos(\Delta\mathbf{k}\,\mathbf{r} - \Delta\omega\, t + \Delta\varphi), \tag{3.13} \]

and can also be written, because \(A_j^2\) are the intensities \(I_j\) of the waves j (= 1, 2), as

\[ I(\mathbf{r}, t) = I_1 + I_2 + 2\sqrt{I_1 I_2}\,\cos(\Delta\mathbf{k}\,\mathbf{r} - \Delta\omega\, t + \Delta\varphi). \tag{3.14} \]

Obviously, the third term on the right hand side of either Equation (3.13) or Equation (3.14) is responsible for the interference. Light incident on an observation screen is detected by the eye as a system of equidistant bright and dark fringes. The contrast becomes stronger as the difference between the intensities I1, I2 becomes
smaller. In the special case of I1 = I2, the intensity in the middle of the dark stripes is equal to zero. We have to note that the described interference pattern shifts in time (it drifts away, so to speak) when the frequencies do not match exactly. Only for Δω = 0 is a static – and hence also photographically observable – interference pattern present. In the case Δω ≠ 0, the intensity, observed at a fixed point, is modulated by the frequency Δν = Δω/2π. Thus, “beat phenomena” can be observed with the help of a photocell, as will be discussed in more detail in Section 5.2.

3.3 Emission of radiation

One of the most important results discovered (purely mathematically) by Maxwell is that the fundamental equations of electrodynamics allow wave-type solutions. The wave propagation velocity
was determined by Maxwell, with the help of his ether theory and a mechanical analogy, to be \(c = 1/\sqrt{\varepsilon\mu}\). This quantity had already been determined by Weber and Kohlrausch, for the vacuum, through
electrical measurements. The surprisingly accurate agreement between the found value and the measurement results of Fizeau for the velocity of light in the vacuum led Maxwell to his conclusions
regarding the electromagnetic nature of light (Faraday had already guessed at a connection between light and electricity), but it was Heinrich Hertz who generated electromagnetic waves using an
electric method for the first time, thus verifying directly through experiment an essential prediction of the theory. The simplest model of an emitter of electromagnetic energy into free space is
the so-called Hertzian dipole. It is represented by two spatially separated point charges of opposite sign, one of which is able to move along the line connecting the two charges. Applying an
external force to one of the charges causes it to undergo a back and forth motion. This implies a time dependent change of the dipole moment D = Qa, where Q is the absolute value of the charge and a
is the distance vector pointing from the negative to the positive charge. The time derivative of the dipole moment represents an electric current taking over the role of a source term in Maxwell’s
equations. The theoretical treatment of the radiation problem leads to simple expressions for the electromagnetic field at larger distances from the source, the so-called far field zone. There the
electric and the magnetic fields are determined only by the second order time derivative of the electric dipole moment, i.e. through the acceleration of the moving charge, and it is crucial to note
that the value of the field strength at point P and time t is determined by the value of the acceleration at a time earlier by Δt = r/c (r is the distance between the source and the point of
observation and c is the velocity of light). This effect of “retardation”
illustrates the fact that an electromagnetic action propagates with the velocity of light. In particular it can be shown that E and H are mutually orthogonal and are also orthogonal to the radius
vector having its origin at the light source. In addition the field strengths are proportional to sin θ , where θ is the angle between the position vector (radiation direction) and the dipole
direction. The absolute value of the Poynting vector (pointing in the radiation direction) is

\[ S = \frac{\sin^2\theta}{16\pi^2 \varepsilon_0 c^3 r^2}\, \ddot{D}^2, \qquad (3.15) \]

where ε₀ is the dielectric constant of the vacuum. (See, for example, Sommerfeld (1949).) The 1/r² decrease of the energy flow is easily understood: it follows from this that the energy flow per second
through a spherical surface is always the same when the propagation of a certain wavefront is followed through space and time, as is required from the energy conservation law. The angle dependence in
Equation (3.15) is a typical radiation characteristic: a dipole does not radiate at all in its oscillation direction but has maximum emission in the orthogonal direction. Similarly, the sensitivity
of energy absorption by an antenna is dependent on the incident angle of the incoming wave. (We have probably all experienced this phenomenon when trying to improve a TV signal by means of
repositioning the external aerial.) The physical reason for the behavior of the receiver antenna is obvious. Only the electric field component in the dipole direction is able to start the oscillation
of the dipole. The interaction is strongest when the electric field strength coincides in direction with the dipole oscillation, and this applies also for emission. Because the electromagnetic field
is transverse, this implies an incident or emitted radiation directed orthogonally to the dipole oscillation. Let us return to the emission of a Hertzian dipole! Of particular importance is the case
of a sinusoidal time dependence of the external driving force. A typical example is a radio transmitter. Under such circumstances, the dipole executes harmonic oscillations, as does the emitted
electromagnetic field, which is consequently monochromatic. If we consider the energy, we see that the oscillating dipole is continuously losing energy – we refer to this as “radiation damping”,
which has an attenuating effect on the moving charge. The lost amount of energy must be continuously compensated for by work done by the driving force on the moving charge to guarantee the
stationary, monochromatic emission. Let us denote the frequency of the field by ν; then we obtain for the time averaged amount of energy emitted per unit time into the solid angle dΩ = sin θ dθ dϕ the expression

\[ \bar{S}\, d\Omega = \frac{\pi^2 D_0^2 \nu^4}{2 \varepsilon_0 c^3}\, \sin^3\theta\, d\theta\, d\varphi, \qquad (3.16) \]
where D₀ is the amplitude of the dipole oscillation. Equation (3.16) holds not only for macroscopic antennas emitting radio- and microwaves, but is applicable also to microscopic oscillators like
atoms and molecules. We have in fact to conceive (non-resonant) scattering of light on atomic objects, so-called Rayleigh scattering, as follows: the incident radiation induces in the molecules
dipole moments oscillating with the light frequency, which in turn radiate not only in the forward direction but, according to Equation (3.16), also sideways (thus bringing about the scattering of
light). One consequence of the ν 4 law is demonstrated literally in front of our eyes on sunny days. The sky appears blue because of the stronger scattering of blue light compared with that of red
light on the molecules in the air. This explanation, stemming from Lord Rayleigh, is, however, not the whole story. Smoluchowski and Einstein realized that irregularities in the spatial distribution
of molecules play an essential role. These irregularities prevent the light scattered to the sides from being completely extinguished through the interference of partial waves from individual
scattering centers.
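The ν⁴ dependence can be made quantitative with a quick estimate; the wavelengths below are illustrative round numbers of our own choosing, not values from the text.

```python
# Relative Rayleigh scattering strength of blue vs. red light from the nu^4 law.
c = 2.998e8                      # velocity of light, m/s
lam_blue, lam_red = 450e-9, 650e-9
nu_blue, nu_red = c / lam_blue, c / lam_red
ratio = (nu_blue / nu_red) ** 4  # equals (lam_red / lam_blue)^4, roughly 4.4
print(ratio)
```

So blue sunlight is scattered roughly four times more strongly than red, which is the qualitative content of Rayleigh's explanation of the blue sky.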
3.4 Spectral decomposition

In contrast to the situation for a radio transmitter, in the Hertz experiment we are dealing with a strongly damped (rapidly attenuated in time) dipole oscillation
generated with the aid of a spark inductor. We find such processes also in the atomic domain, for instance in the spontaneous emission case. The simplest model that is applicable to this situation is
that of an oscillator, an object able to oscillate with frequency ν, which is put into oscillation through a short excitation (for instance, an electron kick). The emission associated with this
process causes an exponential attenuation of the dipole oscillation amplitude. The oscillation comes to rest after a finite time, and a pulse of finite length is emitted. If we observe such a pulse
from a fixed position, we see that the electric and the magnetic field strengths – after the wavefront has reached the observer – decrease exponentially. Such a pulse cannot be monochromatic. This is
easy to see if we consider the spectral decomposition of the electric field strength as a function of time, i.e. if we write for the positive frequency part

\[ E^{(+)}(t) = E_0\, e^{-2\pi i \nu_0 t - \frac{\kappa}{2} t} = \int_0^\infty f(\nu)\, e^{-2\pi i \nu t}\, d\nu, \qquad (3.17) \]
where we have assumed, for simplicity, linearly polarized light. E is the electric field strength in the polarization direction. Next we assume that the field E(t) is zero for t < 0. The Fourier
theorem applied to Equation (3.17) yields

\[ f(\nu) = \int_0^\infty E^{(+)}(t)\, e^{2\pi i \nu t}\, dt, \qquad (3.18) \]

and explicitly for our case

\[ f(\nu) = \frac{2E_0}{\kappa + 4\pi i(\nu_0 - \nu)}, \qquad (3.19) \]

i.e. the radiation has a Lorentz-type frequency distribution

\[ |f(\nu)|^2 = \frac{4|E_0|^2}{\kappa^2 + [4\pi(\nu_0 - \nu)]^2}, \qquad (3.20) \]
with the halfwidth

\[ \Delta\nu = \frac{\kappa}{2\pi}. \qquad (3.21) \]

We speak of an emission "line" of width Δν. Noting that Δt = κ⁻¹ characterizes the duration of the emitted pulse (measured by the intensity dependence at a fixed point), we can rewrite Equation (3.21) in the form

\[ \Delta\nu \cdot \Delta t \approx \frac{1}{2\pi}. \qquad (3.22) \]
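The reciprocal relation between decay rate and linewidth is easy to check numerically. The sketch below is our own illustration with arbitrary parameter values (nu0, kappa, record length): it Fourier-transforms a damped oscillation and measures the full width at half maximum of the resulting Lorentzian line, which should come out close to κ/2π.

```python
import numpy as np

# Damped oscillation E(t) = exp(-2*pi*i*nu0*t - kappa*t/2) for t >= 0,
# sampled finely enough that the FFT approximates the continuous transform.
nu0, kappa = 200.0, 10.0      # carrier frequency and decay rate (arbitrary units)
T, N = 40.0, 2 ** 17          # record length and number of samples
t = np.arange(N) * (T / N)
E = np.exp(-2j * np.pi * nu0 * t - 0.5 * kappa * t)

# f(nu) ~ integral of E(t) exp(+2*pi*i*nu*t) dt; np.fft.ifft uses the +i sign
# convention, so ifft scaled by T approximates the transform on the grid k/T.
f = np.fft.ifft(E) * T
nu = np.fft.fftfreq(N, d=T / N)
P = np.abs(f) ** 2

# Full width at half maximum of the line, compared with kappa / (2*pi).
fwhm = np.ptp(nu[P >= 0.5 * P.max()])
print(fwhm, kappa / (2 * np.pi))   # both close to 1.59
```

A larger κ (faster decay, shorter pulse) broadens the line in exactly this proportion, which is the content of Equation (3.22).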
In this form the relation has a general validity for pulses of finite duration; we must assume, however, that the phase of the electric field does not change significantly within Δt. If this should
occur however – there might even be uncontrollable phase jumps, for instance caused by the interaction of the dipole with the environment – the right-hand side of Equation (3.22) can become
considerably larger than 1/2π . In general, the relation has to be understood in such a way that it defines the minimum value of the linewidth for a pulse of finite duration. An interesting physical
consequence of Equation (3.22) is that the Fourier decomposition of a pulse in a spectral apparatus leads to partial waves which, due to their smaller frequency widths, are considerably longer than
the incident wave. How this is achieved in the spectral apparatus is easily illustrated by the example of a Fabry–Perot interferometer. The device consists of an “air plate” formed by two silver
layers S1 and S2 deposited on two parallel glass plates (Fig. 3.1). A beam incident on layer S1 under a certain angle is split by S1 into a reflected part and a transmitted part. The
Fig. 3.1. Path taken by the rays in a Fabry–Perot interferometer (S1 and S2 are the silver layers). The rays refracted at S1 are left out for simplicity.
transmitted beam is incident on S2 ; the part of the beam reflected on S2 is split on the first layer again, and the process continues. The result is the formation of a whole sequence of partial
beams that have experienced different numbers of passages between the layers S1 and S2. Two neighboring beams differ in their amplitudes by a constant factor, and the path difference Δs between them
is also the same and is determined by the geometry of the setup. The superposition of the partial waves yields the total outcoming radiation. Its amplitude reaches its maximum value – this means the
transmittivity of the interferometer is maximal – when Δs is an integer multiple of the wavelength of the light. The transmission curve of the Fabry–Perot etalon, as a function of frequency, shows a
sequence of maxima with a halfwidth δν determining the resolution of the apparatus. For the sake of simplicity, let us assume the spectrum of the incident pulse to be narrow enough to be localized
around one of the transmission maxima but to be broad compared with δν; in such a case the interferometer cuts a narrow frequency interval from the spectrum of the incident pulse. According to
Equation (3.22) this is associated with a stretching of the pulse. How this happens is now easily understood: each run of the light back and forth between the layers S1 and S2 leads to a time delay
and – in the case of a pulse – also to a mutual spatial shift of the partial beams: they are increasingly lagging behind the more often they have been reflected between S1 and S2 . The spectral
decomposition of light is achieved in such a way that the light to be analyzed is incident as a focused beam. Because the path difference Δs depends on the angle of incidence, the photographic image
of the outcoming light shows a ring structure, and there is a well defined relation between the ring radius and the light frequency. The formation of the mutually interfering partial waves requires a
finite time δt, which – for high resolution of the spectral apparatus, i.e. for drastic pulse stretching – is essentially equal to the duration of the outcoming pulse. Its
linewidth satisfies Equation (3.22), and because it coincides with the transmission width Δν, Equation (3.22) can be interpreted in such a way that Δν is the precision of the frequency measurement and Δt is the minimum measurement duration. It is not a coincidence that Equation (3.22) in this interpretation represents a special case of the general quantum mechanical uncertainty relation between the precision ΔE of an energy measurement and its duration Δt (Landau and Lifschitz, 1965):

\[ \Delta E \cdot \Delta t \ge \frac{h}{2\pi} \equiv \hbar, \qquad (3.23) \]

as long as, according to the photon concept, we identify the quantity hν (h is Planck's
constant) with the energy E of a single photon. Finally, let us mention that an atom can be “cheated” into experiencing a line broadened radiation field by limiting its interaction time. This occurs,
for instance, when the atom passes through a field filling a finite space R. The atom "sees" a pulse of finite length, and according to Equation (3.22) – in which we now have to identify Δt with the time of flight through R – the spectrum appears to the atom to be broadened, even though the field might be monochromatic. In such cases we speak of the time of flight broadening of an absorption line.
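Returning to the Fabry–Perot interferometer: its transmission behavior is captured by the standard Airy formula for an ideal lossless etalon at normal incidence. The reflectivity R and plate spacing d below are our own illustrative parameters, not values from the text.

```python
import numpy as np

# Airy transmission of an ideal lossless Fabry-Perot at normal incidence:
# T(nu) = 1 / (1 + F * sin^2(delta/2)), with round-trip phase
# delta = 2*pi*nu*(2*d)/c and coefficient of finesse F = 4R/(1-R)^2.
c = 2.998e8
d, R = 0.01, 0.9                 # plate spacing (m) and mirror reflectivity
F = 4 * R / (1 - R) ** 2

def transmission(nu):
    delta = 2 * np.pi * nu * (2 * d) / c
    return 1.0 / (1.0 + F * np.sin(delta / 2) ** 2)

fsr = c / (2 * d)                # free spectral range: spacing of the maxima
print(transmission(fsr))         # a transmission maximum, close to 1
print(transmission(1.5 * fsr))   # between maxima almost no light is transmitted
```

Increasing R increases F and hence sharpens the maxima, which is what gives the interferometer the narrow transmission width δν responsible for its high spectral resolution.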
4 Quantum mechanical understanding of light
4.1 Quantum mechanical uncertainty

After reviewing the main characteristics of the classical description of light, let us discuss those aspects of the quantization of the electromagnetic field which
are of relevance for the analysis of the phenomena we are interested in. It seems a reasonable place to start to make clear the fundamental difference between the classical and the quantum mechanical
description of nature; we will come across this difference many times when discussing experiments, and it will often give us a headache. We have to deal with the physical meaning of what is called
uncertainty. The starting point of the classical description is the conviction that natural processes have a “factual” character. This means that physical variables such as the position or momentum
of a particle have, in each single case, a well defined (in general, time dependent) value. However, it will not always be possible to measure all the appropriate variables (for instance, the
instantaneous electric field strength of a radiation field); furthermore under normal circumstances we are able to measure only with a finite precision. Hence the basic credo of classical physics
should be given in the following form: we are justified in imagining a world with variables possessing well defined values which are not known precisely (or not known at all). In doing this we are
not forming any conclusions that contradict our everyday experiences. This is the fundamental concept of classical statistics: we are satisfied with the use of probability distributions for the
variables we are interested in, not from fundamental but purely practical reasons. The necessity is even turned into an advantage in the case of many-particle systems, for instance a gas; what would
be the benefit of an absolutely detailed description of, say, 10²³ particles even if our brain could digest such an overwhelming amount of information? We can make the rather general statement that
the uncertainty of classical physics is equivalent to a certain amount of ignorance with respect to an otherwise well defined situation.
A detailed study of the micro-cosmos, however, reveals that the classical reality concept is not applicable. The “factual” must be supplemented with a new category: the “possible.” More precisely
speaking, in addition to the uncertainty known from classical physics, there is another, specifically quantum mechanical, uncertainty. It is deeply rooted in the foundations of quantum mechanics and
it is best known from Heisenberg’s uncertainty relation. That the quantum mechanical description of nature is not “intuitive” is most often attributed to this uncertainty. What is the essence of this
quantum mechanical uncertainty? The only precise statement we can make is what the uncertainty is not – namely plain ignorance. Quantum mechanics resorts to a mathematically precise formulation of
the new uncertainty concept (which completely satisfies the pragmatist), but it does not provide any clues as to how to form a concrete definition of it. Let us illustrate the problem with a simple
example. Consider an ensemble of identical atoms whose energy is not “sharp” in the quantum mechanical sense. (We can prepare such a state by applying a resonant coherent field, which is generated,
for instance, by a laser.) Let us assume, for simplicity, that only two levels, 1 and 2, with energies E₁ and E₂ are involved. Quantum mechanics assumes (rightly, as the excellent agreement between
prediction and experiment shows) that it is wrong to interpret the aforementioned uncertainty in such a way that a certain percentage of the atoms is in the upper level and the remaining part is in
the lower one. Instead we have to assume that there is no difference between the physical states of the individual atoms.1 The quantum mechanical language for such a situation is the term “pure
state." Mathematically, the ensemble of all atoms is described by a single wave function in the form of a superposition:

\[ \psi = \alpha(t)\,\psi_1 + \beta(t)\,\psi_2, \qquad (4.1) \]

where ψ₁ and ψ₂ describe the eigenfunctions corresponding to the energy levels 1 and 2, and α and β are complex numbers satisfying the normalization condition |α|² + |β|² = 1. When we try to translate
Equation (4.1) into plain language, we obtain either “the atoms are in the upper and lower levels simultaneously,” or “both of the energy levels are possible, but neither of them is a fact.” On the
other hand, the specifically quantum mechanical uncertainty is a prerequisite for us to be able to assign to the atoms a coherently oscillating dipole moment (in the sense of a quantum mechanical
expectation value) (Paul, 1969). The atomic dipole moment is in a kind of complementary relation to the energy: it vanishes when the atom is in a state of well defined energy.

¹ More precisely, we should say: however the ensemble of atoms is split into two ensembles, the atoms always behave in the same way with respect to the measurement of any physical variable (observable).

The individual dipole
moments (in the laser, for example, they are induced by the electric field strength residing at the positions of the atoms) add up to a macroscopic polarization oscillating with the frequency of the
radiation field, which plays an important role as a source of the laser radiation. (It represents the exact analog of the antenna current of a radio transmitter.) The transition from the “possible”
to the “factual” or from “one as well as the other” to “one or the other” is realized when an energy measurement is performed, i.e. after a physical interaction. According to the axiomatic
description of the quantum mechanical measurement process, a “reduction” of the wave function takes place. It means that the considered ensemble of atoms is turned into an ensemble corresponding to
the classical uncertainty concept, which is characterized by a fraction |α|² of atoms with energy E₁ while the remaining fraction |β|² of atoms has energy E₂. This new ensemble is – in contrast to
the original ensemble described by the “pure state” (4.1) – a “statistical mixture” of two sub-ensembles with different atomic energies. A real separation of the two sub-ensembles is obtained by
sorting the atoms according to the measured values E₁ and E₂. As already mentioned, the expectation value of the dipole moments vanishes for such an ensemble. This means that the energy
measurement destroys completely the macroscopic polarization of the medium. Physically there is a considerable difference whether we deal with atoms prepared in a pure state (described by the wave
function of the form of Equation (4.1)) or with the abovementioned statistical mixture of atoms. The first case is realized in a laser medium (in laser operation), while the second case corresponds
to conventional (thermal) light sources. The difference between the two situations is clearly illustrated by the properties of the emitted radiation (for details, see Sections 8.2 and 8.3). Our
example of an ensemble of identical atoms illustrates nicely the disturbing effects of a measurement on a system’s dynamics. Let us assume that the atoms are initially in the lower state 1. The
intense coherent field induces a transition from 1 to 2, and the theory states that – in the case of resonance – at a certain moment t_A, all the atoms are with certainty in the excited state 2.
However, what happens when, from time to time, we check exactly where the individual atoms are, i.e. which level they are in? Our curiosity has dramatic consequences: the corresponding measurement
leads to an abrupt interruption of the coherent interaction with the external field, and each of the atoms – according to the probability determined by the wave function at the corresponding instant
– is put into either the initial or the excited state. In the first case the interaction has to start from “scratch” while in the latter case an evolution sets in, with the tendency to bring the
atoms back into the lower energy state. The massive interruption of the evolution process has the consequence, as the quantum mechanical calculation shows, that at time t_A not all of the atoms by
far will be found in the upper state. Moreover, the number of atoms
being excited at that time steadily decreases the more frequently the measurement is repeated, until it vanishes completely. Hence, the process of evolution is increasingly hindered the more often we
“check.” This phenomenon – experimentally verified recently using methods of resonance fluorescence (Itano et al., 1990; see Section 6.1) – resembles the well known Zeno paradox of the flying arrow
(which is at each instant located at a well defined position and “consequently” is always at rest) and is often called the quantum Zeno paradox. Based on the given arguments, we must conclude that we
have to take the specifically quantum mechanical uncertainty seriously. Our thinking, which has been conditioned by our preoccupation with classical physics, is completely helpless when faced with
the new uncertainty of quantum theory.2 However, there are certain analogies with classical electrodynamics, because the superposition principle is valid in both theories. For example, the radiation
field need not be monochromatic. Generally, the radiation field is a superposition of waves with different frequencies, and the frequency of the wave itself is undetermined in the sense that it can
be the frequency of this particular wave as well as that of the other wave in the superposition. Also, the superposition principle plays an important role in the polarization properties of light. Let
us consider, for instance, linearly polarized light and ask which circular polarization is present. Our answer is that because linearly polarized light can be considered as a superposition of a left
handed circularly and a right handed circularly polarized wave, both polarizations are simultaneously present with equal probability. We should bear in mind, however, that the uncertainty in the
classical description of the properties of the electromagnetic field does not contradict the principles of classical physics, which state that all physical processes are “factual.” In fact, the
electric field strength as the primary physical variable in the above example is well defined with respect to magnitude and direction at each moment, and we arrive at the conclusion that one physical
variable has simultaneously different values only when questions of the type "What happens when light passes through a spectral apparatus?" are posed. We might ask whether a similar situation
is present in quantum mechanics. Is the quantum mechanical description in reality incomplete, as was advocated by Einstein, Podolsky and Rosen (1935)? Are its generally statistical predictions simply
the result of our ignorance of the exact details of the microscopic processes which we are not able to observe with our (necessarily macroscopic) measurement apparatus?

² Bohr had already pointed out that we are familiar with such a state of uncertainty from our everyday life: when we face a difficult decision, we have a clear feeling that the different possibilities are in a kind of "simultaneous presence," from which just one is chosen to become real by our will.

Is it not possible, following the example of classical physics, at least to imagine a more detailed theory from which the statistical predictions of quantum
mechanics would follow similarly to how statistical physics is derived from classical mechanics? The possibility of the existence of such a theory, supplemented with additional parameters not
accessible to observation and hence called “hidden variables,” was denied by the majority of quantum theoreticians. It was Bell (1964) who supplied a convincing proof that this opinion is indeed
correct. He managed to show in a surprisingly simple way that there are certain experiments for which quantum mechanical predictions will be distinctly different from predictions of any deterministic
hidden variables theory – at least when we exclude the possibility that physical action can propagate faster than light (for details, see Section 11.3). Hence, there is no hope that the specifically
quantum mechanical uncertainty can be done away with by a further development of the theory, however clever it might be. Rather, it shows a qualitatively new aspect of reality. Let us now turn to the
concrete predictions of quantum theory of radiation which are of particular interest to optics.

4.2 Quantization of electromagnetic energy

Fortunately, it is possible to apply without great
difficulty the quantization methods for material systems to the radiation field if we consider the eigenoscillations, or modes, of the field. The eigenoscillations emerge quite naturally when we
consider a field confined in a resonator with ideally reflecting walls. Physical boundary conditions guarantee that only certain spatial field distributions can form – the infinite conductivity of
the resonator wall material forces the tangential components of the electric field strength to vanish, i.e. the field must have nodes at the resonator walls – and such eigenoscillations oscillate
with discrete frequencies (called eigenfrequencies) determined by the geometry of the resonator. The excitation of an eigenoscillation is described by a phase and an amplitude, both of which can be
cast into a complex amplitude. Any arbitrary excitation state is generally given by a superposition of such oscillations. This type of “mode picture” is convenient for the quantum mechanical
description of the field because each mode can, by itself, be quantized. The individual modes correspond to independent degrees of freedom of the field; in particular, the field energy is given as
the (additive) sum of the energies of each of the modes. From a physical point of view, however, the picture of a field enclosed in a “box” is realistic only for the case of microwaves. In optics we
deal primarily with free fields which propagate undisturbed – hence they are running waves and not standing waves – and are only altered later through optical elements or by other interaction with
matter. To be able to use the convenient mode picture we resort to a trick: we apply fictitious boundary conditions to the free field. We require the
field to be periodic in all three spatial directions with a (large) period length L. The aforementioned resonator volume is replaced by a “periodicity cube” with an edge length L. The modes selected
by the boundary conditions are monochromatic running plane waves differing in propagation direction, frequency and polarization. (The end points of the wave number vectors form a cubic lattice with a
lattice constant 2π/L.) Because the length L does not have any physical meaning, we can eliminate it by calculating first with a fixed L value and then, once we have the final result, performing the
limit L → ∞. What is accomplished by this procedure is the description of the radiation field by a countably infinite manifold of degrees of freedom. This is realized by the selection of discrete
modes out of the actually present continuum of modes (represented by a continuous distribution of wave vectors). Our task now is to describe quantum mechanically a single radiation mode. It turns out
that this problem has already been completely solved quantum mechanically for material systems; the results can simply be taken over. A single mode of the radiation field is formally identical to a
single (one-dimensional) harmonic oscillator. This is easily seen because a field excited in a single eigenoscillation will oscillate harmonically. This means that the time dependent (complex)
amplitude has the form

\[ A(t) = A_0\, e^{-i\omega t} = |A_0|\, e^{-i(\omega t - \varphi)}, \]

and its real part x = |A₀| cos(ωt − ϕ) satisfies the equation

\[ \ddot{x} + \omega^2 x = 0. \]
This is simply the equation of motion of a harmonic oscillator, with x being the displacement of the particle from the equilibrium position. Let us note that the radiation mode is the only known
exact realization of a harmonic oscillator; the mechanical harmonic oscillator – based on the validity of Hooke’s law of a restoring force proportional to the displacement – can be viewed as an
approximation valid only for small displacements. The fundamental result of the quantum mechanical description of the harmonic oscillator is that its energy is quantized in the form of equidistant
steps (Fig. 4.1). The same applies also to the energy of the radiation mode which we have to envisage as being distributed over the whole mode volume according to the spatial distribution of the
field intensity. Counting the energy from the lowest level, the energy Eₙ of the nth excitation state is an nth multiple of a constant, hν:

\[ E_n = n h \nu \qquad (n = 0, 1, 2, \ldots) \]
Fig. 4.1. Energy levels of a harmonic oscillator. The parabolic curve represents the potential energy as a function of the displacement x.
(see Section 15.1). We can say that when the quantum mechanical state with energy E n is realized, n “energy packets” of magnitude hν (photons) exist in the mode volume. According to the different
field modes, there are different sorts of photons. Hence, a state of sharp energy of the total field is characterized by an infinite set of photon numbers which are related to the particular modes.
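To get a feeling for the size of the energy steps hν of an optical mode, one can count how many photons make up one joule of light; the frequency below is our own example value for green light, not a number from the text.

```python
# Energy quantization E_n = n*h*nu for a single mode: the photon energy h*nu
# and the photon number corresponding to 1 J at an optical frequency.
h = 6.62607e-34           # Planck's constant, J*s
nu = 5.4e14               # roughly green light, Hz
photon_energy = h * nu    # about 3.6e-19 J per photon
photons_per_joule = 1.0 / photon_energy
print(photon_energy, photons_per_joule)
```

The steps are so small (a few times 10⁻¹⁹ J) that macroscopic light fields involve enormous photon numbers, which is why the quantization escaped classical optics.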
When the photon number of one mode equals zero, we say that the mode is in the vacuum state. We should comment on the zero point energy of the electromagnetic field. In fact, the ground state of the
harmonic oscillator is associated with the energy ½hν, whereby the zero point of the energy scale was chosen in such a way that the potential energy of the elastically bound particle in its
equilibrium position is zero (Fig. 4.1). Adding this energy to the energy of the radiation field, we find that the total energy (the sum of all mode contributions) is hopelessly divergent. This is,
however, not too disturbing because the energy cannot be exploited in any way. Were it possible to “tap” a part of this energy, then the whole field would obviously be transferred into a state with
lower energy; such a state cannot exist, however, because the vacuum state is, by definition, the lowest energy state of the field. It seems natural to dispose of the zero point energy of the
electromagnetic field simply by making it the new zero of the energy scale (as we did above). There is a situation, however, where the argument is not so simple. Let us consider the case of a real
resonator with a variable form – in the following we will focus on the space between two plane parallel plates separated by a very small distance z – then the zero point energy of the enclosed field
diverges, but it also changes with the separation length z by a finite amount, and the above “renormalization” of the field energy can be achieved only for a single value of z. In fact, the metallic
plates cause an upper cutoff in the wavelength spectrum of the allowed modes. Because the electric field strength must have nodes (points of zero value) on the metallic surfaces, the greatest
possible value of the wavelength is 2z. With increasing z, more modes are available, and hence the zero point energy increases.
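The growth of the zero point energy with the plate separation can be illustrated with a deliberately crude one-dimensional sketch of our own (a real calculation must sum the three-dimensional mode spectrum and regularize it): standing waves with nodes at both walls have wave numbers k_n = nπ/z, so both the number of modes below a fixed cutoff and their summed zero point energy grow with z.

```python
import math

# 1D standing waves between walls a distance z apart: k_n = n*pi/z, n = 1, 2, ...
# Count the modes below an arbitrary cutoff k_max and sum their zero point
# energies (1/2)*hbar*c*k_n. The cutoff is purely illustrative.
hbar, c = 1.0546e-34, 2.998e8
k_max = 1.0e7                     # 1/m, arbitrary cutoff

def zero_point_energy(z):
    n_max = math.floor(k_max * z / math.pi)
    return sum(0.5 * hbar * c * n * math.pi / z for n in range(1, n_max + 1))

# Larger separation -> more allowed modes -> larger zero point energy.
print(zero_point_energy(1e-6) < zero_point_energy(2e-6))   # True
```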
Interpreting the zero point energy as the potential energy V (z) of the system, we can derive a force K using the well known formula K = −grad V . This force is orthogonal to the plates and is
negative because dV/dz > 0, i.e. it describes an attraction of the plates. The force was theoretically predicted by Casimir (Power, 1964), who derived the following value for it:

\[ K = -\frac{\pi h c}{480\, z^4} \]
(where c is the velocity of light). The formula reveals that the Casimir force is a pure “quantum force” between electrically neutral bodies. It is proportional to Planck’s quantum of action, and
hence has no classical analog. It has been measured precisely more recently by Arnold, Hunklinger and Dransfeld (1979). In fact, it is simply a special case of the well known Van der Waals forces
which generally act between macroscopic bodies (with a very small separation) whose surfaces form something like (partially open) resonators. The forces are very weak and of very short range. In
addition, their strength depends on the dielectric properties of the respective material. Let us return to the radiation modes. These are physically defined only when we deal with resonator modes;
they are individually excitable and the mode volume equals the resonator volume. Due to this, the (spatial) photon density, which is of paramount importance for the interaction with matter, is
non-zero for finite photon numbers. This, however, does not apply to the radiation modes introduced with the aid of artificial periodic conditions which have the whole space at their disposal. In
this case we should identify the mode volume with the fictitious, in principle infinitely large, periodicity cube. It is obvious that such individual modes cannot be excited. This is not possible for
the simple reason that we would need an infinite number of photons (i.e. infinite energy) to obtain a non-vanishing photon density. The situation is basically no different from the one in classical
electrodynamics, where we also find that an ideal plane wave is not realizable. Realistic fields with a necessarily finite linewidth and a finite spatial (as well as temporal) extension are, from the
classical viewpoint, superpositions of infinitely many plane waves. Quantum mechanically they are represented in the simplest case by energy eigenstates of the field; the photon numbers are non-zero
for those modes which are compatible with the parameters of the light beam (mean frequency and linewidth, propagation direction and its uncertainty, and polarization). Then, the mean photon number
per mode obtained by averaging over all the excited modes indeed has a well defined finite value that is uniquely determined by the energy or photon density related to unit volume and frequency
interval and is independent of the mode volume V (assumed to be large). The reason is that the increase of the volume leads to a
linear increase in the density of modes, and hence the number of excited modes increases linearly with V . The preceding description of a realistic electromagnetic field is rather laborious. It would
be convenient if the resonator mode approach could also be used in the case of realistic fields. This is indeed the case when we develop a realistic mode concept. To do so, let us recall that the
characteristic of a mode is that it represents a system with a single degree of freedom. Such a requirement is obviously satisfied by a coherent pulse with a prescribed shape.3 What still remains
open to question are the concrete values of the amplitude and the phase of the pulse, which are usually cast into the form of a complex amplitude that represents, just as in the case of a resonator
mode or modes defined by periodicity conditions, the dynamical variables of the system which change due to interactions. On the other hand, in the case of continuous (usually stationary) irradiation
by an almost monochromatic (so-called quasimonochromatic) light, we can identify the mode with that part of the light beam that fills a volume of the size of the coherence volume. The coherence
volume may be understood as a part of the space (filled with the light field) with a length equal to the longitudinal coherence length in the radiation direction and equal to the transverse coherence
length in the orthogonal direction. The spatial change of the instantaneous amplitude and phase in such a volume is very small by definition, and the discussed variables remain constant during the
undisturbed time evolution (observed from a co-moving frame), i.e. during propagation, and so the single mode formalism is applicable at least approximately. This realistic mode formalism works for
running plane waves, but we must keep in mind that the description applies only to finite space-time regions. In fact, we follow the path used extensively in classical optics (Sommerfeld, 1950). A
central role in the quantum mechanical description of light is played by the concept of the photon number related to the mode volume and so, for the single mode description, a physical definition of
this volume is indispensable, and the most important result of the previous analysis is that we have actually given such a definition. A considerable difference exists between the photon concept
developed on a quantum mechanical basis and that described by Einstein: whereas Einstein conceived photons as spatially localized particles, the quantum mechanical photon concept is related to
macroscopic volume. As in the classical description, the energy is spatially distributed. The “photons of a theoretician,” as we might call them, are not at all the same thing as those described by
an experimentalist who reports that a photon was absorbed at a particular position or was "registered" by a detector. In fact, the grainy structure of the electromagnetic field Einstein had in mind appears only when an interaction with (localized) particles takes place. We will discuss this problem in detail later (Section 5.3).

3 Experimentally, we need a sequence of nearly identical pulses, typically generated by pico- and femtosecond lasers.

4.3 Fluctuations of the electromagnetic field

The states of
the electromagnetic field oscillating in a single eigenoscillation, with a sharp energy, mentioned in the previous subsection, are numbered by the index n, which we can interpret as the photon
number. We can say equally well that we are dealing with eigenstates \(|n\rangle\) of the photon number operator. The wave properties of the field are determined by the electric field strength. In the quantum
mechanical description, the field is described by a Hermitian operator. For the description of optical phenomena the formal fact that the photon number operator and the operator of the electric field
strength4 do not commute is of particular importance (see Section 15.1). The consequence is that for states with a sharp photon number the electric field strength is necessarily indeterminate – in
the quantum mechanical sense explained in Section 4.1. In particular, the quantum mechanical expectation value of the electric field strength at any time is zero (see Section 15.1). This does not
however imply that the electric field strength measured at time t on an ensemble of radiation fields characterized by the wave function \(|n\rangle\) is always zero. The situation is such that the
(instantaneous) electric field strength fluctuates around the zero value, and positive as well as negative (with the same modulus) measurement values appear with the same probability. Because the
measurement time is fixed, these fluctuations have nothing to do with the time evolution; they just indicate a complete uncertainty of the phase of the electric field strength. Because the system is
in a pure state, this uncertainty must not be confused with simple ignorance, rather it is based on fundamental reasons, as explained in Section 4.1. Of particular interest for us is the case n = 1:
according to what has been said above the phase of a single photon must be viewed as fundamentally uncertain. The electric field strength does not come to rest, even in the lowest energy state where
the photon number zero corresponds to the vacuum state \(|0\rangle\). In this case we speak of vacuum fluctuations of the electromagnetic field. This fact necessarily follows from the quantization of the
radiation field (see Section 15.2), and is the exact analog of the zero point oscillations of the harmonic oscillator which also occur, according to quantum mechanics, in the ground state. The
particle cannot rest at the lowest point of the potential well as it does in the classical case because then, according to Heisenberg’s uncertainty relation for position and momentum, the uncertainty
in the momentum would be infinitely large.

4 Usually the magnetic field strength does not play a role in optics.
Quantum theory forces us to revise drastically our concept of the electromagnetic field which was previously formed by classical electrodynamics. Whereas in the classical description the
electromagnetic energy density (see Equation (3.4)), and hence also the energy stored in an arbitrary volume, is uniquely determined by the electric and magnetic field strengths so that the energy is
sharp, quantum theory declares that the energy is sharp only at the cost of an uncertainty in the field strengths. Conversely, a sharp value of the electric field strength requires, according to
quantum theory, an uncertainty in the energy and hence the photon number. Quantum theory, so to speak, loosens the “rigid connection” between electric and magnetic field strengths and energy,
postulated by the classical theory, in favor of a more flexible relation. The discrepancy between the classical and the quantum mechanical descriptions is particularly striking for the vacuum state.
In the classical picture the space is completely field free, i.e. actually empty, while quantum mechanics assigns to it a fluctuating electromagnetic field. A measurement of the (instantaneous)
electric field strength (there is no known way of performing such a measurement, but no one can forbid a Gedanken experiment) would detect in general, according to quantum mechanics, non-zero values
even when no photons are present with certainty. The vacuum fluctuations are not connected to electromagnetic energy, at least not to a usable one.
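The statements of this section — zero mean field but non-vanishing fluctuations in every photon number state, including the vacuum — can be checked in a truncated matrix representation of a single mode. A minimal sketch (the quadrature x here stands in for the electric field strength, in units where its variance in \(|n\rangle\) is n + 1/2; this scaling is an assumption of the toy model, not a formula from the text):

```python
import numpy as np

N = 40  # truncation dimension of the Fock space

# Annihilation operator a in the photon number basis: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
x = (a + a.conj().T) / np.sqrt(2)   # field quadrature, playing the role of E

for n in (0, 1, 5):
    ket = np.zeros(N)
    ket[n] = 1.0
    mean = ket @ x @ ket        # <n|x|n>   -> 0 for every n
    var = ket @ (x @ x) @ ket   # <n|x^2|n> -> n + 1/2
    print(n, round(float(mean), 12), round(float(var), 12))
# the vacuum (n = 0) already has variance 1/2: the vacuum fluctuations
```

The vanishing mean with non-zero variance is exactly the "complete phase uncertainty" described above.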
4.4 Coherent states of the radiation field

Even though quantum mechanics forbids us to assign to an electromagnetic wave sharp values of both the photon number and the phase, we can certainly ask
which quantum mechanical state comes closest to representing the classical waves of well defined energy (respectively amplitude) and phase. A satisfactory answer can be obtained with the requirement
that the electric field strength should – averaged over time and for a fixed mean photon number – fluctuate as little as possible (see Section 15.2). In this way we arrive (uniquely) at the so-called
coherent states of the electromagnetic field, often called Glauber states after the American scientist who was the first to realize their extraordinary usefulness for the description of optical
phenomena. The explicit form of the states in question is

\[ |\alpha\rangle = e^{-|\alpha|^2/2} \sum_{n=0}^{\infty} \frac{\alpha^n}{\sqrt{n!}}\, |n\rangle . \qquad (4.5) \]
Similarly to the states \(|n\rangle\) they describe the excitation of a single mode of the radiation field.
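The structure of Equation (4.5) is easy to verify numerically; a minimal sketch (the value alpha = 2 is an arbitrary illustrative choice) confirming the normalization, the mean photon number \(|\alpha|^2\), and the Poissonian variance discussed below:

```python
import math

alpha = 2.0   # complex amplitude (taken real here for simplicity)
N = 60        # truncation of the photon number expansion

# expansion coefficients c_n = e^{-|alpha|^2/2} * alpha^n / sqrt(n!)
c = [math.exp(-abs(alpha) ** 2 / 2) * alpha ** n / math.sqrt(math.factorial(n))
     for n in range(N)]
p = [abs(cn) ** 2 for cn in c]   # photon number probabilities p_n

norm = sum(p)                                   # ~1: the state is normalized
nbar = sum(n * pn for n, pn in enumerate(p))    # mean photon number = |alpha|^2
var = sum(n ** 2 * pn for n, pn in enumerate(p)) - nbar ** 2  # Poisson: var = nbar

print(round(norm, 6), round(nbar, 6), round(var, 6))  # -> 1.0 4.0 4.0
```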
The arbitrary complex number α in Equation (4.5) corresponds to the complex amplitude of the wave (in proper normalization). The modulus squared of α gives us the mean photon number, and the phase of
α is the phase of the electromagnetic field, or, more precisely, its quantum mechanical expectation value. The electric field strength fluctuates also in coherent states, which results in a phase
uncertainty, the importance of which, however, decreases with increasing mean photon number. The photon number also fluctuates in the coherent state, as can easily be seen from Equation (4.5). The
squared moduli \(p_n\) of the expansion coefficients, giving the probability of finding, using photon counting, exactly n photons, follow a Poisson distribution:

\[ p_n = e^{-|\alpha|^2}\, \frac{|\alpha|^{2n}}{n!} \]
(see Fig. 8.6). As can easily be shown, the variance of the photon number \(\overline{\Delta n^2} \equiv \overline{(n - \bar{n})^2}\) (the bar stands for the average) is, for a Poisson distribution, equal to \(\bar{n}\). The coherent states of the
electromagnetic field are distinguished by a balanced relation between the fluctuations of the phase on the one hand and the fluctuations of the photon number on the other, hence coming as close as
possible to the classical ideal of a sharply defined phase and amplitude without actually reaching it. Only in the limit of an infinitely large mean photon number are the uncertainties in phase and
photon number without importance. (From what has been said above it follows that the relative variance \(\overline{\Delta n^2}/\bar{n}^2\) behaves for \(\bar{n} \to \infty\) as \(1/\bar{n}\) and so goes to zero.) The result proves for this case the
validity of Bohr’s correspondence principle. Of great practical importance is the fact that with the laser we have an instrument at our disposal with which we can produce Glauber states. The laser
radiation is by itself a very good approximation to a coherent state because the laser process has an amplitude stabilizing effect (see Section 8.3). Of equal importance for quantum optics is the
fact that the properties of laser radiation are preserved when it is attenuated by (one photon) absorption. In fact, it was shown early on (Brunner, Paul and Richter, 1964, 1965) that Glauber states
remain Glauber states during the process of damping (see Section 15.4). What is changed is merely the value of α. After this short visit into “gray theory,” let us return to our main aim, which is to
understand the photon as part of our physical experience. Because our physical experiences can only be quantified by measurement (including evaluation using our senses) it is appropriate to turn our
attention to the problem of the experimental detection of light.
5 Light detectors
5.1 Light absorption

Whereas receiving radio waves is a macroscopic process and hence belongs to the area of classical electrodynamics – in a macroscopic antenna an electric voltage is induced
whereby a large number of electrons follow the electric field strength of the incident wave, in a kind of collective motion – the detection of light, so far as the elementary process is concerned,
takes place in microscopic type objects such as atoms and molecules. As a consequence, the response of an optical detector is determined by the microstructure of matter. In particular, it is
impossible – due to the enormously high frequency of light (in the region of \(10^{15}\) Hz) – to measure the electric field strength. What is in fact detectable is the energy transfer from the radiation
field to the atomic receiver, and this allows us to draw conclusions about the (instantaneous) intensity of light. We might ask what we can say about the above-mentioned absorption process from an
experimentalist’s point of view. Among the basic experiences that provide an insight into the structure of the micro-cosmos is the resonance character of the interaction between light and an atomic
system. The atomic system, when hit by light, behaves like a resonator with certain resonance frequencies; i.e. it becomes excited (takes up energy) only when the light frequency coincides with a
value that is characteristic for the particular atom. Hence, an incident light wave with an initial broadband frequency spectrum that has passed through a gas exhibits in its spectrum dark zones, the
so-called absorption lines. This experimental fact was discovered for the first time by Fraunhofer for sunlight and it forms the basis of absorption spectroscopy which helps to detect reliably the
smallest amounts of substances. The resonator characteristics of the atoms are also evident in the process of emission: the frequency spectrum of the light emitted by excited atoms is built up from
discrete “lines” matching exactly the absorption lines.
The described observations present the atom as an object that is capable of oscillating with characteristic frequencies which show up in both absorption and emission. Confining ourselves to just one
frequency, we find the atom to be similar to a Hertzian dipole, which, in the resonance case, absorbs energy from the field (for a properly chosen phase relation between the dipole and the external
field) or emits energy when the external field is absent. A detailed study of the atomic excitation process reveals another rather peculiar aspect. In a famous experiment, Franck and Hertz (1913,
1914) found that an electron beam is able to transfer energy to atoms only when the kinetic energy is equal to or exceeds a certain minimal value. It soon became clear that the result fitted neatly
into the atom model developed by Niels Bohr in 1913. The first Bohr postulate assumed the existence of “stationary” states of an atomic system with certain, discrete, energies. This implies that it
is necessary to supply the atom with a certain amount of energy (corresponding to the distance to one of the higher energy levels) in order to alter its state. This is a peculiarity of microscopic
systems, having no counterpart in the classical world, which made a radical departure from classical physics unavoidable. Within the framework of classical physics it is really not possible to
understand why an electron bound in an atom should be “allowed” to move only in certain orbits. The explanation for the existence of such stationary states was one of the main objectives of the later
developed quantum mechanics. We might ask how two so different experimental facts are related: on the one hand, the resonance behavior of the atoms in the interaction with light, and, on the other
hand, the structure of atoms that manifests itself in the discrete energy levels. The answer (which was later explained in detail by quantum mechanics) was given by Bohr in his second postulate,
which stated that the atom can execute jump-like transitions from one level to another associated with the emission or absorption of light. Depending on whether the transition takes place from a
higher energy level to a lower energy level, or vice versa, we have emission or absorption. The frequency of the corresponding spectral line is

\[ \nu = \frac{1}{h}\,(E_m - E_n). \qquad (5.1) \]

Here, \(E_m\) is the energy of the upper level, \(E_n\) is the energy of the lower level and h is Planck's constant. The equation constitutes a unique relation between the energy levels and the atomic
resonance frequencies. Actually, the fundamental relation Equation (5.1) had already been directly verified by Franck and Hertz, who measured the excitation energy of mercury atoms using the electron
collision method and the frequency of the fluorescence light produced.
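As a numerical illustration of Equation (5.1) — a sketch using the well-known 4.9 eV mercury excitation energy measured in the Franck–Hertz experiment:

```python
# Frequency and wavelength of the spectral line for a transition of
# energy E = E_m - E_n, via nu = (E_m - E_n) / h.
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electron volt

E = 4.9 * eV          # mercury excitation energy (Franck-Hertz)
nu = E / h            # transition frequency
lam = c / nu          # wavelength of the fluorescence line

print(f"{nu:.3e} Hz, {lam * 1e9:.0f} nm")  # -> 1.185e+15 Hz, 253 nm
```

The computed 253 nm ultraviolet line is indeed the fluorescence wavelength Franck and Hertz observed, verifying the relation between excitation energy and emitted frequency.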
However, the process of absorption by an atomic system is by no means a measurement of light. Quite generally, we may only speak of a physical measurement when it leads to a macroscopically fixed
result (for instance in the form of a pointer deflection); the observed object has to induce in the final stage an irreversible macroscopic change. For an absorber such a process is the conversion of
the excitation energy of the individual atoms or molecules into thermal energy. (In the case of a gas, particle collisions ensure the conversion of the excitation energy into kinetic energy.) The
result is a temperature increase, which is detectable with conventional methods. Fortunately, the inclusion of the rather “inert” thermodynamic processes into the measurement can be circumvented – in
favor of electric or electrochemical processes – by using the photoelectric effect, in which the incident light causes not excitation but ionization of the atoms. The elementary step of the process,
the release of an electron from the atom, is by itself an irreversible process. The electron leaves the atom due to its acquired kinetic energy, and there is virtually no chance that it will be
recaptured. The situation is very similar to that of spontaneous emission (see Section 6.5). The necessarily macroscopic measurement process then requires only an appropriate amplification of the
microscopic primary signal.

5.2 Photoelectric detection of light

We present first in this section a chronological discussion of the main types of photoelectric receivers. The first practical
application of the photoeffect was photography, in which the primary process is the release of a valence electron from one of the bromine ions in a silver bromide grain (taking the form of an ionic
crystal Ag⁺Br⁻) initiated by incident light. The electron moves freely in the crystal lattice (hence we speak of an inner photoeffect) and can be captured by one of the crystal defects. When this
happens, the defect becomes negatively charged and is able to attract one of the silver ions sitting in the interstitial sites (which explains why it can move freely) and neutralize its charge. This
process can be repeated several times at the same defect by a repeated capture of electrons. The result is the formation of a “nucleus” consisting of several silver atoms. It is the development, a
chemical treatment of the photographic layer in which a grain containing such a nucleus is reduced to a black, metallic silver grain as a whole, that makes the photographic picture visible, thus
producing a macroscopic “trace” of the light involved. One significant drawback of the photographic procedure is that the long exposure times required do not allow us to obtain any information about
the behavior of light quickly enough. It is possible, however, to measure brief changes of intensity over time by using the photoelectrically released electron directly in the detection
process. This principle is realized in the form of a photocell, where light releases electrons from an appropriately coated metallic surface. The electrons are emitted into the vacuum and are guided
with the help of an externally applied voltage to an anode. The strength of the resulting electric current in the external circuit is a measure of the intensity of the light incident on the
photocathode. To be more precise, there is (within the classical description valid for intensities that are not too small) a relation between the electric current J and the light intensity I of the form

\[ J(t) = \alpha\, \frac{1}{T_1} \int_{t - T_1}^{t} I(t')\, dt', \qquad (5.2) \]
where α is the sensitivity of the device. The time averaging in Equation (5.2) has its origin mainly in the fact that a single released electron generates a current pulse whose duration equals the
passage time of the electron between the cathode and the anode. The minimal value of \(T_1\) obtainable in practice – determining the resolution time of the photocell – is about \(10^{-10}\) s. Thus, the photocell is able to follow intensity changes that take place on a time scale \(\Delta t \ge T_1\). Correspondingly, the photocurrent is not constant in time but contains a direct current component as well as an
alternating current component at frequencies not exceeding the value \(1/T_1\). The creation of the photocell opened up to researchers a new dimension of experimental investigation: for the first time
they had in their hands an instrument allowing them to observe directly the fluctuation phenomena of light and hence its microscopic structure. The detection efficiency of the photocell, however, is
far from ideal. Obviously many electrons are needed to obtain a measurable current, hence the intensity of the incident light must not be too low. Important progress was achieved by the development
of the secondary emission multiplier, or photomultiplier. The principle of the device is that each released electron becomes the “ancestor” of an avalanche of electrons. The details of the process
are as follows. The primary electrons are accelerated by an electric field and impinge onto the first auxiliary electrode (a so-called dynode), releasing a group of secondary electrons. These are
again accelerated and hit the second dynode, where they generate new "offspring" and so on. The procedure is repeated several times till the generated electron avalanche is finally collected by the anode.
The photomultiplier not only allows the conversion of very weak light fluxes into electric currents, it also makes possible the detection of individual photons, thus reaching the utmost frontier of
light registration set up by Nature through the atomistic structure of matter. A single primary electron causes an electron avalanche, which induces a current pulse in an external circuit, which is
transformed into an easily measurable voltage pulse on an Ohmic resistor, which represents the macroscopic signal indicating the detection of a single photon. We call such a device a photodetector or
photocounter. We should point out that the visual process taking place in our eye relies also on the principle of a photodetector. Actually, Nature equipped us with the most sensitive detection
instrument available: it has been proven that a dark-adapted eye is able to "see" just a few photons! Not all of the photons incident on the cathode induce a photoeffect (at least in the visible spectrum,
though in the ultraviolet part of the spectrum the requirement that each incident photon generates an electron can be satisfied), and hence an incident photon will not be detected with certainty but
only with a certain finite probability. Thus, the counter sensitivity, which depends primarily on the cathode material and the spectral range, takes different values. The probability of a
photoelectric release of an electron is proportional to the instantaneous light intensity, though, as with the photocell, passage time effects will limit the resolution of the apparatus. Hence, we
expect the response probability W of a photodetector (per unit time) – in analogy to Equation (5.2) which is valid for the photocell – to be of the form

\[ W(t) = \beta\, \frac{1}{T_1} \int_{t - T_1}^{t} I(t')\, dt'. \qquad (5.3) \]
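The time averaging in Equations (5.2) and (5.3) amounts to a sliding-window mean of the intensity over the response time \(T_1\); a discrete numerical sketch (the signal parameters are illustrative assumptions, not values from the text):

```python
import numpy as np

dt = 1e-12               # time step, s
T1 = 1e-10               # detector response time, s
w = int(round(T1 / dt))  # samples per response window

t = np.arange(500) * dt  # half a nanosecond of signal
# intensity with a fast modulation (period 10 ps << T1) the detector cannot follow
I = 1.0 + 0.5 * np.sin(2 * np.pi * t / 1e-11)

beta = 1.0
# W(t) = (beta / T1) * integral of I over the preceding T1 seconds,
# i.e. a sliding-window mean of the intensity
W = beta * np.convolve(I, np.ones(w) / w, mode="valid")

print(round(float(np.ptp(I)), 3), round(float(np.ptp(W)), 3))
# the fast modulation survives in I but is averaged away in W
```

Only intensity variations slower than \(T_1\) survive in the detector response, which is the content of the resolution-time statement above.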
The value of the constant β is determined by the counter sensitivity. The integration time \(T_1\), the so-called response time of the detector, is approximately of the same order of magnitude as for the
photocell. Equation (5.3) is based on a classical description. However, this is no longer justified when we enter the field of very small intensities. Equation (5.3) is in flagrant contradiction to
experience when applied to wave packets with an energy smaller than hν – classically a completely legitimate assumption – because even then it yields a non-zero response probability. However, in such
a case photoionization is energetically impossible! A helping hand is offered by the fully quantized theory (see, for example, Glauber (1965)), replacing Equation (5.3) by

\[ W(t) = \beta\, \frac{1}{T_1} \int_{t - T_1}^{t} \langle \hat{E}^{(-)}(t')\, \hat{E}^{(+)}(t') \rangle\, dt', \qquad (5.4) \]
where \(\hat{E}^{(-)}\) and \(\hat{E}^{(+)}\) represent the operators of the negative and the positive frequency parts, respectively, of the electric field strength (see Equations (3.8) and (3.9)). The quantum mechanical
expectation value in the integral depends crucially
on the ordering of the factors. The form given is the so-called “normal ordering” (see Section 15.1), which is distinguished by the fact that corresponding expectation values are free from vacuum
contributions. The vacuum fluctuations of the electric field described in Section 4.3 thus do not have, fortunately, any influence on the photoelectric detection process; in particular, there is no
response from the photodetector when (with certainty) no photons are present. Photoelectrons can also be detected optically. For this purpose a photoelectric image converter is used in which
electrons released from a photocathode by the incident light are, after being accelerated by a static electric field, incident on a fluorescence screen. (The whole process takes place in a vacuum
tube.) Electronoptical imaging of the cathode onto the screen makes it possible to visualize the intensity distribution of the incident light on the cathode. In this way it is possible to amplify
considerably the brightness of the primary (optical) image and, in addition, to adjust the scale of imaging. Recently, the method of photoelectric image conversion was applied with great success to
the measurement of the time dependence of extremely short optical signals. In this case, the troubling influence of electrons on the anode is absent, and hence the time resolution is no longer
limited by the time of flight of electrons in the vacuum tube but by the fact that electrons released in different depths of the cathode layer reach the surface at different times (they are
decelerated and scattered by interaction with the cathode material). With the help of so-called "streak cameras," a resolution time of one picosecond has been achieved. The electron beam is spatially resolved by applying a rapidly (and as linearly as possible) increasing electric field orthogonal to the propagation direction of the beam, so that the temporal sequence of electrons in the beam is
translated into spatial separation on the fluorescent screen. Currently semiconductor materials are utilized more and more for photoelectric detection. The elementary process is in this case the
generation of an electron– hole pair by an incident photon, i.e. an inner photoeffect. So-called photodiodes are used most commonly. These can be combined in large numbers to form single array or
matrix configurations and so allow a spatial resolution of the optical signal. We speak in such cases of photodiode arrays. Photodiodes that achieve a large amplification via internal avalanche processes are called avalanche photodiodes. In the following, we list the possible experiments realizable with the detectors discussed in this section.

Mean intensity measurement

Mean
intensity measurements can be carried out with the help of the photographic plate, the photocell and the photomultiplier. The blackening of a photographic
plate is a measure of the light intensity averaged over the exposure time. In photoelectric detection the corresponding information is contained in the direct current part of the photocurrent. Using
a photodetector we sum up all the events for a longer period of time, and thus we are able to detect even extremely small light intensities. When we do not confine ourselves to the detection of the
intensity at a particular position but instead detect the intensity at different points, we can observe a spatial interference pattern.
Detection of beats

The time analog of spatial interference, the beats discussed in Section 3.2, can be directly observed by letting two coherent light beams with slightly different frequencies
(delivered in an almost ideal form by two gas lasers for instance) impinge on a photocell or a photomultiplier. They are represented by the alternating current component of the photocurrent, and can
be measured accurately by high-frequency engineering. These experimental methods are of considerable practical importance for the high precision measurement of light frequencies, allowing, for
example, the measurement of the frequency of a laser with respect to the frequency of a second laser with extreme precision.
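The beat detection described above can be sketched numerically: superposing two fields with slightly different frequencies gives a photocurrent component at the difference frequency. A toy model (the frequencies are scaled down enormously — real optical frequencies of ~\(10^{15}\) Hz cannot be sampled directly):

```python
import numpy as np

f1, f2 = 1000.0, 1010.0     # two "laser" frequencies (scaled-down stand-ins)
fs = 100000.0               # sampling rate
t = np.arange(100000) / fs  # one second of signal

E = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
I = E ** 2                  # a photodetector responds to intensity, not field

# the photocurrent spectrum contains a line at the beat frequency f2 - f1
spec = np.abs(np.fft.rfft(I - I.mean()))
freqs = np.fft.rfftfreq(len(I), 1.0 / fs)
low = freqs < 100.0         # look only below 100 Hz; the optical-scale terms lie higher
beat = freqs[low][np.argmax(spec[low])]
print(round(float(beat), 3))  # -> 10.0, the difference frequency
```

The slowly varying 10 Hz line is what the alternating current component of the photocurrent carries, while the sum-frequency and doubled-frequency terms lie far beyond the detector bandwidth.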
Measurement of intensity correlations

We can measure spatial intensity correlations by placing two (or more) detectors at different positions and counting coincidences; i.e. we register only those
events for which all the detectors respond at the same time. Such measurements can also be made at higher intensities using photocells (or photomultipliers). The correlations between the
photocurrents of the two photocells are determined with the help of a “correlator” (for details, see Section 8.1). Analogously, temporal intensity correlations (at a fixed position) can be measured.
The experimental difficulty of positioning two detectors at the same place is solved in an elegant way following the procedure of R. Hanbury Brown and R. Q. Twiss (1956a), in which the light to be
analyzed is split by a beamsplitter (semitransparent mirror) into mutually coherent parts incident on two independent detectors (see Fig. 8.3). The measured variable is then the number of delayed
coincidences; i.e. only those events are counted in which a response of the first detector after a time delay of τ seconds is followed by a response of the second detector. This experiment can also
be carried out for sufficient intensity using photocells and photomultipliers.
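The delayed-coincidence counting can be sketched with simulated detector click times. A toy model only: for laser-like (Poissonian) light the clicks at the two detectors behind the beamsplitter are statistically independent, so the coincidence count is just the accidental rate at every delay τ:

```python
import bisect
import random

random.seed(1)
T = 1.0      # measurement time, s
rate = 1e4   # mean click rate per detector, 1/s
win = 1e-6   # coincidence window, s

# two independent Poissonian click trains, one per detector
def clicks():
    return sorted(random.uniform(0, T) for _ in range(int(rate * T)))

d1, d2 = clicks(), clicks()

def coincidences(tau):
    """Delayed coincidences: detector 2 fires within [tau, tau + win] after detector 1."""
    n = 0
    for t1 in d1:
        lo = bisect.bisect_left(d2, t1 + tau)
        hi = bisect.bisect_left(d2, t1 + tau + win)
        n += hi - lo
    return n

# for independent trains the count is ~ rate**2 * T * win at any delay tau
print(coincidences(0.0), coincidences(1e-3))
```

A flat coincidence rate versus τ is the signature of uncorrelated arrivals; the interesting physics of the Hanbury Brown–Twiss experiment lies in the deviations from this flat background for other states of light.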
Single-photon arrival time measurement

For weak intensities of the incident light (when necessary it can be further attenuated) the photodetector can be used to observe the time "points" at which a
photoelectron was released, i.e. moments at which a photon was actually absorbed. Such measurement techniques can answer questions of the type: How long, after the onset of irradiation, does it take
before the first photoelectron shows up? The measurement data can also be used to determine the frequency with which two photons with a prescribed time delay τ are registered. The obtained
information is the same as in the case of delayed coincidence measurements. The practical difference between the single-photon arrival time measurement method and the method discussed in the
preceding paragraph is that we need only a single photocounter for the former.
Photon counting

The voltage pulses supplied by a single photodetector (originating from the generation of a single photoelectron) can be electronically counted in time intervals of equal length. In
this way we obtain the number of photons registered by the detector within the individual time intervals. Of great physical interest are the statistical variations of the measured photon numbers,
which usually differ from case to case.

Finishing this short review of the available technical possibilities for the observation of optical phenomena, let us ask the following question: to what extent does the photoeffect deliver watertight proof of the existence of photons as spatially localized energy lumps, as introduced in the Einstein light quantum hypothesis?
5.3 The photoeffect and the quantum nature of light

The photoeffect is described within quantum mechanics very similarly to the process of light absorption discussed in Section 5.1. The reason is that we can speak of well defined atomic states even when the electron is completely detached from the atom. These states form, so far as the energy is concerned, a higher energy continuum which is attached to the discrete energy spectrum. Its energy is composed additively of two parts, the value E_c marking the lower edge of the continuum and the kinetic energy of the essentially free electron:

E = E_c + (1/2) m v²,    (5.5)

where m is the mass and v is the velocity of the electron. The continuous character of the spectrum – the fact that in this region any value of the energy is allowed – is caused by the fact that the kinetic energy of the free electron is not subject
to any type of “quantization rule,” and it can be safely assumed that the classical description applies such that any non-negative value of energy can be taken. The quantum mechanical description
makes no distinction between transitions between two discrete energy levels or between two states, one of which belongs to the discrete and the other to the continuous part of the spectrum. In
particular, Equation (5.1) relating a certain transition to a particular atomic frequency keeps its validity. Using Equations (5.1) and (5.5) we obtain for the resonance frequency of the ionization process the relation

ν = (1/h) [E_c − E_0 + (1/2) m v²],    (5.6)

where E_0 is the energy of the initial atomic state (usually identical to the ground state of the atom). Because the energy difference E_c − E_0 is simply the ionization energy, we immediately obtain in this way the Einstein Equation (2.1). The energy balance obtained from the quantum mechanical point of view certainly does not imply the photon picture; it just reflects the general resonance character of the interaction between radiation and matter. In fact, Equation (5.6) can already be found in the semiclassical theory, i.e. when the atoms are treated quantum mechanically and the radiation is treated classically! What can be said with certainty is that the unrestricted validity of the energy conservation law for individual processes, such as the release of a photoelectron, implies an extraction of energy from the radiation field of magnitude E_c − E_0 + (1/2) m v², which equals hν according to Equation (5.6). Can it be
that the particle nature of light is only mimicked by the detector? The atom participating in the elementary process can, due to its structure dictated by quantum mechanics, only either take from the
radiation field of frequency ν the energy hν or not act at all. Is it not possible that the electromagnetic field does not have a grainy structure at all, but is instead similar to a soup which is
“quantized” only when it is eaten portion-wise with a spoon? Is classical electrodynamics right after all by ascribing to light a spatially continuous energy distribution? And is it true that an
atom, following the rules of classical theory, collects energy from the field until it has gathered enough energy (hν) required for the transition? Our experience tells us that the classical concept
cannot withstand scrutiny. Let us estimate the “accumulation time” required by the atom to collect the energy hν from the radiation field according to classical theory! (We follow an argument given
by Planck (1966).) Let us assume the atoms participating in the photoeffect to be “densely packed,” as in a solid. Obviously the atoms placed right at the surface and thus being hit directly by the
incident light are affected the most by the radiation field. In the most favorable case they can absorb the whole incident energy, and we will assume that
this is indeed the case to estimate the minimal duration of the accumulation process. Let us make the additional assumption – motivated by quantum mechanics – that all the atoms are initially (at the
moment when the irradiation begins) in the same state and, apart from boundary effects, that none of the surface atoms is in any way preferred. This means that their behavior under the influence of
the radiation field (we consider a plane, quasimonochromatic wave, incident orthogonally to the surface of the solid) must be identical. This is the general consequence of the deterministic character
of any classical theory. The surface atoms cannot take energy from each other. The energy at their disposal is (assuming dense packing) given by the flux through their geometrical cross section.
Introducing a photon current by counting the energy of the incident field in units of hν, we find the following: for a given photon flux density j, in total jF photons per second flow through the atomic cross section F; i.e. it takes the time

T_min = 1/(jF)    (5.7)

for energy corresponding to that of one photon to flow through the atom. This time is also an estimate for the shortest possible duration of the absorption process in which the atom swallows an energy quantum hν; i.e. T_min is the minimal value of the classical accumulation time. To form an impression of what might be the values of a realistic photon flux density, let us consider the most
natural source – the sun. Let us assume we cut out, using a filter, green light at a wavelength of 500 nm with a frequency width of 1% of the frequency. The corresponding photon flux density is of the order of 10¹⁹/(m² s), which is easily calculated from Planck's radiation formula and the known parameters for the sun (surface temperature 6000 K, angular diameter 32 arc minutes). Using these parameters, Equation (5.7) gives a minimum duration of the accumulation process of 10⁻¹ s for an atomic cross section of 10⁻¹⁸ m². This calculated time is relatively large; however, at considerably smaller intensities we obtain fantastic accumulation times. Actually, the photoeffect sets in at much lower intensities. This is known from direct experience, namely our visual perception, where the elementary process is of photoelectric nature. In fact, the response threshold of the human eye (for green light) corresponds to an energy flow of 5 × 10⁻¹⁷ W (see, for example, Wawilow (1954)), which implies a photon flow of 120 photons per second. A photon flux density of, say, 10¹⁰/(m² s) is certainly sufficient to initiate the generation of photoelectrons. However, according to Equation (5.7) it would take at least 10⁸ s; i.e. we would have to wait longer than three years to see the first photoelectrons being detected – a rather bizarre result. These arguments show us that our
everyday experience of visual perception is a convincing argument in favor of the photon nature of light. Were classical theory
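The estimates in this paragraph can be reproduced with a few lines of arithmetic (a check of my own, using standard constants; the cross section 10⁻¹⁸ m² is the value assumed in the text):

```python
# Checking the classical accumulation-time estimates quoted above:
# T_min = 1/(j*F) is the time for one photon's worth of energy to cross the
# atomic cross section F at photon flux density j (Equation (5.7)).
h = 6.626e-34                 # Planck constant, J s
c = 2.998e8                   # speed of light, m/s

F = 1e-18                     # atomic cross section, m^2 (value assumed in the text)
T_sun = 1.0 / (1e19 * F)      # bright sunlight, j = 1e19 /(m^2 s)
T_weak = 1.0 / (1e10 * F)     # weak light, j = 1e10 /(m^2 s)
print(T_sun)                  # -> 0.1 s, as quoted
print(T_weak)                 # -> 1e8 s, i.e. more than three years

# Eye-threshold photon rate: 5e-17 W of green light (lambda ~ 500 nm)
photon_energy = h * c / 500e-9
print(5e-17 / photon_energy)  # -> about 126 photons per second (~120 in the text)
```

The small discrepancy in the photon rate simply reflects rounding of the wavelength and constants.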
correct, we would not be able to perceive optically feeble objects at all – the overwhelmingly beautiful night sky full of stars, which has influenced so strongly the development of physics, would be
hidden from our eyes. In addition, the process of seeing would be very strange. The different brightnesses of objects would be perceived through the different times it would take to recognize them.
After that, however, each object (if we consider the idealized case that the intensity of incident light from the object is the same everywhere on the retina) would appear with the same brightness
because all the receptors affected by the light from the object would respond in the same way and at the same time. The vision process becomes understandable only with the photon picture, which is
similar to that of a shower hitting the retina, whereby a sensitive element (a rhodopsin molecule in the rod) remains either completely unaffected or receives the required energy “at one blow.” One
of the reasons for the erroneous estimation of the accumulation time given by the classical theory is the assumption of densely packed atoms. In this case, atoms block each other, so preventing the
atoms from absorbing the electromagnetic energy from a larger surrounding area. In fact, such a “suction effect” for atoms far from each other (or if, for high densities, only a few atoms participate
– for whatever reasons – in the interaction with the radiation field, which amounts to the same thing) is indeed predicted by classical electrodynamics. We show in the following that classical theory
leads under such conditions to a contradiction with our experience; however, in this case the discrepancy between the theoretical prediction and the experimental results is not as dramatic as in the
previous case of densely packed atoms. We start again with the estimation of the accumulation time within the classical description. The simplest model of classical electrodynamics for the elementary
process of absorption (independent of whether it is excitation or ionization) is that of an electron elastically bound to an attractive center. Hence we are dealing with a harmonic oscillator that is
put into resonant oscillation by the radiation field of frequency ν. We are interested in the time T needed for the oscillating electron to accumulate an energy (given by the sum of the kinetic and
the potential energy) that equals hν, the atom thus having absorbed a whole light quantum. (We will not analyze the question of how the electron is released from the atom in the case of the
photoeffect.) A simple calculation made by Lorentz (1910) gives for the accumulation time the value

T = √(8mhν) / (eE_0),    (5.8)

where E_0 is the amplitude of the electric field strength E(t) = E_0 cos(2πνt − ϕ) at the position of the atom, e is the electronic elementary charge and m is the mass of the electron. The atom theory of Bohr (in which we consider a hydrogen-like atom) allows us to replace the variable 2πνm by ℏ/r_0², where r_0 is Bohr's radius. (Because we are interested only in the order of magnitude of T, we can assume for simplicity the principal quantum number to be equal to unity.) Thus, Equation (5.8) can be replaced by

T ≈ √2 h / (π D E_0).    (5.9)

Here we have introduced the parameter D = er_0, which represents, at least in order of magnitude, the atomic dipole moment. Expressing in Equation (5.9) the amplitude E_0 of the electric field strength by the time averaged photon flux density j,

E_0 = √(2hν j) (µ_0/ε_0)^(1/4)    (5.10)

(µ_0 is the vacuum permeability and ε_0 the vacuum permittivity), we obtain the final expression

T ≈ (1/(π D)) √(h/(ν j)) (ε_0/µ_0)^(1/4).    (5.11)

Comparing with the previously obtained result of Equation (5.7), we see that the dependence on the photon flux density is much weaker. The absolute values for the accumulation time are indeed much smaller than those calculated previously. For j = 10¹⁹/(m² s) (green spectral part) we find for T the rather small value 3 × 10⁻⁷ s (the earlier estimate was 10⁻¹ s), while for j = 10¹⁰/(m² s) the earlier absurd value of 10⁸ s drops to 10⁻¹ s. The dramatic decrease in accumulation time can be understood if the atom – when the environment does not hinder it – is drawing the energy from a volume
considerably larger than its own dimensions. The mechanism responsible for this effect is well known from antenna theory: the incident wave induces an atomic dipole moment which by itself emits a
wave. The wave emitted by the absorber, however, does not transport any energy away from the atom – on the contrary, the wave interferes with the incident wave (there is a fixed phase relation
between the two waves which is a characteristic for the absorption process) in such a clever way that even the energy flowing around the atom at larger distances is directed towards the atom. Figure
5.1 shows the shape of the energy flow, time averaged over several light periods (for details see Paul and Fischer (1983)). The "suction effect" of the atom is clearly seen, and is surprisingly
strong: energy that has already passed by the atom is diverted back to the atom. We should point out that the described mechanism works only as long as the phase of the incident wave stays constant.
Fig. 5.1. Energy flow into an absorbing atom (a) in the plane spanned by the propagation direction of the monochromatic, linearly polarized plane wave (x) and the oscillation direction of the induced dipole moment (z), and (b) in the x, y plane orthogonal to the former plane (k is the wave number). After Paul and Fischer (1983).

Any change (for instance a jump) in the phase destroys the optimal absorption phase relation between the incident wave and the dipole oscillation, and the accumulation time becomes much longer. Equation (5.8) is valid only under the assumption that the coherence time of the incident wave is not shorter than the
accumulation time. In fact, the values of the accumulation time predicted by Equations (5.8) and (5.11) are still orders of magnitude larger than the values obtained experimentally. The two
fundamental experiments performed in this context are very original, and we do not miss the opportunity to describe them in some detail as they have been (unjustifiably) almost forgotten. The main
experimental challenge is essentially the measurement of the anticipated extremely short times that pass between the beginning of irradiation and
Light detectors
the appearance of the first photoelectrons. One might think that sophisticated ultrashort time measurements, which have only just recently been developed, would be necessary. However, a surprisingly
simple solution of the problem was found in the 1920s by the American researchers Lawrence and Beams (1927). They generated, with the help of a spark discharge, a light flash which was sent onto the
cathode of a photocell. The cathode was equipped with an additional grid. At the moment of the discharge there was a positive voltage between the grid and the cathode so that all the released
electrons were drawn away from the cathode. An appropriate circuit arrangement guaranteed that the discharge simultaneously caused a reversal of the grid voltage. The simple fact that an electric
pulse is propagating along a wire with the velocity of light was used to delay the voltage change by an interval τ , compared with the beginning of the irradiation of the cathode. This was
accomplished simply by using a wire that was longer than the optical path. The problem of time measurement was thus reduced to the measurement of length! The now negative grid voltage blocked the
cathode photoelectrons. The registered photoelectrons could therefore have been released at most a time delay τ after the beginning of the irradiation. Lawrence and Beams concluded that the photoelectrons were released simultaneously with the beginning of the irradiation. They reported a precision of the apparatus with respect to the delay time of 3 × 10⁻⁹ s. Hence, the accumulation
time – if there is any – can be 3 ns at maximum. The aim of the second experiment carried out by Forrester, Gudmundsen and Johnson (1955) was completely different, namely the detection of
interference – in the form of beats – between two thermally generated monochromatic light waves with slightly different frequencies, and it is by itself of fundamental importance. (In Section 7.4 we
will consider the experiment in detail.) What is of interest for us is the conclusion drawn by the authors, namely that the observed beat signal (in the photocurrent) at a difference frequency of
10¹⁰ Hz can be formed only when the photoelectrons follow exactly the fluctuations of the light intensity, which means that the accumulation time must be significantly shorter than 10⁻¹⁰ s. Because
the classically calculated accumulation time is highly dependent on the intensity of the incident light, we need, for a comparison with the predictions of the classical theory, data about the
intensity. Fortunately these were measured by Forrester et al. (1955). The energy flux density on the photocathode surface was found to be about 0.8 W/m², for which (for a wavelength of 546.1 nm) a photon flux density of 2 × 10¹⁸/(m² s) follows. The accumulation time (given by Equation (5.11)) for this value is approximately 7 × 10⁻⁷ s. The obtained value is too large by at least four orders
of magnitude, and the statement about the incompatibility of the classical description with experience can be considered as proven.
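As a rough cross-check of these numbers (my own arithmetic, with standard constants and the Bohr radius for r_0, as the text suggests), Equation (5.11) can be evaluated for the parameters of the Forrester et al. experiment:

```python
import math

# Rough order-of-magnitude check (not from the text) of the classical
# oscillator accumulation time, Equation (5.11):
#   T ~ (1/(pi*D)) * sqrt(h/(nu*j)) * (eps0/mu0)**(1/4)
h = 6.626e-34            # Planck constant, J s
c = 2.998e8              # speed of light, m/s
e = 1.602e-19            # elementary charge, C
r0 = 5.29e-11            # Bohr radius, m (as suggested in the text)
eps0 = 8.854e-12         # vacuum permittivity, F/m
mu0 = 1.2566e-6          # vacuum permeability, H/m

lam = 546.1e-9           # wavelength in the Forrester et al. experiment
nu = c / lam
j = 0.8 / (h * nu)       # photon flux density from 0.8 W/m^2
D = e * r0               # order-of-magnitude atomic dipole moment

T = (1.0 / (math.pi * D)) * math.sqrt(h / (nu * j)) * (eps0 / mu0) ** 0.25
print(j)                 # -> about 2e18 photons/(m^2 s), as quoted
print(T)                 # -> of order 1e-6 s, within a factor of a few of the
                         #    ~7e-7 s quoted, and vastly above the <1e-10 s bound
```

The residual factor of a few reflects the order-of-magnitude character of the choice D = er_0; it does not affect the conclusion.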
Interestingly enough, a formula almost identical to Equation (5.9) can be obtained using a quantum mechanical description of the absorption process for a strong, monochromatic (and hence coherent)
field.1 (The important point is that the atom is treated quantum mechanically.) Actually, one obtains the relation

T_qu = h / (2D E_0)    (5.12)
for the time when the atom is with 100% probability in its upper state, i.e. when it has absorbed with certainty an energy quantum hν from the radiation field starting from the lower state (see, for
example, Paul (1969)). The symbol D stands now for the quantum mechanical “transition dipole moment,” i.e. the non-diagonal element of the electric dipole operator with respect to the two atomic
levels participating in the transition, and we have assumed that the mean lifetimes of the two atomic levels (determined by spontaneous processes and collision processes) are considerably larger than
the “transition time.” In fact, such an agreement between the classical and quantum mechanical predictions is to be expected from the correspondence principle! One might question what is the
essential difference between the two types of descriptions. The answer is, it is the intrinsically statistical character of quantum mechanical predictions, as already explained in Section 4.1, making
a deterministic description of individual microscopic single systems impossible in principle (in contrast to classical theory). The difference is best seen when we consider times t shorter than the
accumulation times given by Equations (5.9) or (5.12). The classical as well as the quantum mechanical theory agree on the statement “The atom has absorbed up to time t only a certain fraction of a
single quantum hν,” but the meaning of it is completely different for the two theories. Within the classical description, the statement has to be taken literally in the sense that each single member
of the ensemble of atoms (being all initially in the same state) absorbed no more and no less than the particular energy amount. However, because such an amount is not sufficient to start ionization
(consider the photoeffect), no photoelectrons can be detected at this moment. The quantum mechanical statements, on the other hand, refer to the behavior of the atoms observable on average in an
ensemble; the individual atoms necessarily do not act in the same way, but instead each follows its own destiny. The mean value (say (1/4)hν) of absorbed energy actually comes about such that, at the considered time t, 25% of the atoms have absorbed one whole quantum while the remaining 75% come out empty handed.

1 In principle, this is not surprising, as the quantum mechanical equations are just a very clever "translation" of the classical ones, and anyway the classical formula Equation (5.9) was derived with a loan from quantum mechanics (by employing Bohr's model).
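The all-or-nothing ensemble statistics can be sketched numerically. Assuming, purely for illustration, the textbook sinusoidal (Rabi) form for the excitation probability – which reaches unity at the transition time of Equation (5.12) – the mean absorbed energy equals (1/4)hν exactly when a quarter of the atoms have absorbed a whole quantum:

```python
import math

# Illustrative sketch (the sin^2 Rabi form is my assumption, chosen so that
# absorption is certain at the quantum transition time of Equation (5.12)):
# the ensemble-mean absorbed energy p(t)*h*nu hides an all-or-nothing
# statistics in which a fraction p(t) of atoms has absorbed a whole quantum
# h*nu and the rest nothing.
def excited_fraction(t, T_qu):
    """Probability that an atom has absorbed a whole quantum by time t."""
    return math.sin(math.pi * t / (2.0 * T_qu)) ** 2

T_qu = 1.0                           # transition time, arbitrary units
p = excited_fraction(T_qu / 3.0, T_qu)
print(round(p, 3))                   # -> 0.25: 25% have absorbed h*nu, 75% nothing,
                                     #    so the mean absorbed energy is (1/4) h*nu
print(excited_fraction(T_qu, T_qu))  # -> 1.0: absorption certain at t = T_qu
```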
The separation of the whole ensemble into two sub-ensembles assumes, strictly speaking, the existence of a suitable measurement procedure which “finds out” the energy states of the atoms. The
photoemission itself can be already considered as a substantial element of such a measurement process. In fact, the release of a single electron is an irreversible process because the electron is
moving away from the atom, and from it, for example through an avalanche process (see Section 5.2), we can obtain a macroscopic signal. To recap, let us again state the fundamental difference between
the classical and the quantum mechanical descriptions of the photoeffect. In the classical description, the fate of the atoms is predicted in fine detail – from the initial conditions and the
external influences it can be uniquely determined, and hence it is identical for all the atoms that have the same initial conditions. In the quantum mechanical description, the life of an atom is a
lottery. Similar to the case of radioactive decay, where some “daring” atomic nuclei decay immediately while others take their time, some of the atoms are ionized after a very short time whereas
others are ionized later; the ionization events are distributed over the whole time scale. The sudden emission of electrons by atoms due to light irradiation setting in after the elapse of the
accumulation time in the classical picture – all the atoms eject an electron more or less simultaneously – does not take place. The weird quantum mechanical rules of Nature guarantee that the process
of ionization is “resolved” into a great number of single events taking place at different times. We would like to emphasize once again that this is a general property of the quantum mechanical
description of Nature; the “dice playing,” which Einstein believed he could not expect of God, actually takes place. We cannot resist pointing out the amazing consequence of this peculiarity of
quantum mechanics which changed so fundamentally our concepts about the micro-processes. We consider as an example a problem which is currently of interest in particle physics, the question of the
stability of the proton. Fundamental considerations led to the concept that the proton is not a stable particle but disintegrates on average after about 10³¹ years. This inconceivably long time,
exceeding the age of the universe by 21 orders of magnitude, seems to make the question highly academic – a speculation having no relation to physics as a science of experience. In fact, we should
realize that if we consider the huge number of protons contained, for instance, in one cubic kilometer of water, some – within days or weeks – might decay, and such a process would be detectable with
good measurement equipment and a lot of patience – at least it is within the realm of experimental possibility! The performed experiments indicate, however, that the decay time of the proton, if it
is really unstable, must be larger than 10³¹ years. This is, let us emphasize once again, a real experimental result!
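The remark about a cubic kilometer of water can be checked with back-of-the-envelope arithmetic (my own estimate, ignoring decay channels and detection efficiency):

```python
# Back-of-the-envelope check (my own estimate) of the proton-decay remark:
# with a mean life of 1e31 years, how many protons in one cubic kilometer
# of water would decay per day?
N_A = 6.022e23                  # Avogadro constant, 1/mol
mass_water_g = 1e15             # 1 km^3 of water in grams (density 1 g/cm^3)
molecules = mass_water_g / 18.0 * N_A
protons = molecules * 10        # 10 protons per H2O molecule (2 from H, 8 from O)
mean_life_days = 1e31 * 365.25
decays_per_day = protons / mean_life_days
print(protons)                  # -> about 3e38 protons
print(decays_per_day)           # -> roughly 1e5 decays per day
```

Even at this fantastic mean life, the sheer number of protons makes the decay rate comfortably within reach of patient experiments, which is exactly the point made above.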
Let us return after this short detour to the photoeffect! The difference between the classical and the quantum mechanical description explained in detail in the previous text manifests itself also
macroscopically, namely in the time dependence of the photocurrent. The classical theory predicts an alternating current, while quantum mechanics – in excellent agreement with experience – predicts a
direct current. This example shows that we must be very careful with the following frequently drawn conclusion: the correspondence principle applies to ensemble mean values; macroscopic measurements
always deal with this type of mean values, and hence, in the macroscopic regime, the quantum mechanical description goes over into the classical. In fact, we do not measure the average value of the
absorbed energy in the photoeffect (which, as explained, is the same in both cases). The conclusion applies only to the attenuation of the incident light as a result of the photoeffect. The arguments
presented in this section leave no doubt that one of the aspects of radiation is its particle nature. The electromagnetic energy of a radiation field (of small intensity) cannot be imagined as being
more or less evenly distributed in space because in such a case the atom would not be able to “collect” within such a short time the required amount of energy hν. Independent of all the theoretical
considerations, we can adopt the following pragmatic point of view: there are instruments (photodetectors) which reliably indicate that an energy amount of hν was taken from the field. Such an event
is called “detection of a photon.” Because the process inevitably leads to the destruction of the photon, we do not learn too much about the real “existence” of the photon. So it seems reasonable to
turn our attention to the “birth” of a photon. This takes place during the process of spontaneous emission.
6 Spontaneous emission
6.1 Particle properties of radiation

One of the most important properties of macroscopic material systems is their ability to emit radiation spontaneously. According to quantum mechanics, the
emission process is realized in the following way: an atom (or a molecule) makes a transition from a higher lying energy level (to which it was brought, for example, by an electron collision) to a
lower lying energy level without any noticeable external influence (in the form of an existing electromagnetic field), and the released energy is emitted in the form of electromagnetic radiation. The
discrete energy structure of the atom dictated by the laws of quantum mechanics is imprinted also on the emission process (quantization of the emission energy), since the energy conservation law is
also valid for single (individual) transitions. Hence, a single photon, in the sense of a well defined energy quantum, is always emitted. The emitted quanta can be directly detected by a
photodetector. (Strictly speaking, identifying a registered photon with an emitted one is possible only when it is guaranteed that the observed volume contains only a single atom; for details see Sections 6.8 and 8.1.) Under realistic conditions, the experiment can be performed in the following way. First, a beam of ionized atoms is sent through a thin foil; the emerging beam then consists of
excited atoms. (This procedure is known as the beam–foil technique.) A detector is placed at a distance d from the foil to detect light emitted sideways by the atomic beam (Fig. 6.1). The setup is
used to measure the relationship between the frequency with which the detector responds and the distance d. (The detector is moved along the beam, providing different values for d.) Obviously a
single event registered by the detector represents the absorption of an energy quantum at time t = d/v (with v being the velocity of the atoms) counted from the instant of the excitation of the atom.
Fig. 6.1. Beam–foil technique: an atomic beam passes through a foil; a detector behind a diaphragm observes the beam at a distance d from the foil.

Because the response of the detector indicates that the atom emitted all of its excitation energy,1 we can conclude that the atom at time t (and also later if no additional transitions follow) is with certainty in the lower
state; i.e. it has completed its transition. Another experimental method employed in the observation of the quantum-like emission uses the possibilities offered by modern laser technology, which
allows the generation of extremely intense and ultra-short light pulses (of the duration of picoseconds down to femtoseconds).2 Using pulses of appropriate frequency it is possible to excite atoms
which have initially been in the ground state at a chosen instant – more precisely, within a time interval much shorter than the mean lifetime of the excited level. With the help of a photodetector
we can then determine the time t (that has elapsed since excitation) at which the first photon is detected. By repeating the experiment many times we can determine the response frequency of the
detector as a function of time t. (To do so we need a detector with a response time much shorter than the duration of the whole emission process. In addition, the experiment must be performed for
small light intensities so the detection of photons that have been emitted later is possible.) In this way we obtain the same information as in the case of the beam–foil technique. These two
experiments present the atomic transition as discontinuous: as an instantaneous and jump-like process. In fact, this picture was introduced previously by Niels Bohr. It seems as if for a certain time
interval nothing happens; the atom remains in its initial state until abruptly, at an unpredictable moment t, it “decides” to make the transition. We know from our experience that the response
frequency of the photodetector, as a function of time t counted from the earliest possible arrival time t_0 of a light quantum (where t_0 = t_e + t_l, t_e being the "instant" of excitation and t_l being
the time needed by light to reach the detector from the emitter), has an exponential form similar to that describing radioactive decay. This means that the probability of the detector registering a
photon within the time interval t . . . t + dt is given by

dw = const. × e^(−Γt) dt,    (6.1)

where Γ is the decay rate of the excited level. On integrating Equation (6.1), we find that, from all the registered events, that fraction for which a photon is detected at a time later than t equals exp{−Γt}. From this we can conclude that – assuming a jump-like elementary emission process – a fraction exp{−Γt} of the initially excited atoms is, at time t, still in the upper (excited) state. This means that Γ⁻¹ can be interpreted as the mean lifetime of the excited level.

1 Strictly speaking, this conclusion is justified only when the experimental conditions guarantee that the detector "faces" only one atom during its response time; otherwise the absorbed energy could originate from several atoms.
2 In many cases we may also work with conventional sources such as mercury high pressure lamps complemented with a chopper, for example a rotating disk with small holes which "chops" the emitted light into a sequence of pulses.

In fact, it recently became possible to visualize the quantum jumps of individual atoms. The observation uses resonance fluorescence induced by a (resonant)
incident laser wave. During the process a fair number of photons are emitted to the sides. We can imagine the process of scattering in the following way. An atom, which is initially in the ground
state, is excited by the intense radiation and reaches a higher energy level. The energy obtained by the excitation is radiated away in a random direction in the form of a photon.3 In complete
analogy to spontaneous emission, in this process the atom takes its time: the frequency distribution for the emission “instants” is exactly the same as in the previous case (see Equation (6.1)). On
average, it takes Γ⁻¹ seconds until an emission takes place, and it is of particular importance that the mean lifetime of the corresponding atomic level, Γ⁻¹, is a parameter independent of the
intensity of the incident radiation. (In contrast, the atom is “pumped” faster from the ground state to one of the excited states as the intensity of the pump wave increases.) After the emission is
completed, the atom is again in the ground state and the process can start again. Longer illumination times result in a larger number of emitted fluorescence photons. Let us now consider the case
when the upper level can, in addition, decay into a third, lower lying, level through a transition associated with spontaneous emission. From this it can return through additional spontaneous
transitions to the ground state. Let us assume that the first transition is very weak (for example, it might be a quadrupole transition); in such a case, the competition between the transitions will
be decided in favor of resonance fluorescence – however, this will not last forever: at an unpredictable moment the weak transition “takes its turn.” The consequence is an abrupt interruption of the
fluorescence – this lasts until the atom has reached the ground state and thus can be “driven” again by the laser radiation. The result is that we can observe an irregular sequence of bright and dark
periods of resonance fluorescence (intermittent fluorescence; see Fig. 6.2), and the interruption of the fluorescence indicates that a spontaneous transition, the famous quantum jump,³ has just happened.

³ We employ without hesitation the photon picture because we assume that the scattered photons are detected using photodetectors.

Fig. 6.2. Observed alternation of dark and bright periods in resonance fluorescence; counting rate (in 1000/s) versus time (in s) (from Sauter et al. (1986)).

Because during the bright phase a large number of laser photons are scattered, the fluorescence radiation can be viewed as a macroscopic signal, and its "switch off" can be
reliably detected. In this way we are able to detect, in an indirect way, the spontaneous emission of a light quantum with certainty, whereby we can even use low efficiency detectors. Hence, the
described technique is well suited to determine the mean lifetime of an excited level with respect to the weak transition. To do so we have to measure the dependence of the fluorescence interruption
rate on the intensity of the fluorescence radiation (Itano et al., 1987). However, this method for observing quantum jumps is applicable only when we deal with a single atom. (The random character of
the bright and the dark phases of the fluorescence radiation leads to an essentially constant total radiation when the contributions from several atoms are superposed.) Fortunately, modern state of
the art detectors allow us to fulfil this precondition almost ideally. It is possible to capture an individual ion in an electromagnetic trap, the so-called Paul-trap, and keep it there, in
principle, for an infinitely long time (Neuhauser et al., 1980), an essential trick being the electromagnetic “cooling” of the ion. This means that the back and forth oscillations of the ion are
damped away by irradiating the ion with laser light of frequency slightly below the corresponding resonance frequency of the ion, thus utilizing the radiation pressure. In fact, the fluorescence
technique can be further improved. To this end, a second laser is used to drive the weak transition 1 ↔ 2 which is coupled to the ground state (the so-called V -configuration; see Fig. 6.3). Then the
absence of
Fig. 6.3. Two coupled atomic transitions induced by laser radiation L1 and L2. Intermittent resonance fluorescence is observed on the strong transition 1 ↔ 3.
the resonance fluorescence associated with the strong transition 1 ↔ 3 indicates that the ground state has been emptied by an induced weak transition 1 → 2 (also jump-like). After a while, the atom
returns, either through an induced or spontaneous transition, to the ground state, and the fluorescence is again “switched on.” The described technique might be relevant for the construction of
extremely precise frequency standards. With the weak transition we have at our disposal an extremely narrow emission or absorption line. A properly chosen line of this form could be used for the
definition of a frequency standard. Its realization by a laser beam could be achieved simply by tuning a suitable laser to the weak transition, whereby resonance would be indicated by the appearance
of dark periods in the resonance fluorescence. Finally, let us mention that, with the help of the described experimental method, quantum mechanical energy measurements may also be carried out. To
this end, instead of continuous laser light resonant with the strong transition we use a sequence of very short pulses, each of them so intense that it generates a large number of fluorescence
photons. If it is otherwise known that the atom is in level 1 or 2 (of special interest is the case of a superposition of the corresponding two energy states), then each of the pulses “interrogates”
the atom to discover which energy level it is in. The appearance of fluorescence indicates the atom to be in level 1, while its absence allows the conclusion that it is in level 2. A recent
successful experimental demonstration of the so-called quantum Zeno effect, explained in Section 4.1, was accomplished using this method (Itano et al., 1990).
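The jump-like emission statistics behind Equation (6.1) are easy to test numerically: draw random emission instants from the exponential waiting-time distribution and check that the fraction still excited at time t is exp(−Γt) and that the mean emission time is Γ⁻¹. A minimal sketch; the value of the rate Γ is arbitrary, chosen only for illustration:

```python
import math
import random

def simulate_emission_times(gamma, n_atoms, rng):
    """Draw jump-like emission instants with waiting-time density
    dw = gamma * exp(-gamma * t) dt, cf. Equation (6.1)."""
    return [rng.expovariate(gamma) for _ in range(n_atoms)]

rng = random.Random(1)
gamma = 2.0                       # decay rate (arbitrary units)
times = simulate_emission_times(gamma, 200_000, rng)

# Mean emission time should approach the mean lifetime 1/gamma.
mean_lifetime = sum(times) / len(times)

# Fraction of atoms still excited at time t should approach exp(-gamma*t).
t = 0.7
surviving = sum(1 for x in times if x > t) / len(times)

print(mean_lifetime)   # ≈ 0.5
print(surviving)       # ≈ exp(-1.4) ≈ 0.247
```

The same machinery, run with two competing rates, reproduces the bright/dark telegraph statistics of intermittent fluorescence.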
6.2 The wave aspect

Instead of counting the emitted photons (during the detection process the photons vanish and we cannot say anything more about their physical properties), we can also perform
experiments which reveal their wave aspect.
For example, it is possible to perform interference experiments, in principle, for arbitrarily small intensities, as will be explained in detail in Section 7.2. (The exposure time of the photographic
plate has to be long enough to obtain an interference pattern.) This implies that a single photon must be able to interfere (with itself). Let us consider, for example, Young’s interference
experiment (see Section 2.3), in which each of the photons somehow finds out that the diffraction screen has two holes because it acts accordingly, as can be seen from the interference pattern on the
observation screen. A physical explanation of this effect cannot exist without making use of the picture of an extended wave (in the transverse direction). The interference pattern allows us also to
draw conclusions about the pulse length corresponding to a photon. Let us look at the region on the observation screen where the interference fringes are at the boundaries of visibility. There, the
difference between the two path lengths (counted from the first and second hole, respectively) approximately equals the length of the “elementary wave.” (The head of the wave coming from one hole
meets the tail of the wave coming from the other hole so that interference is still possible.) A convenient method for measuring the length of the elementary wave is offered by the Michelson
interferometer. One has to vary the length of one of the interferometer arms until the visibility becomes poor. From the experimental point of view, there is no doubt that under certain experimental
conditions the photon must be treated as a spatially extended wave. However, such a wave packet needs for its formation a finite time because the duration Δt of the emission process determines, according to the formula l = cΔt, where c is the velocity of light, the length l of the pulse (in the propagation direction).⁴ This fact is, at least in the classical description, obvious. The quantum mechanical description does not invalidate this conclusion because the relation between l and Δt in question is a direct consequence of two principles, valid in the classical theory as well as the
quantum mechanical theory: (a) the interaction between an electric dipole (idealized as point-like) and the electromagnetic field is local (i.e., the dipole oscillation and only the simultaneous
electric field strength residing at the same position affect one another directly); and (b) the electromagnetic excitation propagates in space with the velocity of light. The observed wave properties
of spontaneously emitted light are understandable only by assuming a continuous emission process of a certain finite duration. Information about the duration can be obtained from the aforementioned
interferometric measurement of the elementary wavelength or from the frequency spectrum of the emitted radiation.

⁴ Usually, the wave propagates simultaneously along different directions; in the most frequent case of an electric dipole transition, it does this according to the direction characteristics of the Hertzian dipole radiation (for details see Section 6.5).

The experimental as well as the theoretical analysis shows that the following relation between the natural linewidth Δν and the mean lifetime T = 1/Γ of the excited state holds:

Δν T = 1/(2π).   (6.2)

Comparing this result with the fundamental relation Equation (3.22) between the linewidth and the duration of a pulse (which remains untouched by the quantization of the radiation
field), we come to the conclusion that the duration of the emitted elementary pulses – and necessarily also of the radiation process itself – is given by the mean lifetime of the upper level.
Equation (6.2) is often seen as a consequence of the general quantum mechanical relation Equation (3.23) between the energy and the time taken to measure it. The presence of an uncertainty in the
frequency of the emitted light (according to Einstein's relation between energy and frequency of a light quantum, E = hν, this is equivalent to an energy uncertainty ΔE = hΔν) is interpreted in such a
way that the upper level has the same energy uncertainty and the latter is directly transferred to the emitted radiation field during emission. A more detailed argument is as follows. Because of the
finite lifetime T of the upper level we cannot, in principle, perform energy measurements (on average) with a duration longer than T . According to the uncertainty relation Equation (3.23) we cannot,
as a matter of principle, determine the energy (on an ensemble of atoms to which all quantum mechanical statements are related) with infinite precision, but only up to a possible error of the order ΔE = ħ/T. As a consequence, we can think of the energy uncertainty as a "real property" of the atomic level. It finds its way into the energy of the photon, which is what is stated by Equation (6.2).
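As a numerical illustration of Equation (6.2) (the lifetime value below is an invented, but typical, nanosecond-scale figure, not one taken from the text):

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J·s
H = 6.62607015e-34       # Planck constant, J·s

T = 16e-9                          # assumed mean lifetime of the upper level, 16 ns
delta_nu = 1 / (2 * math.pi * T)   # natural linewidth, Eq. (6.2)
delta_E = H * delta_nu             # energy uncertainty: ΔE = h·Δν = ħ/T

print(f"Δν ≈ {delta_nu / 1e6:.1f} MHz")   # ≈ 9.9 MHz
print(f"ΔE ≈ {delta_E:.2e} J")            # equals ħ/T
```

A nanosecond-scale lifetime thus corresponds to a natural linewidth of roughly ten megahertz, tiny compared with the optical carrier frequency itself.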
The described approach has two great advantages. First, it does not make explicit use of the assumption that the “decay” of the excited state is caused only by the emission process. Hence, it is also
applicable to cases when, besides spontaneous emission, additional mechanisms (for instance inelastic collisions) are also at work which shorten the mean lifetime. Nevertheless, Equation (6.2) should
be valid also under these circumstances. It predicts the appearance of a broader linewidth, and, in fact, this effect has long been known as the pressure broadening of spectral lines – caused by
collisions, between atoms or molecules in gases, that become more frequent with increasing pressure. (A detailed picture of this process will be given in Section 6.3.) Secondly, the above argument is
easily generalized to the case when not only the upper but also the lower level is unstable. The atom can leave this level either by a coupling to a lower lying level (associated with spontaneous
emission) or by another, additional, possibility. Then the energy uncertainty at the disposal of the photon, ΔE = hΔν, is given as the sum of the uncertainties of the two levels; i.e., with the two levels labeled 1 and 2, the relation

Δν = (1/h)(ΔE₁ + ΔE₂) = (1/2π)(1/T₁ + 1/T₂)   (6.3)

holds. This result is also in very good agreement with our
experience. Let us mention one additional fact, known from the physics of microwaves, which supports the assumption that spontaneous emission is associated with continuous radiation of an
electromagnetic wave. It deals with spontaneous emission of microwaves, but the radiating atom or molecule is not placed in free space (as assumed up to now) but is put into a resonator whose
dimensions are adjusted to the wavelength of the radiation. This means that the eigenfrequency of the resonator coincides with the middle frequency of the emitted radiation. The presence of
(reflecting) resonator walls influences drastically the radiation process: the mean lifetime of the excited state is shortened – compared with radiation into free space – by several orders of
magnitude! Let us consider as an example the microwave emission of an ammonia molecule at wavelength 1.25 cm (it earned respect because the first maser action was achieved using this molecule; see
Gordon, Zeiger and Townes (1954)), where we find a reduction of the mean lifetime from the order of 10⁷ s (approximately 115 days) down to 0.1 s. This effect can be physically understood in such a way that the front of the wave
is reflected when it hits the resonator wall and, when it passes the molecule again, it stimulates the process of emission (accelerates radiation). It is important that the reflected wave arrives at
the position of the molecule with an appropriate phase, and this is guaranteed by tuning the resonator to the transition frequency of the molecule. What applies to the wavefront naturally applies
also to the field emitted later, and as a result the molecule experiences an increasing stimulation while the resonator is gradually filled with electromagnetic energy (a standing wave is built up).
However, we are still dealing with spontaneous emission, i.e. a radiation process starting from the vacuum state of the field. On the other hand, when the phase of the reflected (backwards running)
wave is not favorable with respect to the radiating dipole, the emission process is not stimulated but inhibited. Such a situation is present when the emission frequency is considerably different
from the neighboring resonance frequencies. In the ideal case of a resonator with perfectly reflecting walls, the emission would be completely suppressed. Under realistic conditions a drastic
increase of the mean lifetime of a Rydberg level⁵, compared with spontaneous emission in free space, was observed (Hulet, Hilfer and Kleppner, 1985).

⁵ Rydberg states are highly excited hydrogen-like states of atoms characterized by a very large principal quantum number n. Due to the large excitation the atoms are strongly "bloated" and have a very large (transition) dipole moment. The consequence is a strong interaction with the electromagnetic radiation, and they also decay spontaneously very fast (in a transition n → n − 1). Because the Rydberg levels are very close to each other, the transitions are associated with the emission of microwave radiation.
In fact, the stimulating as well as the inhibiting influence of the surroundings on the emission process was also observed in the optical frequency domain. The first experiments demonstrated that a
reflecting wall placed at a distance d (of the order of few hundred nanometers) from the emitter has a measurable effect on the emission process in the form of a change in emission duration
(Drexhage, 1974). The effect is further enhanced when the emitter is placed between two mirrors. The experiment was performed using a metal mirror onto which several monomolecular layers of fatty
acid molecules were deposited, followed by the emitters – also in the form of a monomolecular layer. In this way the desired distance d between the emitters and the metal mirror was adjusted. The
boundary between the emitter layer and the air acted as a second (partly reflecting) mirror. When the distance d was altered, a significant change in the mean fluorescence lifetime of the excited
level of the emitter molecule (a dye molecule was used) was observed; depending on the phase relation, larger as well as smaller values (compared with the normal situation of an uninfluenced emission
process) were measured. Recently microscopic optical resonators have been successfully constructed using flat mirrors and a piezoelectric element which reduces their mutual distance down to a
separation of the order of a light wavelength. Using this setup a pronounced dependence of the mean lifetime of the excited level of an emitter (radiating in the optical domain) on the mirror
separation, i.e. on the resonator tuning, could be observed (De Martini et al., 1987). Let us note finally that the wave character of light spontaneously emitted by an atom can be considered as
already proven by the fact that a superposition of many such elementary waves represents an electromagnetic wave process which is classically well described. This is demonstrated beyond any doubt by
classical optics because light from conventional (thermal) light sources is of this type.
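The relation l = cΔt discussed above translates lifetimes directly into spatial pulse lengths; taking Δt equal to the mean lifetime T gives a quick feeling for the scales involved (the sample lifetimes are invented round numbers):

```python
C = 299_792_458.0   # velocity of light, m/s

def pulse_length(delta_t):
    """Spatial length of the elementary wave packet, l = c * Δt."""
    return C * delta_t

# A typical optical transition with T = 10 ns gives a pulse a few metres long.
print(pulse_length(10e-9))                 # ≈ 3 m

# A metastable (e.g. gamma) lifetime of one year gives a wave train
# of about one light year.
YEAR = 365.25 * 24 * 3600                  # seconds in a Julian year
print(pulse_length(YEAR) / 9.4607e15)      # ≈ 1 (in light years)
```

The light-year-scale figure is exactly the kind of absurdly long classical wave train that the paradoxes of the next section turn on.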
6.3 Paradoxes relating to the emission process

The experimental facts described in Sections 6.1 and 6.2 force us to imagine the emission process – depending on the experimental conditions – either as
a jumplike process connected with the release of an energy quantum, i.e. a localized energy bundle, or as continuous radiation of an electromagnetic wave. When we try to describe continuous radiation
classically, we face great difficulties. These difficulties are rooted in the fact that classically – due to Equation (3.4), which describes the relation between the field strength and the energy
density – the emission of a wave also means simultaneous emission of energy. In particular this implies that the total available energy of the atom (given by the difference in energy of the energy
levels between which the transition takes place) is converted into electromagnetic energy only when the radiation process is fully completed; i.e. only after a
time which is larger than the mean lifetime of the upper level. This statement, however, is in contradiction with photoelectric measurements, during which we detect much earlier events in which the
whole energy hν is fed to the detector via the field. The discrepancy is even more dramatic in nuclear physics. For the emission of γ quanta from atomic nuclei, lifetimes of the order of years are
reported. According to the classical picture, after exciting the particular nuclei we would have to wait a number of years until the nuclei emit the corresponding energy of the γ quanta (this would
yield wave trains of the length of light years), while, according to quantum mechanics – in agreement with experience – γ quanta are immediately detectable. The peculiar property of quantum mechanics
that energy is “traded” only in finite minimum portions, can be found also in inelastic collisions. When such a collision happens between an excited atom A and a (not excited) different atom B, then
the whole excitation energy of A, i.e. a whole energy quantum hν, is converted into kinetic energy of the collision partners. The only other possibility is that no excitation energy is exchanged
during the collision. Were only part of the excitation energy converted, atom A would apparently not know what to do with the rest. Fractions of hν cannot be emitted for fundamental reasons! This
aspect of the collision process has surprising consequences for the effect of pressure broadening of spectral lines, as mentioned in Section 6.2. We argued that such collisions are responsible for
the shortening of the mean lifetime of the upper atomic level. However, it turns out that collision processes in which the excited atoms really lose their excitation energy, and thus reach the lower
level earlier, do not have any influence on the properties of the emitted total radiation – except on its intensity – because the atoms involved do not radiate at all! In contrast to this argument,
the collision broadening is easily understandable from the classical point of view: an excited atom radiates for a certain time and loses part of its excitation energy; at a certain moment a
collision happens, and the remaining available energy is converted into kinetic energy of the collision partner. The emission process is abruptly terminated, and the whole emitted pulse is shorter
than “standard,” which implies a broadening of the frequency spectrum (as explained already in Section 3.4). This is not what the atom experiences from the point of view of quantum mechanics. To
obtain an agreement with experience, we have to assume that also such collisions happen which are not associated with energy transfer but disturb the process of radiation and hence cause a frequency
broadening. Quite a natural conception of the process – in analogy to the classical description – is that, due to the collision, the emission of the electromagnetic wave (considered as a continuous
process) is terminated but the wave nevertheless contains a whole quantum of energy.
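The classical collision-broadening picture sketched above — emission runs until either it completes naturally or a collision cuts it off — can be mimicked with a toy Monte Carlo: if undisturbed emission and collisions are independent exponential processes with rates Γ and γ_coll, the effective pulse duration is the minimum of the two waiting times, which is again exponential with rate Γ + γ_coll, so by Equation (6.2) the line broadens accordingly. All rate values here are invented for illustration:

```python
import math
import random

def mean_pulse_duration(gamma, gamma_coll, n, rng):
    """Pulse ends at the earlier of natural completion and the first collision."""
    total = 0.0
    for _ in range(n):
        t_emit = rng.expovariate(gamma)        # undisturbed emission duration
        t_coll = rng.expovariate(gamma_coll)   # first collision instant
        total += min(t_emit, t_coll)
    return total / n

rng = random.Random(7)
gamma, gamma_coll = 1.0, 3.0
T_eff = mean_pulse_duration(gamma, gamma_coll, 200_000, rng)

print(T_eff)                          # ≈ 1/(gamma + gamma_coll) = 0.25
print(1 / (2 * math.pi * T_eff))      # broadened linewidth via Eq. (6.2)
```

The shortened mean duration 1/(Γ + γ_coll) feeds straight into Equation (6.2), giving the pressure-broadened width (Γ + γ_coll)/2π.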
A possible interpretation of the collision between atom A and a different atom B is that it is a kind of energy measurement on atom A: when atom B (and also atom A) increase their kinetic energy, it
means that atom A was found in the excited state and hence had not emitted a photon; otherwise it was found in the ground state, which implies, at least from the energy point of view, that the
process of emission was complete at the moment of collision. A more precise quantum mechanical substantiation of this interpretation is given by the Weisskopf–Wigner solution for spontaneous emission
(see Section 6.5). The result is that, on average, the quantum mechanical and the classical description of the effect of inelastic collisions on the emission process agree (see also Lenz (1924)). In
the classical picture, fractions of the energy hν are emitted, but as a whole the ensemble of atoms emits the same energy as it does according to quantum theory, and the mechanism leading to the
broadening of the spectral line – the shortening of the emitted pulses – is the same in both cases. However, the quantum mechanical description causes considerable difficulties when we try to ascribe
to the emitted field a reality in the same sense as in classical physics. Because the atom cannot “know” when (or whether at all) it will suffer a collision, it is certain that the emission process
will start as in the undisturbed case. This implies that the atom will continuously emit an electromagnetic wave. A sudden collision which robs the atom of its whole excitation energy implies that
the radiated pulse is all at once “without” energy, which is, from the classical point of view, an absurdity. One might be tempted to assume that the atom should “reel back again” the field emitted
“in good faith.” However, excluding supernatural influences, this must happen not faster than with the velocity of light. This might, under unfortunate circumstances (recall the example of the γ
quanta emission taking years), be completely absurd because in the mean time almost anything can happen to the atom! It really looks as if the previously emitted field collapses instantaneously when
it turns out that the “expected” energy supply will not be available (similar to a research project which dies instantly when the promised funding is at once cut). As in the case of the accumulation
time, we are forced to question the validity of the classical relation Equation (3.4) between field and energy density. It seems that, according to quantum theory, fields can exist without energy and
then instantaneously collapse when we "notice" the absence of energy.

6.4 Complementarity

The observations described in Sections 6.1 and 6.2 clearly indicate a wave–particle dualism for the process
of spontaneous emission of radiation. It is of importance, depending on the concrete experimental conditions, that only one or the other of the complementary sides of light are revealed at any time.
In the following we
would like to present a detailed explanation of the fundamental impossibility of measuring simultaneously the precise moment of emission and the frequency of the photon. In the case of the beam–foil
technique the situation is immediately clear. To guarantee that the detector functions correctly, it is necessary to ensure, by appropriate optical imaging, that it is reached only by light emitted
from a certain part of the beam. The atoms flying by are coming into the field of view of the detector only for a short time. When we consider the excited atoms as oscillators radiating continuously
for a certain time (in the form of a Hertzian dipole), then only a small part of the wave reaches the detector. When we replace the detector by a spectrometer, we find an unrealistically large
linewidth, which (corresponding to Equation (3.23)) is caused by “cutting” a small piece from the whole wave, and so does not have anything in common with the original frequency spectrum. In the case
of the second method described in Section 6.1, in which the time instant when the detector responds was measured directly, we notice that a frequency measurement leaves the “arrival time” of a photon
undetermined, and this uncertainty increases with increasing resolution of the spectrometer. Let us clarify the situation by analyzing a realistic frequency measurement arrangement, consisting of a
spectral apparatus (for example the Fabry–Perot etalon described in Section 3.4) with photographic registration of the exiting light. (Modern methods use an array of photodiodes for detection.) Light
of different frequencies is typically imaged onto different positions (usually having the form of rings or stripes). However, a single photon can, at best, cause one single blackening spot or the
response of a single localized detector, respectively. This makes it quite clear that we cannot make any statements about the frequency spectrum of a single photon. We have to repeat the measurement
on many photons and obtain from the detected photons – dependent on the position where the photons have been detected – a frequency spectrum, which must be understood as a characteristic of the
respective ensemble. Of physical importance in this connection is the fact that we cannot draw any conclusions about the instant when the photon entered the apparatus from the moment at which the
photon was detected. In Section 3.4, we discussed at length, using the example of a Fabry–Perot etalon, that the filter effect is primarily due to the multiple reflections between the two silver
layers. It causes a stretching of the incident pulse, and, because the photon can be detected at any point with non-zero light intensity, the detection time is spread over the interval corresponding
to the length of the outgoing pulse. Basically, from the experimental point of view, we can accept the dualistic picture of spontaneous emission by realizing that it is impossible, in principle, to
observe a microsystem “as it is.” In fact, as P. Jordan would say, we can see only the traces it has left in the macro-world, and these traces, we must accept,
point in one case towards a particle and in the other towards a wave character of radiation. In the following section we explain how the theoretical description of quantum mechanics succeeds in
putting the complementary aspects "under one roof."

6.5 Quantum mechanical description

A satisfactory treatment of (collision free) spontaneous emission within quantum mechanics was presented by
Weisskopf and Wigner (1930 a, b) (see also Section 15.3). These two researchers found an approximate solution which, in contrast to results obtained using perturbation methods, is not limited in its
validity to short times. In the following we will concentrate on the physical implications of the Weisskopf–Wigner theory. Quantum mechanically we describe the spontaneous emission process as a
transition from the initial state, characterized by an atom in the excited state and no photon present, to the final state, i.e. the situation where everything is already “settled;” this refers to
the situation that exists after the photon has been emitted and the atom has lost its excitation energy (completely). We write this as

ψ₂ Φ_vacuum → ψ₁ Φ_photon,

where ψ₁ and ψ₂ are the wave functions corresponding to the upper and lower levels of the atom, respectively, Φ_vacuum describes the vacuum state of the electromagnetic field (cf. Section 4.2) and Φ_photon describes a state with exactly one photon in free space. An important achievement of the Weisskopf–Wigner theory is that it is able to give an explicit expression for the wave function Φ_photon. With
it we have, at our disposal, the maximum information about the radiation field that is possible according to the laws of quantum mechanics. What can be learnt from the wave function about the
physical properties of the emitted photon? It can be expressed as a superposition of energy eigenstates of the whole field characterized by the property that exactly one photon is present in a
certain mode of the radiation field. The modes correspond, as explained in Section 4.2, to plane waves with a well defined propagation direction, frequency and polarization. Such a result is, in
fact, not surprising because basically all the modes of the radiation field couple to the atom. In this way, quantum theory “unites” two contradicting (from the classical point of view) properties.
On the one hand, the emission does not take place in a fixed direction; the propagation direction – corresponding to that of a classical dipole wave – is undetermined, making interference experiments
possible. On the other hand, we find, when surrounding the atom at a greater distance with (ideal) detectors, that only one detector responds, thus indicating that it has absorbed the whole
excitation energy of
the atom. In such an experiment, the photon looks like a “light particle” which is ejected by the atom in a definite (random) direction. From the weight with which the individual modes participate in
the superposition Φ_photon, we can immediately read out the direction dependence, the polarization properties and the frequency spectrum of the radiation (Section 15.3). It turns out that the
direction characteristics are identical to that of classical dipole radiation⁶ (see Equation (3.15)), and the same applies also to the polarization properties. Next, the radiation has a Lorentz-like line profile,

w(ν) = const. / [(ν − νr)² + (1/(4πT))²],   (6.5)

with

νr = (E₂ − E₁)/h

being the resonance frequency and E₁ and E₂ being the lower and upper level energies, respectively. From the frequency distribution, Equation (6.5), follows the value (2πT)⁻¹ for the linewidth Δν;
i.e. Equation (6.2), which was anticipated in Section 6.2. The presence of a finite linewidth Δν implies a finite uncertainty of the emitted energy ΔE = hΔν. The wave function Φ_photon allows us to gain
knowledge about the spatial extension of the emitted pulse and its propagation in space. The corresponding information is obtained by calculating quantum mechanical expectation values of field
variables (operators) for this wave function. ˆ of the electric field Let us start with the simplest, the expectation value E strength operator. The result is zero, which may come as a surprise.
However, the result does not imply, as one might at first conclude, that the electric field strength itself vanishes; i.e. that an appropriate measurement (which is only hypothetical anyway because
no feasible procedure is known) gives in each single case a zero result. If this were so, the measurement of Eˆ 2 would also give the value zero, which contradicts the fact (discussed below) that the
expectation value of Eˆ 2 does 6 We have to keep in mind that a two-level system has a spatial orientation. To make a dipole transition possible,
the (total) angular momenta of the two levels must differ by one (in units of h¯ ), which means that at least one of them is a sub-level of a degenerate level. To separate the sub-levels from each
other so that one of them can be chosen, we need a static, homogeneous magnetic field inducing a Zeeman splitting of the sub-levels, and its direction simultaneously defines the dipole axis. More
precisely speaking, this applies to a transition for which the magnetic quantum number (the angular momentum component with respect to the magnetic field direction) does not change. However, in the
case of a change by +1 or −1, the motion of the radiating electron corresponds to that of a ring current. Also, in this case there is, concerning the direction characteristics as well as the
polarization properties of the radiation, a complete agreement between the quantum mechanical and the classical descriptions.
6.5 Quantum mechanical description
not vanish. Because the quantum mechanical expectation value should be understood as a mean over many individual measurements always performed under the same physical conditions, the statement ⟨Ê⟩ = 0 can be understood only in such a way that the phase of the electric field strength is completely random; i.e. a corresponding measurement would yield any result between 0 and 2π with equal frequency. Since we are dealing with a pure state represented by a wave function, the uncertainty, as discussed in Section 4.1, is of purely quantum mechanical nature; i.e. it cannot be understood in the sense of classical statistics that "in reality" each single photon has a certain phase that we just do not know. The expectation value of Ê² is, over the whole space, larger than zero but is not the same everywhere. In a certain space-time region G, its value is above the noise level caused by vacuum fluctuations (see Section 4.3). This simply means that the (mean) intensity ⟨Ê⁽⁻⁾Ê⁽⁺⁾⟩, which is free of the vacuum influence (see Section 5.2), is non-zero (positive) in G. Because the intensity, according to Equation (5.4), determines the response of the photodetector, we can expect that at one of the space-time "points" belonging to region G a photon will be detected by chance when a detector is placed at such a position. The space-time region indicates the space-time structure of the emitted pulse. Fortunately, the dependence of the quantity ⟨Ê⁽⁻⁾(r, t)Ê⁽⁺⁾(r, t)⟩ on space and time corresponds exactly to the predictions of classical electrodynamics
for the space-time dependence of the intensity of a wave train from a Hertzian dipole (once “kicked” and then left to itself). Observing the radiation along a certain direction (as it is perceived by
an observer at a certain distance from the atom), we find the following picture. The intensity is that of a shock wave with a vertical wavefront and an exponentially decaying tail, propagating away
from the atom; when traced back, it can be seen that the shock wave began at the instant when the system was in the abovementioned initial state. Fig. 6.4 shows an “instantaneous picture” of this
process. The intensity falls off to the e-th part at a distance R = cT from the wavefront, where c is the velocity of light and T is the mean lifetime of the upper level. From the intensity
dependence, the duration of the pulse is found to be T , which is exactly what we expect according to the classical description. The result shows that the Weisskopf–Wigner solution accurately
reflects the wave aspect of the emitted light. However, for the real measurement of the spacetime structure of the wave – as well as of the frequency spectrum – photodetectors are the only apparatus
available which make the photon vanish as a whole. This implies that the spatial distribution of the emitted elementary wave can be determined only from an ensemble – each single measurement produces
only one space-time “point” at which the photon was found, and it takes many such measurements to form gradually the space-time picture of the wave process. In fact, all the real observations
demonstrate the particle nature of light. Paradoxically, we
Spontaneous emission
Fig. 6.4. The intensity I of a spontaneously emitted wave train as a function of the distance R from the emitter at a fixed time.
can make statements about the wave properties of photons by performing experiments involving their detection as localized particles; this is the only possibility at our disposal. The disagreement
with the classical description is, to recap, that classically it is impossible to measure the whole energy contained in a pulse at one position because it is, roughly speaking, distributed over a
continuously expanding sphere. The Weisskopf–Wigner theory offers in addition a detailed description of the transition, Equation (6.4), itself by enabling us to write the wave function of the whole
system formed by the atom and the field as a function of time. That it is a wave function, and hence a pure state, is a quite general property of the quantum mechanical formalism. We start from a
wave function (in our case from Ψ₂Φ_vacuum); it changes according to the Schrödinger equation, but still remains a wave function. Only through interaction with a macroscopic system acting as a kind
of measurement apparatus can the pure state be converted into a statistical mixture. We can conclude that the exclusion of external perturbations enables the whole system to remain in a pure state.
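The consistency of the two pictures obtained so far – the exponentially decaying wave train of Fig. 6.4 and the Lorentzian line profile of Equation (6.5) – can be checked numerically. The following minimal sketch (all numerical values are illustrative assumptions, not data from the text) Fourier-transforms a field amplitude decaying as e^(−t/2T) and verifies that the resulting spectrum drops to half its peak at a detuning of 1/(4πT), i.e. that the linewidth is Δν = (2πT)⁻¹.

```python
import math

T = 2.0      # mean lifetime of the upper level (arbitrary units; an assumption)
nu_r = 5.0   # resonance frequency, chosen much larger than the linewidth (assumption)

def spectrum(nu, t_max=200.0, n=40000):
    """|Fourier transform|^2 of the decaying field e^{-t/(2T)} e^{2*pi*i*nu_r*t}, t >= 0."""
    dt = t_max / n
    re = im = 0.0
    for j in range(n):
        t = (j + 0.5) * dt
        amp = math.exp(-t / (2 * T))          # field amplitude; intensity decays as e^{-t/T}
        phase = 2 * math.pi * (nu_r - nu) * t
        re += amp * math.cos(phase) * dt
        im += amp * math.sin(phase) * dt
    return re * re + im * im

peak = spectrum(nu_r)
half_width = 1 / (4 * math.pi * T)  # Lorentzian half-width appearing in Equation (6.5)
ratio = spectrum(nu_r + half_width) / peak
print(round(ratio, 2))  # -> 0.5: half the peak at a detuning of 1/(4*pi*T)
```

The half-intensity points thus lie at νr ± 1/(4πT), reproducing the full linewidth (2πT)⁻¹ anticipated in Section 6.2.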
The discussed wave function has the following form:

Ψ(t) = e^(−Γt/2) Ψ₂Φ_vacuum + √(1 − e^(−Γt)) Ψ₁Φ_photon(t)    (6.7)

(see Section 15.3), where Γ = 1/T is the decay rate of the upper level, and the wave function Φ_photon(t) describes, as before, a whole photon (the corresponding energy equals the total amount of the atomic excitation energy). It is this wave
function that undergoes a time variation (indicated by the argument t) beyond the “undisturbed” time evolution characteristic of free fields. As is to be expected, this variation is of importance
only for those times when the atom and the radiation field mutually interact (t ≲ Γ⁻¹). The properties of the photon described by the wave function Φ_photon(t) change during the emission process; the particular pulse differs, in agreement with the classical concept, from the one appearing at t ≫ Γ⁻¹ by not being completely "hatched" from the
atom (a part of the tail “sticks inside.”) Nevertheless, it contains the whole energy hν of a photon. Such a “crippled” photon can appear only when the emission process is disturbed from the outside.
Such a perturbation is the energy measurement on the atom. According to the laws of quantum mechanics, the so-called “collapse” of the wave function results; i.e. depending on the result of the
measurement, the wave function in Equation (6.7) reduces to one of the two terms in the sum (which then has to be normalized again). From a physical point of view, this means that, with certainty, no
photon is present when the atom was found in the excited state – this is simply a consequence of the validity of the energy conservation law for single processes; on the other hand, a “crippled”
photon was emitted when the atom was found in the lower level. This picture of a perturbed emission process was already proposed in Section 6.3 to explain the collision broadening of spectral lines.
There, we interpreted inelastic collisions as a mechanism for the energy measurement of the atom. At this point we encounter for the first time the important quantum mechanical feature that the
measurement on part of a system gives us detailed information about the state of another part of the system, under the condition that the whole system previously existed in an entangled quantum
mechanical state, as, for example, represented by the superposition in Equation (6.7). We will discuss this interesting quantum mechanical feature in more detail in Section 11.1. The most important
physical information about the time dependence of the radiation process is contained in the time dependence of the expansion coefficients in Equation (6.7), which obviously expresses the exponential
“decay law.” However, we have to be cautious with this formulation. According to the laws of quantum mechanics, we should say: when at time t a measurement is performed which “checks” whether the
atom is in the upper or the lower state, then we find it with the (relative) frequency exp{−Γt} in the upper state and with the frequency 1 − exp{−Γt} in the lower state. Such a measurement process can
be, as mentioned several times already, an inelastic collision converting the whole excitation energy into kinetic energy of the collision partners. From such an event we conclude that just before
the collision the atom still “possessed” its energy and was “found” by the measurement in the upper state. On the other hand, in order to be able to bring the result into correspondence with the
observed collision broadening of the spectral lines, we feel obliged to interpret the absence of an energy transfer in the inelastic collision as an indication that the atom is in the lower state.
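The "decay law" read off from the coefficients in Equation (6.7) can be illustrated with a toy Monte Carlo simulation. In the sketch below (the decay rate Γ, the check interval and the atom number are illustrative assumptions), each "check" of an atom reduces its state to either the upper or the lower level; the surviving excited fraction nevertheless follows the exponential law exp{−Γt}.

```python
import math
import random

random.seed(1)
gamma = 1.0      # decay rate Gamma (units with Gamma = 1; an assumption)
dt = 0.02        # interval between successive "checks" of each atom (assumption)
n_atoms = 20000

# Each check reduces the wave function: the atom is found still excited with
# probability exp(-gamma*dt); otherwise it has decayed for good.
p_step = math.exp(-gamma * dt)
excited = n_atoms
for _ in range(25):  # observe up to t = 25*dt = 0.5
    excited = sum(1 for _ in range(excited) if random.random() < p_step)

fraction = excited / n_atoms
# the measured relative frequency of "upper state" results matches exp(-gamma*t)
print(abs(fraction - math.exp(-gamma * 0.5)) < 0.02)
```

The point of the sketch is exactly the statement made above: repeated reductions every Δt reproduce the same overall exponential frequencies exp{−Γt} and 1 − exp{−Γt}.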
Modern laser technology allows the generation of very intense ultrashort laser pulses and thus presents a new opportunity to “find out” the actual level of the atom. In particular cases we can apply
a picosecond light pulse of such a frequency that the atom, when in the upper state, will be ionized. The (momentary) occupation of the lower state can be discovered by “pumping” the atom, using a
similar pulse, to a short-lived level, from where its spontaneous decay – revealed by detecting the emitted light quantum – indicates a positive result of the measurement. The role of spontaneous
emission can also be played by resonance fluorescence, as explained in Section 6.1. Knowing, in addition, that the atom can be found in only two well defined states, it is sufficient to check the
occupation of one of the levels. When the measurement fails to yield a signal, we conclude that the atom is in the other level. The response of a detector – under the condition that the detector can
be reached by the radiation from only one single atom – is a kind of “self-acting” measurement process: we cannot choose the instant of the measurement; we have to leave it to the atom to “decide”
when it gives away its energy hν to the detector. The important point is that we enable the atom to dispose of all its excitation energy in “one shot.” Let us emphasize at this point that, under
normal circumstances, the environment plays the role of a measurement apparatus. Each single absorption which is followed by the dissipation of the deposited energy represents a measurement process,
even when no “reading out” takes place. Consider an atom completely surrounded by a detector in the form of a spherical shell having 100% detection efficiency: we would conclude from the absence of a
response that the atom – at the earlier (retarded) time instant at which a wavefront would have been emitted to reach the detector surface at the observation time – is still in the upper level. This
measurement setup keeps the atom under constant surveillance. It has to report continuously – or, more precisely, every Δt seconds, where Δt is the time resolution of the detector – what the energy state of the atom is. Even though the formal description is, in this case, considerably different from that which is valid for the undisturbed radiation (leading to the Weisskopf–Wigner solution) – we have to perform a reduction of the wave function every Δt seconds – the result is again the same exponential decay. Of course, now we cannot make any statements about the emitted photon. It is
clear that, using the described measurement apparatus, its properties cannot be observed. Let us now interpret the wave function in Equation (6.7) according to the known rules of quantum mechanics.
Because we are dealing with a superposition state, we come to the following paradoxical statement: the atom is neither (for t ≲ Γ⁻¹) in the upper nor in the lower level, and neither is the photon
present nor not present. The two situations represented by the two terms in Equation (6.7) are simultaneously existing possibilities, but neither of them is “factual.” This “fuzziness” of the
description provides quantum mechanics with the opportunity to “evade problems” when the classical concept of reality leads to a dilemma, as discussed in the case of collision broadening of spectral
lines in Section 6.3. When the atom gives all its excitation energy away during a collision and nothing is left to the radiation
field, the atom need not “reel back” any field because, until that moment, no real emission took place. The latter emission happened only virtually – whatever that might mean! It is interesting that
the virtually present field gradually (as time elapses) becomes real. According to Equation (6.7), the system for t ≫ Γ⁻¹ goes over by itself, just by leaving it undisturbed, to the final state Ψ₁Φ_photon. We face one of the rare cases in which the Schrödinger equation describes an irreversible process without including a measurement process. (The photon leaves the atom forever.) The formal reason
for this behavior is that the atom and the radiation field form a system with infinitely many degrees of freedom (represented by the frequencies, propagation and polarization directions). The
situation is completely different when the radiating atom is placed into a resonator with dimensions of the order of the wavelength of the emitted radiation, as discussed in the example of microwave
emission in Section 6.2. In this case only a few resonator eigenmodes (in the ideal case only one) are available for the interaction with the atom, and the emission process does not have an
irreversible character because the emitted wave is reflected by the resonator walls, so it again reaches the atom, making possible a reabsorption of radiation; this can be followed by another
emission, and so on. Finally, we ask: what is the quantum mechanical picture of a realistic photon spontaneously emitted by an excited atom or molecule? We come to the following conclusion: the
photon is a spatially extended object, similar to a classical dipole wave, which is extended in each propagation direction over a distance of the order of c/Γ = cT. It is this spatial extension which
makes the known interference effects understandable. However, the situation must not be interpreted in such a way that the energy of the photon is distributed over the aforementioned spatial region
because a measurement (using a photodetector) always finds the whole energy at a particular point, i.e. in a localized form. The spatial extension of a photon is formally described by the mean intensity ⟨Ê⁽⁻⁾(r, t)Ê⁽⁺⁾(r, t)⟩, and the detector response probability is proportional to it. Let us repeat again: the photon appears as a localized particle – in the sense of Einstein's photon concept – only in the act of detection, and the naive idea that it is already in this localized state before detection is in open contradiction to experiment.

6.6 Quantum beats

Let us discuss a
peculiarity of spontaneous emission which enriches our picture of the photon. This peculiarity is observable when, in contrast to what has been assumed up to now, the “pump mechanism” excites
simultaneously two (or more) closely lying atomic levels. More precisely, the initial excitation state of the atom is given by a superposition of wave functions corresponding to the different energy
Fig. 6.5. Experimental evidence of quantum beats (n = number of registered photons; t = time passed since the excitation). After Alguard and Drake (1973).
levels. This superposition – a pure state – is related to all atoms present, and we speak of a coherent excitation of the ensemble of atoms. Experimentally such a specifically quantum mechanical
state is achieved either with the aid of the beam–foil technique described in Section 6.1 or by employing short but intense laser pulses. The simultaneous excitation of two different atomic energy
levels has the consequence that the two possible transitions into the same lower level take place simultaneously. This leads to the fact that the emitted photon – it is, as before, only a single one
– has imprinted on it the structure of the atomic excitation: its frequency spectrum consists of two separate lines of finite width. We have to revise our previous concept that the photon has only a
single frequency of a certain uncertainty. Obviously there are photons “oscillating” simultaneously with two – or even more – frequencies. Of experimental importance is the effect of this coherent
excitation of the atom on the “decay” of the initial atomic state. When, as described in Section 6.1, we measure the number of photons registered by a detector as a function of the time that has
elapsed since the moment of excitation, we find – in agreement with theory – strange quantum beats: a sinusoidal oscillation with a frequency given by the distance between the two simultaneously
excited levels (in units of h) is superimposed on the exponential decay. In this way, we have a practical method for measuring very small level splittings at our disposal. Fig. 6.5 shows as an
example a measurement result obtained using the beam–foil technique. We should mention that the described beat effect may be understood classically when we assume that two pulses of different
frequencies are emitted simultaneously. The intensity of the total radiation, as the result of interference, shows a modulation at the difference frequency (see Equation (3.14)), and this can be
detected using photodetectors. The essential point is that we always deal with a single photon consisting of two mutually interfering parts.
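This classical explanation of the beats can be made concrete in a few lines: summing two decaying elementary waves of slightly different frequency yields an intensity equal to the exponential decay modulated sinusoidally at the difference frequency. In the sketch below, the decay rate, the level splitting and the base frequency are arbitrary illustrative assumptions.

```python
import cmath
import math

gamma = 1.0     # common decay rate of the two transitions (an assumption)
delta_nu = 3.0  # level splitting divided by h, i.e. the beat frequency (assumption)
nu0 = 100.0     # lower of the two emitted frequencies (assumption)

def intensity(t):
    """|sum of the two mutually interfering parts of the photon|^2."""
    field = cmath.exp(-gamma * t / 2 + 2j * math.pi * nu0 * t) \
          + cmath.exp(-gamma * t / 2 + 2j * math.pi * (nu0 + delta_nu) * t)
    return abs(field) ** 2

def expected(t):
    # exponential decay modulated at the difference frequency delta_nu
    return 2 * math.exp(-gamma * t) * (1 + math.cos(2 * math.pi * delta_nu * t))

# the two expressions agree at all observation times
for t in (0.0, 0.1, 0.25, 0.5):
    assert abs(intensity(t) - expected(t)) < 1e-9
```

The modulation term 1 + cos(2πΔν·t) is exactly the sinusoidal oscillation superimposed on the exponential decay seen in Fig. 6.5.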
6.7 Parametric fluorescence
When discussing spontaneous emission we have, up until now, always considered the radiation of atoms or molecules. Recently, another, completely different, spontaneous
emission process was observed. It belongs to the field of non-linear optics, an area that has evolved only because of the development of powerful lasers, and is known as parametric fluorescence or
spontaneous parametric down-conversion. It takes place when an intense (monochromatic) wave passes through an appropriately chosen non-linear medium (a crystal). To explain this process, we require a
few additional facts. The non-linearity – or, more precisely, the non-linear dependence of the polarization of the medium on the electric field strength – is of such a form that it allows interaction
between three waves with different frequencies; it is this that makes the following process possible. (We will use classical electrodynamics extended by phenomenologically introduced non-linear
polarization terms.) By appropriate adjustment of the phases of the three (monochromatic and approximately plane) waves, we can ensure that the wave with the largest frequency, the so-called pump
wave, is attenuated during its passage through the crystal, while the other two, usually called the signal wave and the idler wave, are amplified. It is important that the process satisfies the
following two conditions (see, for example, Paul (1973a)):

νp = νs + νi,    (6.8)

kp = ks + ki.    (6.9)
The subscripts p, s and i refer to the pump, signal and idler waves, respectively. Equations (6.8) and (6.9) will become physically understandable when we take a closer look at the physical mechanism
responsible for the interaction between the three waves. This is characterized by the property that two waves always induce, on the atoms of the crystal, electric dipole moments oscillating with the
sum or the difference frequency of the two waves. In this way a macroscopic polarization is formed in the medium (defined as the sum of dipole moments distributed over a unit volume). In particular,
the cooperation between the signal wave and the idler wave gives rise to a polarization oscillating with the sum frequency νs + νi , and the pump wave can do work A on it provided its frequency
equals that of the polarization, which is, in fact, guaranteed by Equation (6.8). The sign of A depends on the phase of the pump wave with respect to the polarization. Let us assume in the following
that conditions are set such that A is positive and, in addition, takes its maximal value. Equation (6.9) can be explained in such a way that the abovementioned polarization has the character of a
plane wave with the wave vector ks + ki . It implies that the pump wave and the polarization wave propagate with the same phase velocity
and in the same direction, so that the phase relation between the two waves controlling the energy conversion is preserved along the whole length of the crystal. Hence, Equation (6.9) is called the
phase matching condition. When it is not fulfilled, an area in which the pump wave is depleted is followed by an area with the process running backwards; i.e. the pump wave is amplified again, then
another area follows in which it is again depleted, etc. and the overall result is that the interaction between the waves is very small. Equation (6.9) is, however, too strict. In fact, it is
sufficient to require that the relative phase between the pump wave and the polarization wave is approximately constant within the crystal volume. Assuming that the pump wave propagates along the z
axis and that the length of the crystal is L, we can write the weakened condition for the z components of the wave number vectors in the form

|kp^(z) − ks^(z) − ki^(z)| ≤ π/(2L).    (6.10)
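How the phase matching tolerance works out in practice can be sketched numerically. In the example below, the three frequencies and the refractive indices are invented for illustration (they do not describe any real crystal); the test simply compares the collinear wave-number mismatch against π/(2L).

```python
import math

c = 3.0e8                                   # speed of light (m/s)
nu_p, nu_s, nu_i = 8.0e14, 5.0e14, 3.0e14   # pump, signal, idler; nu_p = nu_s + nu_i
n_s, n_i = 1.655, 1.662                     # hypothetical refractive indices (assumptions)

def k(nu, n):
    return 2 * math.pi * n * nu / c         # collinear wave number inside the crystal

def phase_matched(n_p, L):
    """Weakened collinear phase matching condition: |k_p - k_s - k_i| <= pi/(2L)."""
    mismatch = abs(k(nu_p, n_p) - k(nu_s, n_s) - k(nu_i, n_i))
    return mismatch <= math.pi / (2 * L)

# pump index at which pump and polarization wave stay exactly in phase
n_exact = (n_s * nu_s + n_i * nu_i) / nu_p
print(phase_matched(n_exact, L=0.01))           # True for a 10 mm crystal
print(phase_matched(n_exact + 1e-3, L=0.01))    # False: mismatch exceeds pi/(2L)
```

The comparison shows how stringent the condition is: already an index change of 10⁻³ at optical frequencies pushes the mismatch far beyond the tolerance π/(2L) of a centimeter-long crystal.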
There are basically two different possibilities to satisfy the phase matching condition, namely exploiting parametric interaction of type I or type II. Type I refers to the situation when the signal
wave and the idler wave are both the ordinary (or the extraordinary) wave in the crystal. Interaction of type II is present when the signal wave is the ordinary wave and the idler wave is the
extraordinary wave, or vice versa. The two waves are then mutually orthogonally linearly polarized. The process considered up to now – the parametric interaction between pump, signal and idler waves
– obviously does not have too much to do with photons. However, the situation changes when the experiment is performed in such a way that we illuminate the crystal with the pump wave but not with the
signal wave and the idler wave. According to classical theory, nothing should happen: because two waves are always needed to induce non-linear polarization, at least one additional wave must be
present (for example the signal wave, even though its intensity can be arbitrarily small). Such a “nucleus” would then be amplified extremely quickly; “driven” by the polarization generated by the
cooperation between the pump wave and the signal wave, an idler wave would build up simultaneously. The process is, in fact, a spontaneous one; i.e. only the presence of the pump wave is required.
During the process, all the pairs of signal and idler waves satisfying Equations (6.8) and (6.9) are excited (for details see, for example, Paul (1973b)). This phenomenon, called parametric
fluorescence, is hence of quantum mechanical nature. We can view the corresponding elementary process as the “decay” of a pump photon into both a signal photon and an idler photon. Equation (6.8)
multiplied by h then expresses simply the energy conservation. From the point of view of the photon picture, it is important that Equation (6.9) or Equation (6.10) holds for individual spontaneous
processes. This
6.7 Parametric fluorescence
is, however, understandable only in the wave picture – in particular, the photon takes notice of the crystal dimensions! – and we cannot help also relating the photon to a wave process in this case.
Because the quantity ħk can, on the other hand, be interpreted as the momentum of the photon (see Section 6.9), we can view Equation (6.9) (after multiplication by ħ) as the momentum conservation
law. Of particular physical interest is the fact that during each elementary process two photons, in general with different frequencies and propagation directions, are simultaneously generated.
Hence, when the signal wave and the corresponding idler wave are incident on separate detectors and photons are registered, we find pronounced correlations between the obtained measurement sequences.
Using ideal detectors we would, in fact, always find coincidences. The temporal extension of the photons (observed at a fixed point) is – as always in optics – determined by the available linewidth
(see Equation (3.22)), which is usually rather large for parametric fluorescence. An experiment which is able to determine the length of the signal or idler photons, respectively, will be described
in Section 7.6. Apart from the aforementioned space-time correlations, there are also correlations in the frequency domain. The energy conservation relation, Equation (6.8), holds, and, when we
assume the pump frequency to be very precise (which is indeed the case for a laser), it implies that the measured values of νs and νi (which fluctuate strongly) are uniquely determined in their sum,
i.e. strongly correlated. Finally, using the parametric interaction of type II, we can generate photon pairs with polarization states correlated in a very specific quantum mechanical way. These pairs
are well suited for the realization of Einstein–Podolsky–Rosen experiments (Chapter 11). The photon pairs generated by parametric fluorescence are an instructive example of a system prepared in an
“entangled” quantum mechanical state. By performing measurements on one part of the system, we can, it seems, manipulate the other part of the system, a fact that is the essence of the famous paradox
of Einstein, Podolsky and Rosen. (For details, see Section 11.2.) In fact, measuring, with the help of a photodetector, the position of the idler photon, for instance, we can predict with certainty
the position of the signal photon; however, by measuring the idler frequency with a spectrometer we know also that the signal frequency has a precise value (known when the pump frequency is known).
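The frequency correlation described here is easy to illustrate: the individually measured values of νs and νi fluctuate strongly, yet their sum is pinned to the sharp pump frequency. In the sketch below, the pump frequency and the spread of the idler frequency are illustrative assumptions.

```python
import math
import random

random.seed(7)
nu_p = 8.0e14   # sharp pump frequency from a laser (an assumption)

# individual down-conversion events: the idler frequency fluctuates strongly,
# but energy conservation fixes the sum nu_s + nu_i = nu_p for each pair
nu_i = [3.0e14 + random.gauss(0.0, 5.0e12) for _ in range(10000)]
nu_s = [nu_p - x for x in nu_i]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy)

r = pearson(nu_s, nu_i)
print(r < -0.999)  # the measured frequencies are (anti)correlated essentially perfectly
```

Measuring the idler frequency thus determines the signal frequency, exactly as stated above, even though neither frequency is sharp on its own.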
When we impose a limitation on the bandwidth of, say, the idler wave – a frequency filter can be placed in front of the detector for this purpose – then this limitation is also transferred to the
signal wave. What has been said above might create the impression that the properties of the signal photon can be changed at will. Because the underlying quantum mechanical “mechanism” is the well
known “collapse” of the wave function, which acts “instantaneously”, the described effect on the signal photon would take place with
superluminal velocity and hence contradict strongly the principle of causality. (We should keep in mind that the signal and the idler photons can, at the moment of measurement, be arbitrarily far
apart!) The discussed “action” is hence impossible. What actually occurs in the experiment is something else: we select, through the measurement, from the original ensemble of individual systems
prepared in the same way, a sub-ensemble which is physically different from the original ensemble. For example, we measure the idler frequency and only perform experiments with those signal photons
for which the measurement on the idler photons has yielded a prescribed value. It seems interesting to us to point out that the appearance of parametric fluorescence might be viewed as an indication
of the existence of vacuum fluctuations of the electric field strength (mentioned in Section 4.3), and, from our point of view, this is a more convincing argument than the often made reference to
spontaneous emission of excited atoms or molecules. The fluctuations are well suited to supply the “nuclei” which are necessary in the classical description to initiate the process of parametric
interaction. 6.8 Photons in “pure culture” A light beam, emitted, for example from a conventional light source, can be viewed as a stream of photons. On the one hand, we know that the radiation
originates from individual, mutually independent, elementary processes, during each of which a single photon, an energy packet of magnitude hν, is emitted. On the other hand, the response of the
detector tells us that an energy quantum hν was removed from the radiation field. Within a naive photon picture, we would assume that the registered photon is identical to that previously emitted by
an atom. However, such a picture is incorrect. As will be discussed in detail in Section 8.2, the events indicated by two detectors (for thermal light) are correlated in space and time, which can be
understood only if the elementary waves emitted by individual atoms superpose with statistical phases, and photons are found preferentially at the maxima of the intensity of the total field. Hence,
the photons do not have an individuality which would – not even in principle – allow us to track their individual “courses of life;” rather, they become completely indistinguishable in the
collective. We arrive at a similar conclusion when we analyze the frequency spectrum of radiation. Usually it is very broad when compared with the natural linewidth characteristic of the elementary
process of emission. The effect is called inhomogeneous atomic line broadening. It results from the fact that the individual atoms emit at different center frequencies caused, for example, by the
Doppler effect, according to which the motion of the emitter induces a frequency shift. However, it is
fundamentally impossible, from the measurement of the total radiation, to gather any information about the spectral properties of the emitted individual photons, in particular the natural linewidth.
(This is already so in the classical description: the length of the elementary wave trains constituting the radiation field does not appear in the description of the total radiation in the case of an
inhomogeneously broadened atomic line.) It is possible to speak of an individual photon only when a single atom is its “generator.” This means that in experiments we have to take care that the
observation volume is filled with just one excited atom. When, after a certain time, a photon is observed, it must have come from the atom – we are seeing, so to speak, a photon in “pure culture.” As
already discussed in Section 6.1, such a situation is almost ideally achievable with current trap techniques. We can also come close to the desired ideal case when we work with atomic beams, but we
have to use a lens to collect radiation from only a small area of the beam and then send it to the detector. The feasibility of this procedure was demonstrated by American scientists in an experiment
in which they observed “photon antibunching” (Kimble, Dagenais and Mandel, 1977; Dagenais and Mandel, 1978; see Section 8.4). By choosing the density of atoms and the beam velocity appropriately, the
time averaged number of atoms in the observation volume dropped below one. However, the method does not exclude that at certain times the observed volume is occupied by two or even more atoms.
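The residual multi-atom occupation mentioned here can be quantified if we assume, for illustration, that the number of atoms in the observation volume obeys Poissonian statistics (an assumption; the mean occupation value below is likewise invented).

```python
import math

mean_atoms = 0.3  # time-averaged number of atoms in the observation volume (assumption)

def poisson(n, mu):
    """Probability of finding exactly n atoms, assuming Poissonian occupation."""
    return math.exp(-mu) * mu ** n / math.factorial(n)

p1 = poisson(1, mean_atoms)
p_multi = 1.0 - poisson(0, mean_atoms) - p1   # probability of two or more atoms
print(round(p1, 3), round(p_multi, 3))        # -> 0.222 0.037
```

Even with a mean occupation well below one, the volume holds two or more atoms in a non-negligible fraction of the single-atom events, which is precisely the limitation of the atomic-beam method.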
Individual photons can also be prepared (approximately) by sufficiently attenuating light from a conventional source or laser radiation (using an absorber or a weakly reflecting or weakly
transmitting mirror). In such a situation the probability of detecting a photon within a given (short) time interval is small, but the probability of finding two photons is, in comparison, negligibly
small, though not zero. The experimental situation can be further improved by using generating processes in which two photons are emitted more or less simultaneously. The best candidate for such a
process is parametric fluorescence because – in contrast to the successive emission of two photons in an atomic cascade process (see Section 11.1) – the propagation directions of the two photons are
strongly correlated, as explained in Section 6.7. When we detect one of the photons in a certain direction, we can be sure that its “partner” set off at the same time and along a known direction. The
result is that the motion of the other photon is exactly predictable, as if it were a classical particle. In this way we “generate” a localized photon which is available for further experiments. The
information we have about the second photon can be used, for example, to increase considerably the detection sensitivity of an absorption measurement: from the fact that a photon which is expected to
arrive at a detector positioned
Spontaneous emission
behind an absorber at a certain time does, in fact, not arrive, we must conclude that it was absorbed. An electronic gate is usually used in experiments (Hong and Mandel, 1986). The detector signal
indicating the registration of the first photon is used to gate the second detector, which is designed to observe the other photon. As mentioned in Section 6.7, parametric fluorescence can also be
used for the “generation” of single photons with a prescribed frequency.

6.9 Properties of photons

We explained in Section 6.5 that spontaneously emitted radiation propagates in all directions (in
the form of a dipole wave). In real experiments we usually work with light beams with a more or less well defined propagation direction, which can be prepared (unless a laser is used) with the help
of diaphragms, i.e. non-transparent screens with a small hole. This direction selection means classically that a piece of the dipole wave is cut out. At very small intensities the particle nature of
the photon again comes into play: an incident photon is either completely absorbed by the screen or passes completely through the hole. The outgoing photon is then characterized, not only by the
frequency spectrum (determined mainly by the lifetime of the atomic levels), but also by the propagation direction. A momentum is also related to these two quantities. Formally, the value of the
momentum is found by using Einstein’s famous equivalence relation between energy E and mass m: E = mc² (where c is the velocity of light). Thus, we calculate the photon mass to be hνc⁻² and multiply it by the propagation velocity c. The absolute value of the photon momentum is then hν/c, and, because it points in the propagation direction, we have finally

p = ℏk, (6.11)

where k is the wave vector. In fact, it is known from classical electrodynamics that the electromagnetic field is characterized not only by an energy but also by a momentum density. Equation (6.11)
can be derived from the momentum density by considering, according to the photon concept, a plane wave as being formed from energy packets of magnitude hν all propagating in one direction. The
momentum of the photon is experienced by the medium when it is absorbed or reflected by the medium (in the reflection case, twice the momentum is transferred), and this gives rise to the light
pressure. The existence of light pressure had previously been anticipated by Kepler in 1617 based on an ingenious interpretation of the observation that comet tails are always directed away from the
sun and so it looks as if the particles forming the tail were repelled by the radiation emitted from the sun. Measurements of light pressure on objects on Earth were
successfully performed at the beginning of the twentieth century (see, for example, Lebedew (1910)). More recently, light pressure was used to considerably slow down atomic beams (see, for example,
Prodan, Phillips and Metcalf (1982)). The photon momentum is noticeable not only in absorption but also in the process of spontaneous emission. Owing to the validity of the law of momentum
conservation, the atom suffers a measurable recoil when it emits a photon, as was predicted by Einstein (1917). Here, it is assumed that the photon is emitted in a well defined, though random,
direction (Einstein speaks of “needle radiation”). However, how can this result be brought into accord with the results of Section 6.5, namely that the atom radiates like a Hertzian dipole in all
directions, so making possible the interference between partial waves emitted (by the same atom) along different directions? As will be explained in detail in Section 7.5, quantum theory provides the
following answer: it is impossible to observe in one experiment both an atomic recoil (indicating the particle nature of light) and interference (which is understood to be a consequence of the wave
nature of light). Besides momentum, the photon also possesses angular momentum – known as spin. This is closely related to the polarization properties of light. It is well known that a light beam can
be linearly or circularly (or, more generally, elliptically) polarized. Basically there are two independent polarization states which are conveniently chosen to be either linearly polarized in two
mutually orthogonal directions or left and right handed circularly polarized, respectively. These states define an (orthogonal) basis, and any polarization state can be expanded with respect to it.
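These basis relations are easy to check numerically with Jones vectors, which represent a polarization state by a pair of complex amplitudes. A minimal sketch (note that sign and handedness conventions vary between authors):

```python
from math import sqrt

# Jones vectors as pairs of complex amplitudes (a sketch; sign conventions vary)
x = (1 + 0j, 0 + 0j)  # linear polarization along x
y = (0 + 0j, 1 + 0j)  # linear polarization along y

def superpose(a, u, b, v):
    """Complex superposition a*u + b*v of two Jones vectors."""
    return (a * u[0] + b * v[0], a * u[1] + b * v[1])

def inner(u, v):
    """Hermitian inner product <u|v>."""
    return u[0].conjugate() * v[0] + u[1].conjugate() * v[1]

s = 1 / sqrt(2)
cp_plus = superpose(s, x, 1j * s, y)    # one circular polarization state
cp_minus = superpose(s, x, -1j * s, y)  # the orthogonal circular state

# The two circular states form an orthonormal basis ...
assert abs(inner(cp_plus, cp_minus)) < 1e-12
assert abs(inner(cp_plus, cp_plus) - 1) < 1e-12

# ... and a linearly polarized state is a superposition of the two circular states:
x_again = superpose(s, cp_plus, s, cp_minus)
assert all(abs(a - b) < 1e-12 for a, b in zip(x_again, x))
```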
In particular, a circularly polarized wave can be expressed as a superposition of two linearly polarized waves and, vice versa, a linearly polarized wave can be expressed as a superposition of a left
handed and a right handed circularly polarized wave. It is well known that, for a circularly polarized wave, the oscillation direction of the electric field strength, observed at a fixed point,
rotates around the propagation direction. Hence we expect intuitively that it has a non-zero angular momentum. A more accurate analysis leads to the statement that the spin (in the propagation direction) of the photons, corresponding to these (classical) waves, takes the possible values

s = ±ℏ.
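Since each photon of a circularly polarized beam carries spin ±ℏ along the propagation direction, a beam of power P delivers angular momentum at the rate (P/hν)·ℏ = P/(2πν). A numerical sketch, where the power and wavelength are assumed illustration values:

```python
from math import pi

H = 6.62607015e-34   # Planck constant, J s
HBAR = H / (2 * pi)  # reduced Planck constant
C = 2.99792458e8     # speed of light, m/s

power = 1.0          # beam power in W (assumed illustration value)
wavelength = 633e-9  # wavelength in m (assumed illustration value)

nu = C / wavelength             # optical frequency
photon_rate = power / (H * nu)  # photons arriving per second
torque = photon_rate * HBAR     # spin angular momentum delivered per second

print(f"photon rate: {photon_rate:.3e} per second")
print(f"angular momentum flux (torque on a perfect absorber): {torque:.3e} N m")
```

The tiny result, of order 10⁻¹⁶ N m for a one-watt beam, makes plain why a sensitive torsion meter was needed to detect the transferred angular momentum.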
A quantum theorist calls the positive case right handed, the negative case left handed circular polarization. (We emphasize that the classical convention is opposite to this because the sense of
rotation is judged by an observer who sees the wave propagating towards himself.) Similar to the transfer of momentum, the spin of a photon is also transferred to the absorbing medium. The effect is
made stronger when circularly polarized light is sent through a λ/2 plate (a disk consisting of anisotropic, transparent material
whose length is chosen in such a way that the path difference between the ordinary and the extraordinary beam equals half a wavelength). It turns left handed circularly polarized light into right
handed circularly polarized light, and vice versa, and so each photon transfers an angular momentum of 2ℏ. The total angular momentum transferred to the plate by many photons could, in fact, be
successfully measured using a torsion meter (Beth, 1936). Let us point out that, within the quantum mechanical description, the circularly polarized states are eigenstates of the photon spin operator
(more precisely, of the spin component in the propagation direction). The spin values ±ℏ are the corresponding eigenvalues (and hence are sharp). This does not, however, apply to linear polarization. In this case the spin (in the propagation direction) is zero on average. Because – in analogy to the decomposition of a linearly polarized wave into a right handed and a left handed circularly polarized component – the quantum mechanical state vector of a linearly polarized photon can be written as the sum of two state vectors representing a left handed and a right handed circularly polarized photon, the measurement of the spin component yields the result +ℏ or −ℏ, but never zero. Finally, let us mention that the angular momentum conservation law plays an important role in spontaneous emission. Because the atomic levels have defined angular momenta, the conservation law implies certain selection rules for the total angular momentum of the emitted photon (this applies to the absolute value of the angular momentum as well as to the component in the “quantization direction”). However, we have to keep in mind that the total angular momentum is composed of the orbital angular momentum and the spin. The total angular momentum can take values larger than unity (in units of ℏ); in this case, we deal with electric or magnetic multipole fields which are
observable in the emission of γ quanta by atomic nuclei, for example. However, in optics we deal only with (electric) dipole radiation. In this case the total angular momentum equals unity.
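To close the section, the orders of magnitude of the mechanical photon attributes discussed above are easily estimated. The wavelength and intensity below are assumed illustration values, not data from the cited experiments:

```python
H = 6.62607015e-34  # Planck constant, J s
C = 2.99792458e8    # speed of light, m/s

wavelength = 500e-9                  # visible light (assumed illustration value)
nu = C / wavelength
p_photon = H * nu / C                # photon momentum hν/c, cf. p = ℏk

intensity = 1.36e3                   # W/m², roughly the solar constant (assumed)
pressure_absorber = intensity / C    # momentum flux onto a perfect absorber
pressure_mirror = 2 * intensity / C  # reflection transfers twice the momentum

print(f"momentum of one 500 nm photon: {p_photon:.3e} kg m/s")
print(f"radiation pressure on an absorber: {pressure_absorber:.3e} Pa")
print(f"radiation pressure on a mirror:    {pressure_mirror:.3e} Pa")
```

Pressures of a few microPascal explain why terrestrial measurements of light pressure succeeded only at the beginning of the twentieth century.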
7 Interference
7.1 Beamsplitting

Interference phenomena are certainly among the most exciting phenomena in the whole of physics. In the following we will concentrate mainly on interference of weak fields; i.e. the
beams contain, on average, only a few photons. The principle of classical interference is as follows: a light beam is split by an optical element, for example by a semitransparent mirror or a screen
with several very small apertures, into two or more partial beams. These beams will take different paths and are then reunited and form interference patterns. The first step, the splitting of the
beam into partial beams, plays a decisive role; light beams coming from different sources (or from different spatial areas of the same source) do not interfere with each other! We start our
discussion of interference with an analysis of the action of a beamsplitter. To form a realistic idea of this device, let us imagine a semitransparent mirror. (Our considerations apply equally well
to a screen with two apertures. We could also generalize to cases of unbalanced mirrors, with reflectivity different from 1/2, or screens with apertures of different size.) The classical wave picture
can describe interference phenomena without any great effort: the incoming beam is split into the reflected and the transmitted partial wave, and each of these waves contains half of the energy. The
process of splitting becomes conceptually difficult only when we think of the beam as consisting of spatially localized energy packets, or photons. Then, fundamental questions arise. What happens to
the individual photon when it hits the mirror? Does it split or does it remain as a “whole?” There is almost no doubt that photons are – in the sense of energy quanta – indivisible. To understand
this we actually need not perform any experiments, it suffices to review all the consequences a divisibility of the photon would imply. Assuming that there is no light frequency change in reflection
or refraction,
“half photons” (energy packets with an energy content of ½hν) would exist in nature. Such objects – when we do not challenge our insight into microscopic processes summarized in Bohr’s second
postulate, Equation (5.1) – cannot be in any way absorbed again because the energy of a “half photon” is not enough for an atom to make a transition; they would – taking their energy along with them
– disappear from the observable world. With the world facing an energy crisis, this is not a very pleasant idea! Even worse, one of the supporting columns of physics, thermodynamics, would be
shattered. Let us look at cavity radiation. The insertion of the smallest mirror into the cavity would prevent the formation of thermodynamic equilibrium because the half photons would penetrate the
cavity walls and would remain unaffected. The system would therefore lose energy continuously and irreversibly. We should point out at this point that this thermodynamic “catastrophe” would occur
already when a fraction (in principle arbitrarily small but finite) of the photons incident on the mirror were split. The thermodynamic argument can be further refined. Another principle valid in
thermodynamics is the principle of detailed balance, which, in the case of cavity radiation, means the following. In thermodynamic equilibrium, the atoms of the walls at each frequency absorb exactly
the same amount of energy as they emit (spontaneously and by stimulation). This principle would be violated, however, when the photons split because the half photons no longer contribute to the
absorption rate. Even the daring hypothesis that the atom, when not able to absorb half a photon, will have to “swallow” two of them at once, does not improve the situation. Such an absorption
process could only take place when at least two half quanta arrive more or less simultaneously at the same position, which is extremely improbable at low intensities. The corresponding transition
probability would be proportional to the light intensity squared. The acts of absorption would be too rare to be able to compensate for the decrease (due to the splitting of photons and the
associated loss of photons) of the normal (one-photon) absorption rate, with its probability proportional to the intensity. Finally, let us mention that the splitting, were it real, would not be
confined to cutting a photon “in half.” First, there exist reflecting surfaces with quite different reflection properties which would cause splits into all possible ratios, and second, a reflection
(or transmission) is often followed by a second, third, etc., so that the electromagnetic energy would become increasingly “fuzzy”. Photon splitting seems to us to be sufficiently disproved by
consideration of these arguments. Ideal experimental conditions for a direct check of the indivisibility of single photons would provide single photons (sent one after the other) incident on a
semitransparent mirror with the reflected partial beam and the transmitted partial beam monitored by detectors. The response of only one of the detectors would, without any doubt, indicate that the
registered photon took one route as an energetic
whole. Such an experiment seems to require quite an amount of experimental effort. In fact, measurements with feeble light from conventional sources allow us to make a decision in favor of or against
the classical concept of photon splitting via reflection on a semitransparent mirror. In such a situation we have also to expect rare cases when two (or in even rarer cases three or more, etc.)
photons arrive more or less simultaneously at the mirror. This corresponds classically to a wave packet with energy 2hν, which is split into equal parts by the mirror. Each part can (with a
probability determined by the detection sensitivity) cause a photodetector to respond. However, in most cases, the mirror is hit by at most one photon within a time interval of the order of the
response time of the detector. Such single photons would be – if split by the mirror – missed by the detectors, which are able to handle only whole quanta. On the other hand, a measurement on the
incident beam would detect them (according to the detection sensitivity). This would imply a drastic violation of the energy balance

incident intensity = reflected intensity + transmitted intensity (7.1)

as long as photodetectors are used for the measurement. Equation (7.1), which can be understood only in such a way that the photons as a whole are either reflected or transmitted, is fully verified by experiment (Jánossy, 1973). On analyzing all the experiments performed to check the indivisibility of the photon, we should keep in mind that, due to the finite measurement precision, we cannot
completely exclude a possible splitting of the photon, though this would have a very small probability. Indeed, an experiment can always reveal only a finite upper bound but never an exact zero
value. Hence, we prefer the thermodynamic argument, which is free of the experimental insufficiencies; we see it at least as a not unimportant supplement to the experimental facts. We now take the
problem of beamsplitting a few steps further. For large intensities of the incident beam, the following question arises. How are n (> 1) incident photons packed in a short pulse¹ divided between the
two partial beams? A naive way of treating the problem of the interaction between light and a beamsplitter would be to resolve (at least theoretically) the process into a sequence of independent
individual processes which always involve just a single photon. Assuming, in addition, the photons to be classically distinguishable particles, we can apply classical probability theory and present
the following argument.

¹ In principle, it is possible to prepare such a light “flash” of a sharp photon number by bringing a certain number of excited atoms into a small volume and taking care, with the aid of concave mirrors, that the outgoing radiation is directional (Mandel, 1976).

A mirror with reflectivity r (the mirror is ideal – no losses whatsoever) and transmittivity
t = 1 − r, reflects a photon with probability r and transmits a photon with probability t. The probability of k (≤ n) photons being transmitted and n − k photons being reflected, out of a total of n incident photons, is given by

w_k^{(n)} = \binom{n}{k} t^k r^{n−k}   (k = 0, 1, 2, 3, . . . , n). (7.2)

The first factor, the binomial coefficient, accounts for the fact that the event – due to the possibility of distinguishing between the photons – can be realized in different ways. (While an interchange among reflected or transmitted photons does not lead to a new case, the interchange between a transmitted and a reflected photon does.) Equation (7.2) is a binomial distribution for the photons. Since it is very difficult to prepare n-photon states with n > 1, an experimental verification is presently limited to the case of
photon pairs (Brendel et al., 1988), which can be quite easily produced using parametric fluorescence. However, the assumptions used for deriving Equation (7.2) are wrong: since the photons arrive at
the mirror more or less simultaneously, the mirror “feels” (due to the superposition principle of classical electrodynamics) their resulting electric field, and so they act together. In addition,
because they can be viewed as particles of integer spin, they follow Bose statistics, and are therefore indistinguishable. Surprisingly, the quantum mechanical calculation also leads to Equation
(7.2) (see Section 15.4), which therefore can be assumed to be correct. The above derivation allows us to say that the photons, in the process of beamsplitting, behave as if they were distinguishable
and as if each of them interacts independently with the mirror. However, it would be erroneous to try to objectify these properties. When the number of photons is not sharp but follows a probability
distribution p_n, the number of transmitted photons will be given (in accordance with Equation (7.2)) by the distribution

p_k = \sum_n \binom{n}{k} t^k (1 − t)^{n−k} p_n. (7.3)

(An analogous relation holds for the reflected photons.) Equation (7.3) is known as the Bernoulli transformation. It has the important property that the factorial moments are simply multiplied by the respective power of t; i.e. the following relation holds:

\overline{k(k − 1) · · · (k − l)} = t^{l+1} \overline{n(n − 1) · · · (n − l)}   (l = 0, 1, 2, . . .). (7.4)
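Equations (7.3) and (7.4) lend themselves to a direct numerical check. The sketch below applies the Bernoulli transformation to a Poisson distribution and confirms that the transmitted photons remain Poissonian with the mean reduced by the factor t (the values of t and the mean are assumed for illustration):

```python
from math import comb, exp, factorial

def poisson(n, mean):
    """Poisson distribution of the photon number."""
    return mean**n * exp(-mean) / factorial(n)

def transmitted(p_incident, t, k, n_max=80):
    """Equation (7.3): probability of k transmitted photons for transmittivity t."""
    return sum(comb(n, k) * t**k * (1 - t)**(n - k) * p_incident(n)
               for n in range(k, n_max))

t = 0.3     # transmittivity (assumed value)
nbar = 4.0  # mean of the incident Poisson distribution (assumed value)

# A Poisson distribution stays Poissonian, with the mean reduced to t * nbar:
for k in range(12):
    assert abs(transmitted(lambda n: poisson(n, nbar), t, k)
               - poisson(k, t * nbar)) < 1e-12

# Lowest-order case (l = 0) of the factorial-moment relation (7.4): mean of k = t * mean of n
k_mean = sum(k * transmitted(lambda n: poisson(n, nbar), t, k) for k in range(60))
assert abs(k_mean - t * nbar) < 1e-9
```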
Applying Equation (7.3) to a Poisson distribution of the incident photons over the mode volume (as explained in Section 4.2, we identify it with the pulse or coherence volume, respectively), we find
that the reflected and the transmitted photons follow a Poisson distribution. The result is of particular physical interest because
the Poisson distribution, as described in Section 4.4, is characteristic of (quantum mechanical) coherent states of the electromagnetic field. The result indicates that beamsplitting transforms a
coherent state back into a coherent state,² and this is what we expect from the correspondence between the classical and the quantum mechanical description. Coherent states correspond to classical
waves of sharp amplitude and phase, and, according to classical optics, the transmitted as well as the reflected wave have sharp values of amplitude and phase when the same applies to the incident
wave. Let us mention that the splitting of a polarized light beam into two beams with different polarizations (for example a linearly polarized light wave, with the help of a birefringent crystal, is
split into two waves, mutually orthogonally polarized in different directions) is of the same kind as beamsplitting with a partially transmitting mirror. Equations (7.2) and (7.3) apply also in this
case. Another consequence of Equation (7.3) is that thermal light remains thermal after splitting. This corresponds to our expectations. Finally, let us point out that the partially transmitting
mirror can serve as a model of a (one-photon) absorber; in fact, the quantum mechanical description is, in both cases, formally identical. We will use this fact in Chapters 8 and 10.

7.2 Self-interference of photons

All known interference phenomena can be naturally explained by representing light as waves (see Section 3.2). Let us demonstrate the interference effect using the example
of a Michelson interferometer (see Fig. 7.1). An incident light beam is split by a semitransparent mirror into two partial beams. The beams propagate along the interferometer arms, are reversed by
mirrors and are finally reunited by the semitransparent mirror, whereupon they enter the observation telescope. Because the mirrors at the ends of the arms are usually not exactly orthogonal to the
light beams, we observe – similar to the case of a wedge – interference fringes of equal thickness. How do we understand the formation of an interference pattern in the photon picture? As discussed
in detail in Section 7.1, we have to accept that the photons (in the sense of energy packets) are not split by the semitransparent mirror – in contrast to wave packets. It seems that the photons can
take only one of the two paths and hence can “know” the length of only one of the interferometer arms. On the other hand, the position of the interference fringes is determined by the length difference between the two arms.

² Coherent states are represented by a wave function (see Equation (4.5)), and it is not sufficient, as has been done up to now, to characterize them only by the squared absolute values of the expansion coefficients (in the photon number basis) determining the photon distribution. Only with the help of the transformation, describing beamsplitting, of the expansion coefficients can the above conclusion be rigorously drawn (see Section 15.4).

Fig. 7.1. Beam paths in a Michelson interferometer. S = semitransparent mirror; M = totally reflecting mirror.

We might conjecture that several photons must necessarily cooperate to produce interference. This would however imply a drastic intensity dependence of the
interference effect: with decreasing intensity, the interference pattern should become more and more blurred because a photon taking one of the interferometer arms would rarely find a second photon
taking the other path. Finally, in the ideal case of single photons incident one after the other, there should be no interference, which means that the black spots on the photographic plate should be
distributed completely at random. In contrast, in the classical picture, wave splitting into partial waves is completely independent of the intensity and always takes place in the same way; i.e.
according to classical optics, the visibility of the interference pattern

s = (I_max − I_min) / (I_max + I_min),

where I_max is the maximum value and I_min is the minimum value of the intensity distribution in the interference pattern, is the same for all intensities. What do we learn from the experiment?
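The statistical character of the pattern is easy to reproduce in a Monte Carlo sketch: single-photon detection events are drawn independently from the classical fringe intensity, and the visibility s is then estimated from the accumulated histogram (all numerical choices below are assumptions for illustration):

```python
import random
from math import cos, pi

random.seed(1)

def sample_positions(n, s=1.0):
    """Draw n single-photon detection points from a fringe pattern I(x) ∝ 1 + s cos x
    by rejection sampling; each photon is registered independently of all others."""
    out = []
    while len(out) < n:
        x = random.uniform(0.0, 2.0 * pi)
        if random.uniform(0.0, 1.0 + s) < 1.0 + s * cos(x):
            out.append(x)
    return out

def visibility(imax, imin):
    """s = (Imax − Imin) / (Imax + Imin)."""
    return (imax - imin) / (imax + imin)

# Histogram many independent single-photon events: the fringe pattern emerges
# only from the ensemble, never from an individual detection.
nbins = 16
counts = [0] * nbins
for x in sample_positions(200_000):
    counts[min(int(x / (2.0 * pi) * nbins), nbins - 1)] += 1

s_est = visibility(max(counts), min(counts))
print(f"visibility estimated from 200000 single-photon detections: {s_est:.3f}")
```

Each individual detection is a single dark spot; only the accumulated counts trace out the fringes, and the estimated visibility does not depend on how slowly the photons arrive.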
Interference experiments with extremely weak intensities had already been performed early in the twentieth century (Taylor, 1909; Dempster and Batho, 1927). To see anything at all on the photographic
plate under such conditions, extremely long exposure times had to be chosen. The longest measurement in the classic experiment by Taylor (1909)
took three months! There was no appreciable deterioration in the interference pattern visibility, and hence the wave theory was proved to be correct. More recent experiments using photoelectric
detection methods confirmed this result (Jánossy and Náray, 1957, 1958; Reynolds, Spartalian and Scarl, 1969; Grishaev et al., 1972; Sillitoe, 1972). Jánossy and Náray (1958) (compare also Jánossy (1973)) also analyzed in their experiments the question of whether the length of the (optical) paths that the partial beams have traversed before they become reunited has any influence on
the fringe visibility. The authors used a Michelson interferometer with an arm length of 14.5 m, which was significantly larger than the coherence length of the analyzed light. The apparatus was
placed in tunneled-out rock, 30 m below the surface, to guarantee the required stability of the experimental setup. The measurements were fully automated to avoid disturbing the thermal equilibrium
by human observers, since temperature fluctuations of just 0.001 K caused a shift in the interference pattern. The authors were disappointed because, even for strongly attenuated incident light
intensities, no fringe visibility deterioration could be observed. (To appreciate fully the experimental accomplishment, we have to understand that it is a simple matter to cause the disappearance of
the interference pattern – all that is required is to take no care when performing the experiment!) Another original experiment was carried out by Grishaev et al. (1972), who analyzed the ability of
synchrotron radiation, generated by electrons orbiting in a storage ring, to interfere. The measurement of the intensity of the emitted radiation was used to determine the number of electrons stored
in the ring, and this allowed the registration of an interference pattern for a fixed number of electrons N . It turned out that the visibility of the interference pattern remains the same for N
varying between 16 and 1. In the case of a single electron, about 1.3 × 10⁵ photons entered the interferometer per second, which implies a mean temporal distance of 0.8 × 10⁻⁵ s, orders of magnitude larger than the interferometer passage time of 10⁻⁹ s. Finally, let us mention that time domain interference (the beats mentioned in Section 3.2) for low intensities, as in the aforementioned
experiment, has been observed (Davis, 1979). Dirac’s statement that “a photon interferes with itself” (Dirac, 1958) can therefore be considered to be experimentally proven. This fact is completely
incomprehensible when the photon picture – in the form of localized energy packets – is made objective in accordance with the classical concepts of reality. This is, however, forbidden by quantum
mechanics! It is forbidden (by quantum mechanics) to draw conclusions about certain physical properties before the measurement of those properties is complete. In the case of beamsplitting, this
means that it is possible to conclude from the click of a counter detecting a photon as a “particle” in
one of the partial beams that the corresponding path was taken, but not that this would have been so in the absence of a counter. Indeed, the behavior of the photon must change considerably, as the
observable “interference with itself” proves, when the beamsplitter is put together with other devices to form a Michelson interferometer. The experimentally “posed question” is then completely
different: we are not at all interested in the path the photon has “really” taken, but in the precise point of its arrival at the focal plane of the telescope. Under such circumstances the wave
picture is the proper one (until the act of photon detection). We also have to accept in this case the wave–particle duality of light as a matter of fact: depending on the experimental conditions,
the particle or the wave nature of light manifests itself. Light is neither a particle nor a wave; it is something more complicated, which sometimes shows its particle side and sometimes shows its
wave side. Quantum mechanics succeeded in synthesizing these two contradicting aspects, but the price to pay was the classical reality concept. Quantum mechanics describes the state of the total
field consisting of a transmitted wave and a reflected wave (in the following we label the two waves 1 and 2) by a wave function, which in the case of a single incident photon is a superposition,

|ψ⟩ = (1/√2) (|1⟩₁|0⟩₂ − |0⟩₁|1⟩₂), (7.6)
of the states “the photon is in beam 1 and not in beam 2” and “the photon is in beam 2 and not in beam 1” (see Section 15.4). This means in particular that the assumption that the photon is either in
the one or in the other beam is wrong (in such a case there would be no interference), and so we face a fundamental uncertainty (going far beyond simple ignorance and hence classically not
interpretable) about the path of the photon. The minus sign in Equation (7.6) takes into account the reflection induced phase change of π. The casual saying, the photon is both in one partial beam as
well as in the other partial beam, comes closest to reality: when the photon wants to interfere with itself, it must somehow “find out” the distances between both mirrors and the beamsplitter (see
the Michelson interferometer in Fig. 7.1). We cannot envisage, however, the simultaneous “presence” of the photon in both beams as a split of the photon energy between the two beams where it would be
objectively (in the sense of a classical description of nature) localized. In such a case we run into a dilemma when trying to describe the experiment complementary to interference, the
(photoelectric) detection of the photon in one of the two beams. The photodetector responds within a very short time (see Section 5.2), and, due to the finite propagation velocity of the
electromagnetic energy, there would be not
enough time left to “get back” the remaining required energy from the other beam, at least not when the corresponding path is too long. The path is not subjected to any fundamental limitation; in
practice, extremely long paths can be achieved using optical fibers, avoiding divergence of the light bundle (as it takes place in the vacuum). We come to the same conclusion as we did for the case
of spontaneous emission (Section 6.3): the classical concept of continuously distributed electromagnetic energy (according to the values of the electric and the magnetic field strengths) in space
must be abandoned in the case of single photons. It is important to note that the statement regarding a self-interfering photon must not be taken literally in the sense of being verifiable with a
single measurement. If we consider the detection of a possible interference pattern using a photographic plate, we see that a single photon delivers a single black spot on the plate which can equally
well be an “element” of an interference pattern or a member of a completely random distribution of dark spots. Only in the case where the photon produces a black spot at a position for which an exact
zero of the intensity is predicted can a single measurement provide definitive conclusions regarding the interference properties of the photon, namely that the predicted interference pattern does not
agree with the facts. (To falsify a statement, we only need a single contradicting experiment.) In reality, such cases are of academic importance only because the ever present imperfections (such as
the finite bandwidth of radiation, deviations in the reflectivity of the semitransparent mirror from 1/2, etc.) hinder the minima of the intensity in reaching zero. As always, to verify quantum
mechanical statements we have to perform many individual experiments (under identical physical conditions). In the case of interference, the black spots obtained in this way all together form the
interference pattern (photographed one on top of the other). It is important that the time difference between two individual experiments can be chosen to be arbitrarily large so that each of the
experiments involves only a single photon. Hence, the appearance of an interference pattern is always observable only on an ensemble of (independent) individual systems. On the other hand, an
interference pattern can obviously be formed only when the photons follow certain “rules.” These rules state that certain positions (the areas where interference maxima are formed) are preferable
“addresses” for the photons to reside at, while other positions (corresponding to minima) are avoided. From this perspective, the photon indeed interferes with itself, but we recognize this specific
property of a single photon only by the behavior of a group of photons. Let us note that the quantized theory of the electromagnetic field encompasses the particle equally as well as the wave aspect.
In particular, beamsplitting can
be described in such a way that (in complete correspondence to classical theory) the electric field strength – now described by an operator – of the incident wave is decomposed into parts
corresponding to the reflected wave and the transmitted wave (see Section 15.4). We find then the surprising (at least at first glance) result that the classical interference pattern is quantum
mechanically exactly reproducible independent of the (perhaps even non-classical) state of the incident light (see Section 15.5). We should point out that the validity of the classical description of
interference in the domain of microscopically small intensities means that conventional spectrometers, all using the interference principle, are functioning even when a single photon is incident. The
photon will be spectrally decomposed; i.e. as a wave it is split into various partial waves corresponding to different frequencies. The measurement can be considered complete only when a result is
indicated. For this purpose the outgoing light must be registered. The photon will be found in one of the partial beams, and we measure a certain value of the frequency (with a precision defined by
the resolution of the spectrometer). Repeating the experiment frequently under identical conditions gives us a frequency spectrum. Finally, it is appropriate to comment on the concept (mainly
attributable to Louis de Broglie and Albert Einstein) of a guide wave for the photon – conceived as a localized energy packet. Some authors hoped to make the interference behavior of the photon
comprehensible by invoking this theory. According to the concept, the photon would “in reality” (we might think of a Michelson interferometer, for example) take only one path and just the “guide
wave” would be split by the semitransparent mirror. One part of the “guide wave” would travel along one arm together with the photon, while the other would have to travel alone along the other arm.
After reunification of the two parts, the “guide wave” would possess enough information to direct the photon to a position in agreement with the classical description of interference. In this
picture, the “guide wave” behaves exactly as an electromagnetic field in classical theory. The existence of the electromagnetic field is beyond any doubts – at least when many photons are
simultaneously present – and hence there is, in our opinion, no reason to introduce a new physical quality in the form of a “guide wave.” Although we satisfy our desire to objectify the motion of the
photon in space, the price to pay is, as Einstein put it, a world inhabited by “ghost fields.” Each beamsplitting without later reunification would lead (in the case of a single incident photon) to a
guide wave “being without a job.” The wave would split again at a second beamsplitter, etc. and all these “jobless” guide waves would “haunt” us until the news about the “death” of their lost
protégé ended this horror!
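The operator decomposition of beamsplitting mentioned above (see Section 15.4) can be sketched numerically. The following is a minimal model, assuming a symmetric 50/50 beamsplitter and a truncated Fock space; the mode labels and dimensions are illustrative choices:

```python
import numpy as np

def annihilate(dim):
    """Annihilation operator on a Fock space truncated at dim - 1 photons."""
    return np.diag(np.sqrt(np.arange(1, dim)), k=1)

dim = 3
a = np.kron(annihilate(dim), np.eye(dim))   # input mode a
b = np.kron(np.eye(dim), annihilate(dim))   # input mode b

# operator decomposition at a symmetric 50/50 beamsplitter:
# reflected and transmitted output modes (cf. Section 15.4)
c = (a + b) / np.sqrt(2)
d = (a - b) / np.sqrt(2)

# one photon in mode a, vacuum in mode b
fock = lambda n: np.eye(dim)[n]
psi = np.kron(fock(1), fock(0))

mean_nc = psi @ (c.conj().T @ c) @ psi   # 0.5: half the energy per output,
mean_nd = psi @ (d.conj().T @ d) @ psi   # 0.5: just as for a classical wave
coinc = psi @ (c.conj().T @ d.conj().T @ d @ c) @ psi
# coinc = 0.0: the two output detectors never click together, so the
# photon itself is found undivided in one output or the other
print(mean_nc, mean_nd, coinc)
```

The mean photon numbers split like the classical intensity, yet the coincidence rate vanishes: the wave aspect and the indivisibility of the photon coexist in the quantized description.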
7.3 Delayed choice experiments
As already mentioned in Section 7.2, we can build an interferometer by supplementing the beamsplitter with additional mirrors. On doing so, the photon behaves as a wave
and splits into two equal parts, whereas on direct observation of the process of beamsplitting we see that the particle remains indivisible. We can conclude that the photon behaves as either a
particle or a wave depending on the particular experimental setup used. We can “cheat” the photon by delaying the decision on the choice of experimental setup until after it has passed through the
beamsplitter. If the photon found at its arrival at the beamsplitter a situation requiring it to “reveal” its particle properties, it could confidently “choose” to take one of the paths. But what
would happen if, after a time, it were asked to participate in the formation of an interference pattern, and hence to take both paths? Admittedly, such a point of view is rather naive – the photon
would have to be equipped with clairvoyant abilities to be able to obtain the necessary information about the complete experimental setup at the (first) beamsplitter because its field distribution at
this time would be unable to “feel” the additional mirrors, which, in principle, could have been placed arbitrarily far away. Nevertheless, the problem stimulated several experimentalists to
construct so-called “delayed choice experiments.” We will briefly discuss one of them below (Hellmuth et al., 1987). The experiment was based on a Mach–Zehnder interferometer setup (see Fig. 7.2).
The appearance of interference is detected by the different light intensities at the outputs. The actual intensity ratio between the outputs depends on the setting of the interferometer (the
difference of the interferometer arm lengths). In particular, the interferometer can be tuned so that all the light leaves through the same output. The subsequent single incident photons are detected
in one or other of the outputs; the different frequencies of these two kinds of events indicate the interference of the photon with itself. The details of the particular experiment were as follows
(Fig. 7.2). A good approximation of a single photon source was achieved by strongly attenuating pulses of 150 ps duration, so that the mean photon number per pulse was only 0.2. A Pockels cell was
placed in the upper arm of the interferometer, forming, together with a polarizing prism, an electro-optical switch. On applying a voltage to the Pockels cell, birefringence was induced, and the
polarization direction of the incident (linearly polarized) light was rotated by 90° and was thus deflected by the prism from the interferometer; the interferometer arm was blocked. When the voltage
was removed the polarization rotation did not take place and the light passed through the polarizer without changing its direction. The Pockels cell was operated in such a way that the upper
interferometer arm was blocked and was opened only after
the photon had passed through the beamsplitter at the entrance.
Fig. 7.2. Delayed choice experiment. PC = Pockels cell; POL = polarizer; D1, D2 = detectors.
More precisely, the photon was in the glass fiber between the beamsplitter at the entrance and the Pockels cell. Delay lines in the
form of a 5 m long single-mode optical fibre had been inserted into the optical paths of both interferometer arms to allow enough time for the Pockels cell to be switched off. This device presented
the photon with the dilemma discussed at the beginning of this section, but the experiment showed that the photon was not impressed by it in any way: it chose its behavior only in accordance with the
experimental conditions it found at its moment of arrival at the given place, and identical interference patterns were observed independent of whether the upper interferometer arm was always open or
only opened when the photon had just passed through the beamsplitter.
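The single-photon statistics at the two outputs can be sketched with a toy model (ideal lossless 50/50 beamsplitters are assumed; the phase convention is an arbitrary choice):

```python
import numpy as np

def mz_probabilities(phase, upper_arm_open=True):
    """Single-photon detection probabilities at the two outputs of a
    Mach-Zehnder interferometer (ideal 50/50 beamsplitters, arm phase
    difference `phase`).  Blocking the upper arm removes the
    interference; the blocked photons are deflected out entirely."""
    # amplitudes after the first beamsplitter: 1/sqrt(2) in each arm
    upper = (1 / np.sqrt(2)) * np.exp(1j * phase) if upper_arm_open else 0.0
    lower = 1 / np.sqrt(2)
    # the second 50/50 beamsplitter recombines the two arms
    d1 = (upper + lower) / np.sqrt(2)
    d2 = (upper - lower) / np.sqrt(2)
    return abs(d1) ** 2, abs(d2) ** 2

# tuned interferometer: every photon leaves through output D1
print(mz_probabilities(0.0))
# blocked upper arm: no interference, probability 1/4 at each detector
# (the other half of the photons is deflected out of the interferometer)
print(mz_probabilities(0.0, False))
```

The probabilities depend only on the configuration the photon actually encounters, in accordance with the experimental finding that switching the arm "too late" changes nothing.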
7.4 Interference of independent photons
It is a well known fact in optics that light waves emitted from different light sources (or from different points of the same source) cannot be made to
interfere. The reason can be traced back to fluctuations of the phase of the electric field strength. (The emitted light is formed by contributions from individual independently emitting atoms –
which therefore have statistically random phases – and, because the elementary emission process, as described in Section 6.2, has a short duration, both the phase and the amplitude of the total field
changes in an uncontrollable manner.)
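The washing-out of the pattern by uncorrelated phase fluctuations can be illustrated numerically (a sketch with arbitrary fringe parameters; each "shot" stands for one coherence-time snapshot of the slowly drifting relative phase):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0, 200)   # position on the observation screen
k = 2 * np.pi                     # arbitrary fringe wavenumber

def exposure(n_shots, independent):
    """Long-exposure pattern from two equal beams, I = 1 + cos(k*x + phi).
    independent=True models uncorrelated sources: the relative phase phi
    is redrawn at random for every shot.  independent=False mimics the
    conventional experiment, where both partial beams copy the master
    beam's fluctuations and the relative phase stays fixed."""
    total = np.zeros_like(x)
    for _ in range(n_shots):
        phi = rng.uniform(0, 2 * np.pi) if independent else 0.0
        total += 1 + np.cos(k * x + phi)
    return total / n_shots

stable = exposure(2000, independent=False)   # full-contrast fringes
washed = exposure(2000, independent=True)    # the fringes average away

visibility = lambda I: (I.max() - I.min()) / (I.max() + I.min())
print(visibility(stable), visibility(washed))
```

A single short shot would still show fringes in both cases; only the long-time average over random relative phases is featureless.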
The phase fluctuations do not matter in the standard interference experiment (in which partial waves originating from the same primary beam interfere) because the phase changes in the partial beams
are exact copies of the phase fluctuations of the master beam. Consequently, the relative phase between two different beams determining the position of the interference pattern according to Equation
(3.14) remains unaffected by the phase fluctuations and is determined only by the geometry of the setup. Only when the path difference between the partial waves exceeds the coherence length of the
light used do the phases generally fail to “match” each other, because at a given time the (individual) phases have on average an approximately constant value only over distances of the order of the
coherence length; over longer distances the phase changes randomly. Thus, in the case of a Michelson interferometer, the interference pattern vanishes when the arm difference exceeds the coherence
length. For independently generated optical beams the phases fluctuate in a completely uncorrelated way and the interference pattern is shifted perpetually by random amounts so that it is completely
“washed out.” So, can we say that the reason we do not observe an interference pattern in the experiment is the large observation time? However, because the individual phases observed at a fixed
point do not change significantly within times of the order of the coherence time (by this we mean the coherence length divided by the velocity of light), an interference pattern should be detectable
at a time scale of the order of the coherence time. Indeed, it is not too difficult to fulfil the above requirement in the present experimental state of the art. For this purpose an electro-optical
shutter or an image converter which is gated on for a short time by an electric control pulse could be used. This is not the end of the story, however. The number of photons must be great enough to
be able to draw definitive conclusions about an interference pattern from the blackened spots on a photographic plate (or from photoelectrically obtained measurement data). It turns out that the
requirement cannot be met in practice with thermal light sources (gas discharge lamps, etc.). Let us emphasize, however, that there are no obstacles in principle. According to Planck’s radiation law,
the spectral energy density, and with it also the number of photons incident on a detector within the coherence time, increases with increasing temperature. However, for the discussed interference
experiment, the required temperature would be unrealistically high! There is still hope, however, as we can turn to novel light sources – lasers – which can deliver light with fantastically high
spectral densities. Spatial interference between two light pulses – with slightly different propagation directions – was observed in the early 1960s (Magyar and Mandel, 1963). The pulses were emitted in an irregular sequence (in the form of so-called “spikes”) by two independent ruby lasers (Fig. 7.3).
Fig. 7.3. Interference of two laser beams. IC = image converter; L = laser; M = mirror. From Magyar and Mandel (1963).
Because the phases of the laser pulses change randomly, the interference pattern changes its
position from shot to shot. The high intensity of the laser light created sufficiently many photoelectrons in each run – Magyar and Mandel used an image converter, which was electronically gated on
for a short time only when both lasers simultaneously emitted a pulse – such that an interference pattern was formed. A year before this, quantum beats between two laser waves with slightly different
frequencies had been demonstrated (Javan, Ballik and Bond, 1962). This experiment is easier to achieve because one can work in the continuous regime. The photocurrent of a photomultiplier follows the
time evolution of the total intensity of light and hence contains, according to Equation (3.14), a contribution oscillating at the difference frequency of the lasers, and this represents the beat
signal. Beats are very easy to observe using gas lasers because their frequencies – in each of the excited eigenoscillations or “modes” – are very sharp (the linewidth is of the order of 10⁻³ Hz).
Under normal working conditions, and over rather long time scales, we can observe frequency shifts caused primarily by mechanical instabilities of the resonator setup (they are typically of the order
of 100 kHz/s). Precisely this frequency “drift” can be monitored in beat measurements with high accuracy, which was in fact the primary aim of the experiment.
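A minimal numerical sketch of such a beat measurement, with radio-frequency stand-ins for the optical frequencies (all numbers here are illustrative choices, not the experimental values):

```python
import numpy as np

fs = 100_000.0                  # sampling rate (Hz), chosen freely
t = np.arange(0, 0.1, 1 / fs)   # 0.1 s record
f1, f2 = 20_000.0, 21_000.0     # two "laser" frequencies, 1 kHz apart

# total intensity of the superposed fields (cf. Equation (3.14)):
# the photocurrent follows |E1 + E2|^2 and beats at f2 - f1
field = np.exp(2j * np.pi * f1 * t) + np.exp(2j * np.pi * f2 * t)
intensity = np.abs(field) ** 2  # = 2 + 2*cos(2*pi*(f2 - f1)*t)

spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
beat = freqs[spectrum.argmax()]
print(beat)                     # the difference frequency, 1000.0 Hz
```

The detected intensity itself carries no component at f1 or f2, only at their difference, which is why a slow photodetector suffices to monitor the drift.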
Interestingly enough, such a beat experiment was performed before the laser era (Forrester et al., 1955), and we do not want to miss the opportunity to comment in some detail on this pioneering work.
Not only did the authors have to master enormous experimental problems, but they also had to counter the doubts presented by theorists that the desired interference effect was even possible. A
microwave excited electrodeless gas discharge in a tube filled with the mercury isotope Hg202 was used as a light source. The green mercury line at a wavelength of 546.1 nm was chosen for the
measurement, and it was split into different components by an external magnetic field, so exploiting the Zeeman effect. The components are, when observed in the direction orthogonal to the magnetic
field, linearly polarized either parallel to the magnetic field (π component) or orthogonal to it (σ component). The authors intended to detect beats between two components (polarized, of course, in
the same way). At the rather modest intensities Forrester et al. had to work with, shot noise proved to be the biggest obstacle to observation. (Because white shot noise was present, there was also a
noise component oscillating at the beat frequency.) During a beat period, however, enough photons impinged onto the cathode (about 2 × 10⁴) to allow the formation of the beat signal. However, the
experimental conditions were such that the phases of the light waves (at each moment) were constant only over very small areas of the illuminated cathode surface, the so-called coherence areas, and
changed from one area to another in a completely random way. Hence, the contributions from the individual coherence areas to the total alternating current of the photocathode, i.e. to the beat
signal, join together with random phases, making the signal almost vanish in comparison to the shot noise (originating from the direct current part of the photocurrent). The estimation of the signal
to noise ratio produced a value of about 10⁻⁴. In such a situation, only “marking” the signal could help. The authors used a well known technique: they modulated the signal in its intensity (at an
unchanged noise level) and then amplified it with a phase sensitive narrowband amplifier to distinguish it from the noise. The required modulation was accomplished with the help of a rotating λ/2
plate with a polarization foil positioned behind it. The rotating plate caused a rotation of the polarization direction of the two σ components whose interference was to be measured, the speed of
rotation being twice the speed of the plate. The polarization foil, which transmits only light of a certain polarization, converted the rotation of the polarization into a periodic intensity
modulation. The other σ components and the π components were also influenced; however, the intensities of the π components are minimal when the σ component intensities become maximal. When the light
of the analyzed spectral line is unpolarized as a whole (to achieve this, any polarization already present had to be compensated for), the total intensity of the spectral line impinging on the
photocathode, and hence also the receiver noise, remains constant in time, as required. Using the described modulation technique, Forrester et al. managed to enhance the signal to noise ratio from
10⁻⁴ to a value of 2. However, even under these circumstances, in the authors’ words, “a great amount of patience was needed to obtain data.” The observed beat frequency had to be chosen to be
relatively high to be able to attain the required separation of the Doppler broadened Zeeman components. In the experiment it was 10¹⁰ Hz, i.e. in the microwave region. Therefore, a microwave
resonator was used to measure the beat signal excited by the electrons coming from the photocathode. From the agreement between the measurement data and the result of calculations performed by them,
Forrester et al. concluded that they had succeeded in observing the interference. Let us now return to spatial interference which demonstrates so clearly the interference phenomenon. As previously
explained, the interference between two intense independent light beams is an experimental fact. For radio engineers this comes as no surprise because in the early days of radio communication they
quickly learnt that radio waves coming from different sources have the unfortunate property that they interfere. Researchers in optics have had to wait to find an analog to the radio transmitter, and
this wait ended with the invention of the laser. What happens to the ability of the waves to interfere at very low intensities? Do laser beams interfere after they are strongly attenuated? At first
glance it seems that the answers are no, for fundamental reasons. Laser beams (we mean continuous laser operation) have a finite linewidth and hence a defined coherence length. When the intensity is
decreased to such an extent that the coherence volume (i.e. a cylinder with its base equal to the beam cross section and its height equal to the coherence length) contains only a few photons, we run
into the same difficulties as in the case of thermal radiation. In contrast to this, however, there is now a way out of the dilemma. The intense light beams, from which we “split off” a tiny part for
the interference experiment, can be used to obtain information about the phase of our attenuated beam. Having this information at hand, we can control a shutter so that the photoelectrons are
detected only when the phase between the two interfering beams has a prescribed value. The experimental setup would basically have the following form (Paul, Brunner and Richter, 1965; Paul, 1966).
Light waves emitted from two identical lasers each impinge on a beamsplitter which reflects only a small fraction of the incident radiation (Fig. 7.4). The reflected beams (being only slightly
different in their directions of propagation) – after being further attenuated by an absorber when necessary – are made to interfere on an observation screen S1 . Before this, however, they pass an
electronically controlled shutter which is opened only when the momentary interference pattern on screen S2 generated by
the transmitted (intense) laser beams has the prescribed position.
Fig. 7.4. Experimental setup for the observation of interference between small numbers of independent photons. CD = controllable diaphragm; L = laser; S1, S2 = screens; M = weakly reflecting mirror.
In this way it is also possible to detect interference patterns for very low intensities – the exposure time must be just long
enough. The described experiment was successfully achieved in the 1970s by Radloff (1971). (Radloff had earlier observed beats between two independent strongly attenuated laser beams (Radloff,
1968).) The interference was observed for beams generated by two He–Ne lasers operated in the single-mode regime. Good mechanical stability was achieved by using a cylindrical quartz block with two
longitudinal drills. To one face of the quartz block were fastened two ceramic hollow cylinders (for the purpose of piezoelectric length change). On the ends and the other face of the quartz block
were the laser mirrors. Both laser tubes were inserted into lateral slots of the quartz block. To obtain the control signal for the diaphragm (formed by an electro-optical shutter) the beat signal
between the two (intense) laser beams was detected.3 The procedure was such that when the beat frequency was within the range 3 to 70 kHz, the beat maxima were amplified and converted into
rectangular pulses, which opened the electro-optical shutter. The mean photon current in the attenuated laser beams was 10⁵ photons/s. The opening time of the shutter varied between 10⁻⁴ and 10⁻⁵ s
so that only a very few photons arrived at the photographic plate. Nevertheless, in this case – for a measurement time of 30 min – a well defined interference pattern could be observed.
3. As already noted, the finite linewidth of the laser radiation (for the gas laser) is due to the long-term frequency drift originating from the mechanical instability of the resonator. The drift leads to a frequency difference between the two lasers, and this is the main reason for the displacement of the interference pattern. Observing the time evolution of the interference at a fixed position it appears as a beat, and obviously the interference pattern always has the same position when the beat amplitude takes the same value (for example its maximum).
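This phase-gated detection scheme can be sketched as a post-selection on simulated detection events (illustrative only; the gate width, event numbers and phase model are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n_events = 20_000
# drifting relative phase between the beams, assumed known at each
# detection time from the beat of the intense reference beams
phi = rng.uniform(0, 2 * np.pi, n_events)

def sample_position(p):
    """Draw one detection position from I(x) ~ 1 + cos(2*pi*x + p), x in [0, 1)."""
    while True:
        x = rng.uniform()
        if rng.uniform() < (1 + np.cos(2 * np.pi * x + p)) / 2:
            return x

x = np.array([sample_position(p) for p in phi])

all_counts, _ = np.histogram(x, bins=20, range=(0, 1))
# keep only spots recorded while the relative phase was near zero
gated = x[np.abs((phi + np.pi) % (2 * np.pi) - np.pi) < 0.3]
gated_counts, _ = np.histogram(gated, bins=20, range=(0, 1))

vis = lambda c: (c.max() - c.min()) / (c.max() + c.min())
print(vis(all_counts), vis(gated_counts))
```

The unconditioned spots are featureless; selecting only those recorded at a prescribed relative phase recovers the fringes, exactly as in the afterwards-"reconstruction" described below.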
In fact, a quantum mechanical description of this interference phenomenon (see Section 15.5) leaves no doubts that the visibility of the interference pattern is independent of the intensity.
Interference appears for (in principle) arbitrarily small intensities. The mean number of photons reaching screen S1 within the coherence time can, in fact, be much smaller than one! Then, during
most of the time intervals when the diaphragm is opened, nothing happens, but, from time to time, a photon is registered that – like an individual piece of a mosaic – contributes to the interference
pattern. Instead of making the interference pattern directly visible in this way, we can “reconstruct” it afterwards. To do this we dispense with the diaphragm, which naturally causes the blackened
spots on screen S1 to be distributed completely at random, i.e. no interference can be recognized. For each blackened spot an observer4 registers the time of registration, while a second observer,
using the interference pattern formed by the intense laser beams, registers the relative phase between the two waves as a function of time. After finishing the experiment the first observer can still
“recover” the interference pattern when the measurement data of the second observer are available. It just remains to select those blackened spots which correspond to moments when the relative phase
had the same (prescribed) value. Let us note, by the way, that it is not necessary for the observers to make simultaneous measurements. Using a sufficiently long delay line (a glass fiber for
example) the second observer can, at least in principle, perform observations after the first observer has finished taking measurements. In this case, the reconstruction of the interference pattern
after both measurements are completed is possible; however, we must know the exact time spent by the light in the delay line to be able to count back reliably. All of this is not very surprising
because the interference between two intense laser beams represents a macroscopic process and the classical concept of reality applies. This means that we can ascribe a definite position to the
interference pattern in the sense of an objective fact, independent of whether (and when) we measure it or not. A possible measurement is then nothing more than taking notice of a previously existing
fact. Thus we can also equally well reverse the time order of the measurements performed by the two observers. How should we imagine the interference between independent photons? Obviously the photon
concept completely fails. Because the position of the interference pattern is determined through parameters of both beams (propagation direction and phase while assuming coinciding frequencies and
polarization directions) we would have to assume that always one photon (at least) from one beam is cooperating in some way with (at least) one photon from the other beam.
4. The term observer is understood quite generally to include automated measurement devices storing data (such as magnetic tape).
However,
this is extremely difficult to imagine, even for higher intensities, when we are dealing with spatially localized light particles. At very small intensities this is simply not possible because it is
extremely rare that a photon from one beam and, simultaneously, a photon from the other beam pass the shutter during its opening time. We can illustrate our considerations using a drastic example.
Let us assume that during the opening time of the shutter each laser sends, on average, 1/1000 of a photon through the diaphragm, then only in two cases out of 1000 does one photon (either from the
first or the second beam) hit the photographic plate, while only once in 10⁶ cases does the desired coincidence take place. We have to resort to the wave picture! The discussed interference is then
immediately understandable within the framework of classical electrodynamics, as explained in Section 3.2, and the intensity does not matter at all. It becomes problematic again only when we try –
for the discussed small intensities – to bring the wave picture into harmony with the quantization of the radiation energy. Indeed, all the problems with comprehending this situation arise from the
inadmissibility of making the photon concept “objective.” The concept that each wave pulse always carries a well-defined photon number (taking, for example, the values 0, 1, 2, etc.) is
unjustified according to the insight provided by quantum mechanics. Let us confine ourselves to the case when the photon number can take either the value zero or one; quantum mechanics, apart from
the states “there are certainly no photons present” (represented by the symbol |0⟩) and “there is exactly one photon present” (represented by |1⟩), allows for a whole range of possibilities, namely the
(arbitrary) superpositions a|0⟩ + b|1⟩ of the two states (a and b are complex numbers satisfying the normalization condition |a|² + |b|² = 1). In particular, the states can be of such a kind that the
probability |b|² is arbitrarily small. Nevertheless, such states are fundamentally different from the corresponding mixture describing an ensemble, the elements of which are either in state |0⟩ or –
with the correspondingly small probability – in state |1⟩. For example, the phase of the electric field strength is, in a superposition state, more or less well defined, whereas for states with a
sharp photon number – and hence also for a mixture of such states – it is completely uncertain (compare with Section 4.3). Indeed, the quantum mechanical description of interference between
independent photons is based on the assumption that the two light beams are each in a superposition state of the aforementioned type. In particular, it was proven (Paul, Brunner and Richter, 1963)
that coherent states of the electromagnetic field,5 as mentioned in Section 4.4, give rise to an interference pattern which – independent of the intensity – corresponds exactly to the predictions of
classical theory using the 5 For very small mean photon numbers (|α|2 1) the “admixture” of more-photon states |2, |3, etc. does not
practically play any role.
concept of waves having definite phases and amplitudes (see Section 15.5). Actually, laser light can be described, as discussed in Section 4.4, to a very good approximation by coherent states, even
after it has been (in principle) arbitrarily strongly attenuated. Let us look into the special case of an extremely small intensity so that the coherent state in Equation (4.5) can be approximated by
|α⟩ ≈ |0⟩ + α|1⟩, and the direct product of the two interfering beams can be written in the form
|α1⟩1 |α2⟩2 ≈ |0⟩1 |0⟩2 + α1 |1⟩1 |0⟩2 + α2 |0⟩1 |1⟩2.   (7.7)
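The fringe contrast implied by the one-photon terms of Equation (7.7) can be checked numerically (a sketch; equal real amplitudes are assumed for simplicity):

```python
import numpy as np

def fringe(alpha1, alpha2, phase):
    """Single-photon detection rate at a screen point where the two weak
    beams (each approximately |0> + alpha_i |1>) meet with relative
    phase `phase`; only the one-photon terms of Equation (7.7)
    contribute, giving |alpha1 + alpha2 * exp(i*phase)|^2."""
    return np.abs(alpha1 + alpha2 * np.exp(1j * phase)) ** 2

phases = np.linspace(0, 2 * np.pi, 400)

def visibility(alpha):
    rates = fringe(alpha, alpha, phases)
    return (rates.max() - rates.min()) / (rates.max() + rates.min())

# the fringe contrast does not depend on how weak the beams are:
# both values are ~1, only the overall detection rate shrinks
print(visibility(0.1), visibility(0.001))
```

Attenuation rescales the detection rate but leaves the visibility untouched, in line with the quantum mechanical result quoted below.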
The first term does not play any role in describing the response of the detector – the detector does not react to the vacuum state of the field. The term relevant to the interference has exactly the
same structure as that in the case of self-interference of the photon (see Equation (7.6)). The position of the interference pattern is determined by the relative phase of the two complex numbers α1
and α2 . (Equation (7.7) is more general than Equation (7.6) because the intensities of the two beams can also be different.) As far as the formal description is concerned, there is no difference
between the self-interference of a photon and the interference of two independent photons! However, we have to emphasize that the photons “admitted” to interference are not statistically independent.
They are, in reality, phase correlated – more precisely, this is valid for the corresponding fields – and because only one photon is ever detected, the physical conditions are indeed quite similar to
those present in the case of the interference of a photon with itself. The difference is only evident in the way in which the required phase correlations are generated: conventional experiments use
beamsplitting for this purpose, while in the case of independent laser beams the above described “preparation” through the technical arrangement of the measurement selects proper “pieces” from the
total radiation field. The almost ideal correspondence between the quantum mechanical and classical descriptions shows that classical electrodynamics comes much closer to quantum mechanics than
classical mechanics, which is able to reproduce quantum mechanical predictions only approximately at best. The reason for this is that electrodynamics is already a wave theory and, within its area of
applicability – including interference phenomena – is able to perform equally well as wave mechanics or, in other words, quantum theory. This fact may seem surprising, especially for a physicist
having preconceptions of quantum mechanics. Such a physicist can easily be led astray (and this relates to the author’s own experience) by the quantum mechanical uncertainty relation for the
phase ϕ and the photon number n already derived by Dirac (1927) (compare with Heitler (1954)):
Δn Δϕ ≥ 1/2.   (7.8)
Since a prerequisite for the formation of a well visible interference pattern is the presence of sharp phase values of the (independent) partial beams, one is tempted to draw from Equation (7.8) the
conclusion that, with decreasing mean photon number and consequently decreasing dispersion Δn, the phases should fluctuate more strongly and thus wash out the interference pattern more and more. This
erroneous conclusion is based on the fact that the quantum mechanical phase (defined by the corresponding operator) contains a contribution representing the vacuum fluctuations of the electromagnetic
field mentioned in Section 4.3. However, a photodetector does not take any notice of them. The result is that the interference pattern loses none of its visibility in the correct quantum mechanical
description, even for smaller and smaller mean photon numbers. Let us return, after this short detour, to the question of how to imagine the interference between independent photons. From the above
discussion it follows that it is uncertain in principle how many photons (if any) are in each of the beams. (The physical situation is in this respect basically different from that of a photon
interfering with itself. In that case we could safely assume we were dealing with a single photon.) When a photon is detected on the observation screen, it has obviously come from the total field
formed by the superposition of the two light waves. The “localization probability” of the photon (on the screen) follows the maxima and minima of the intensity of the superposed field. It is
impossible (in principle) to “read out” from which laser the detected photon originated! Photons are not individuals with a “curriculum vitae” that can be traced back! What happens in the experiment
is simply nothing more than an act by the detector of taking the energy hν from the field, and the question of where it came from is already physically meaningless in the classical theory (which, in
fact, deals only with waves). The facts could be stated in the language of photons as follows. A photon is detected, and we can state that this, with absolute certainty, is something other than a
photon in a single beam because its behavior is determined by the physical properties of both beams. When the “identity” of the photon is checked by a suitable measurement (in the sense of belonging
to one of the two beams, which have different propagation directions), we find that the interference pattern is destroyed. We could also use the following vague formulation: the fundamental
uncertainty relating to which of the two beams the photon came from is an essential element of the interference process when we are dealing with independent photons. Similarly, the interference of a
photon with itself is associated with a fundamental uncertainty about the path that the (one!) photon took. Finally, let us not ignore the fact that additional confusion in the discussion about
interference between independent photons was introduced by an often cited statement by Dirac (1958). Dirac not only asserted the interference of a photon
with itself, but also declared that it is the only possible kind of interference. (His formulation was: each photon interferes only with itself; interference between different photons never occurs.)
It follows from the context, however, that Dirac had a conventional interference experiment in mind. Some researchers found it difficult to ignore this apodictic statement of such an authority as
Dirac. There were attempts to “save” Dirac’s statement by postulating that the emission from the two lasers cannot be guaranteed to be, in reality, independent; rather that each of the photons is
generated simultaneously in both lasers and so interferes just with itself. The inadequacy of such an approach can be easily proven with the help of a delay line, for example, which can be used in
such a way that two photons generated at different times interfere.
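The classical picture behind this discussion, two waves of equal amplitude with an uncontrollable relative phase, can be sketched numerically. In the script below (all names and numbers are illustrative, not taken from the text) each individual realization shows full-contrast fringes, while the pattern averaged over many realizations with random phases is washed out:

```python
import numpy as np

# Two classical plane waves of equal amplitude with an uncontrollable
# relative phase, as for two independent lasers.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 200)  # position in units of the fringe period

def shot_intensity(phi):
    # I(x) = |e^{i 2 pi x} + e^{i phi}|^2 = 2 + 2 cos(2 pi x - phi)
    return 2.0 + 2.0 * np.cos(2.0 * np.pi * x - phi)

single = shot_intensity(rng.uniform(0, 2 * np.pi))
average = np.mean([shot_intensity(p) for p in rng.uniform(0, 2 * np.pi, 2000)],
                  axis=0)

def visibility(intensity):
    return (intensity.max() - intensity.min()) / (intensity.max() + intensity.min())

print(f"single-shot visibility:   {visibility(single):.2f}")   # close to 1
print(f"shot-averaged visibility: {visibility(average):.2f}")  # close to 0
```

Each shot thus carries a perfectly visible pattern; only its position is random, which is why averaging (or a slow detector) destroys the fringes.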
7.5 Which way?

We explained in the preceding section that experimental conditions under which single photon interference phenomena can be observed are quantum mechanically characterized by the
impossibility of predicting anything about the photon paths. But why is this impossible? Let us first analyze the self-interference of the photon. In the Young double slit experiment we could use as
the light source an excited atom and let the photon interfere, and afterwards we could “calmly” measure the recoil of the atom caused by the emission. The propagation direction of the photon is, due
to momentum conservation, opposite to the recoil, and in this way we could find out after the event which of the slits the photon “really” passed through. The weak point of this argument is that
Heisenberg’s uncertainty relation for position and momentum of the atom is not taken into consideration, as was shown by Pauli (1933). Let us go into some detail. To be able to observe interference
the atom must be well localized. We analyze classically how the interference pattern will change when a point-like light source is shifted parallel to the interference screen by a distance δx (Fig.
7.5(a)). We will consider for simplicity a screen with two almost point-like holes, and we will assume further that the holes, the emitter and the point of observation lie in the same plane. Finally,
we will assume for convenience that the emitter is positioned symmetrically between the two holes at position E1 . The intensity at the point of observation is determined by the difference between
the two optical paths from the emitter through the holes H1 and H2 , respectively, to the observation point. By shifting the emitter to position E2 we change the two optical paths s1 and s2 from the
emitter to the corresponding holes, and their difference (which is easily calculated) increases from zero to the value
\[ s_2 - s_1 \approx \frac{2d}{\sqrt{l^2 + d^2}}\,\delta x, \tag{7.9} \]
where we have assumed |δx| ≪ d; d is half the distance between the holes and l is the distance between the emitter and the interference screen. From Fig. 7.5(a) follows the relation \( d/\sqrt{l^2 + d^2} = \sin\alpha \). Finally, dividing Equation (7.9) by the wavelength, λ, we obtain the following expression for the path difference change at the point of observation:
\[ \delta g = \frac{s_2 - s_1}{\lambda} = \frac{2\sin\alpha}{\lambda}\,\delta x. \tag{7.10} \]
Repositioning the emitter has a minor influence on the position of the interference pattern only when δg is small compared to unity. We thus find that the atom must be localized with an accuracy
\[ \Delta x \ll \frac{\lambda}{2\sin\alpha} \tag{7.11} \]
(in the x direction) to avoid the washing out of the interference pattern. Let us now discuss the experimental conditions for the measurement of the atomic recoil. The desired information about the
direction of the emitted photon is delivered by the x component of the photon momentum (Fig. 7.5(b)). Due to momentum conservation, the atomic momentum change is equal in its absolute value to the
photon momentum but has opposite sign. The photon momentum is given by hν/c (see Section 6.9), and from Fig. 7.5(b) we find the momentum change of the atom in the x direction to be
\[ \delta p_x^{(2,1)} = \pm \frac{h\nu}{c} \sin\alpha, \tag{7.12} \]
where the plus sign applies to the lower and the minus to the upper light path. The difference of the two momenta is
\[ \delta p_x^{(2)} - \delta p_x^{(1)} = 2\,\frac{h\nu}{c} \sin\alpha. \tag{7.13} \]
To distinguish between the two optical paths, we must measure the atomic momentum change δpx with a greater accuracy than that given by Equation (7.13). We can imagine that we measure the atomic
recoil with an arbitrary precision; however, what we are really interested in is the change in atomic momentum due to spontaneous emission. This means that our precision condition also concerns the
atomic momentum before the emission: its uncertainty – we think again quantum mechanically – must be subjected to the constraint
\[ \Delta p_x \ll \frac{2\sin\alpha}{\lambda}\,h \tag{7.14} \]
(and we assume that the mean value vanishes to get a static interference pattern). When we observe interference and also wish to measure atomic recoil, we run into a conflict with Heisenberg’s
uncertainty relation. Multiplying the two requirements
Fig. 7.5. Young’s interference experiment. (a) Displacement of a point-like light source by δx. H1 and H2 = pinholes in the interference screen (IS); OS = observation screen. (b) Photon momenta P(1), P(2) and the corresponding atomic momentum changes δp(1), δp(2).
given in Equations (7.11) and (7.14), we obtain the inequality
\[ \Delta x\,\Delta p \ll h, \tag{7.15} \]
which is in strict contradiction to Heisenberg’s uncertainty relation
\[ \Delta x\,\Delta p \ge \frac{h}{4\pi}. \tag{7.16} \]
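The incompatibility of the two requirements can be checked numerically. In the following sketch (the wavelength and angle are arbitrary illustrative values of my own choosing) the product of the localization bound of Equation (7.11) and the momentum bound of Equation (7.14) comes out as exactly h, independent of λ and α, so demanding "much less than" both bounds forces Δx Δp ≪ h, in conflict with Heisenberg's uncertainty relation:

```python
import math

h = 6.62607015e-34         # Planck constant, J s
lam = 500e-9               # illustrative wavelength (assumption, not from the text)
alpha = math.radians(5.0)  # illustrative half-angle subtended by the holes

dx_max = lam / (2 * math.sin(alpha))   # localization needed for fringes, Eq. (7.11)
dp_max = 2 * h * math.sin(alpha) / lam # momentum accuracy needed for which-way, Eq. (7.14)

# The two bounds multiply to exactly h, independent of lambda and alpha;
# requiring dx and dp to be *much* smaller than these bounds therefore
# demands dx * dp << h, which Heisenberg's relation forbids.
product = dx_max * dp_max
print(product / h)  # 1.0 up to rounding
```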
The conclusion is that we can observe either interference or the emission direction (thus determining the trajectory of the photon), but not both, as was claimed at the beginning. Along similar lines, we can refute
another objection which – among other penetrating attempts of Einstein to disprove quantum mechanics – was the subject of the famous Bohr–Einstein debate in 1927 at the Fifth Solvay Conference.
Einstein’s argument was based on the assumption that it is possible “in principle” to measure the recoil the interference screen suffers when the photon is passing because the photon changes its
direction of propagation (see Fig. 7.5). To be able to ascertain from such a measurement the trajectory of the photon, we require, as in the previous case of recoil measurement, the initial momentum
of the screen to be sufficiently well defined. According to Heisenberg’s uncertainty relation, the position of the screen, and with it also the position of the two holes, is then not defined exactly,
and a simple calculation reveals that it is uncertain to such an extent that the interference pattern is completely washed out. While the experiments up to now have been only Gedanken experiments,
recently experimentalists have become interested in the subject. A beautiful experiment was recently performed by American scientists (Zou, Wang and Mandel, 1991), who analyzed interference of light
from different sources. The sources were non-linear crystals, each pumped by a strong coherent pump wave (laser radiation) exciting in them parametric fluorescence (discussed in Section 6.7); see
Fig. 7.6. In this process, both a signal wave and an idler wave are generated. First, the two signal waves were made to interfere. This is by itself a surprising result. The observation of the
interference pattern – resulting from many individual

Fig. 7.6. Interference between two signal waves s1 and s2 generated through parametric fluorescence in two crystals C1 and C2. The interference disappears when the path of the idler, i1, is blocked. i2 is the second idler wave. L = laser beam; BS1, BS2 = beamsplitters; M = mirror; F = frequency filter; B = blocker; D = detector.
events – indeed required certain precautions to be taken: both crystals had to be pumped with the same laser radiation, and the geometry had to be set so that the idler waves coincided. Let us start
with the discussion of the first condition: that the crystals had to be pumped with the same radiation appears to be absolutely necessary because otherwise the emission processes in the crystals
would be completely independent and no interference could be observed. One might also ask whether the phase correlations generated between the pump waves by the first beamsplitter are sufficient to
generate the phase correlations between the signal photons that are needed for interference. The process of parametric fluorescence is definitely a spontaneous process! Indeed, the second condition,
that the paths of the idler waves coincide, is necessary to guarantee the appearance of interference. Let us look at the problem from a classical viewpoint. The crucial point is that, in the
parametric process, the phases of the participating waves are related to each other in a certain way. Within the classical description we have to imagine that the crystal is illuminated by an intense
pump wave together with a weak idler wave (or a weak signal wave) with a random phase. In contrast to quantum mechanics, where vacuum fluctuations of the signal and the idler wave are enough to
“initiate” parametric fluorescence, we need in the classical theory a real wave (whose intensity can, in principle, be arbitrarily small) as a kind of nucleus, because the real wave must act jointly
with the pump wave to generate in the medium a non-linear polarization oscillating at the signal wave frequency and hence acting as a source for the signal wave. (With the formation of a signal wave
the idler wave is also amplified.) The phase relation in question comes about in the following way. Let us denote by ϕp and ϕi the pump and idler phases, respectively. The polarization induced by
these two waves oscillates with the phase difference ϕp − ϕi , and because it is the source term for the emitted signal wave its phase is transferred to that wave (apart from a π/2 jump). The phase
of the signal is hence given by
\[ \varphi_s = \varphi_p - \varphi_i - \frac{\pi}{2}. \tag{7.17} \]
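The consequences of this phase relation for two crystals can be illustrated with a short Monte Carlo sketch (a classical caricature, ignoring propagation delays; all variable names are mine): with a single shared idler wave the random idler phase drops out of the signal phase difference, while independent idler phases randomize it and wash out the fringes.

```python
import numpy as np

# Classical phase bookkeeping of Eq. (7.17): phi_s = phi_p - phi_i - pi/2
# in each crystal, with a common, phase-stable pump.
rng = np.random.default_rng(1)
N = 10_000
phi_p = 0.0

# Case 1: the idler paths coincide -> one idler wave, one random phase per run.
phi_i = rng.uniform(0, 2 * np.pi, N)
diff_shared = (phi_p - phi_i - np.pi / 2) - (phi_p - phi_i - np.pi / 2)
# The random idler phase cancels identically: phi_s2 - phi_s1 = 0 in every run.

# Case 2: separated idler paths -> two independent random idler phases.
phi_i1 = rng.uniform(0, 2 * np.pi, N)
phi_i2 = rng.uniform(0, 2 * np.pi, N)
diff_indep = (phi_p - phi_i2 - np.pi / 2) - (phi_p - phi_i1 - np.pi / 2)

# Fringe visibility = |<exp(i * phase difference)>| over many runs.
print(abs(np.exp(1j * diff_shared).mean()))  # 1.0: fixed phase difference
print(abs(np.exp(1j * diff_indep).mean()))   # ~0: washed out
```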
This relation applies to the processes taking place in both crystals. Assuming now that the propagation directions of both idler waves coincide, we deal with just a single idler wave. The statistical
nature of its phase does not disturb the formation of a well defined phase relation between the signal waves, which can hence be made to interfere. Writing Equation (7.17) for the crystal C1 at a
time t1 we see this quite clearly. The instantaneous phases propagate with the speed of light, hence the idler wave phases at C1 and C2, respectively, obey the simple relation \( \varphi_i^{(2)}(t) = \varphi_i^{(1)}(t - \tau_{12}) \), where τ12 is the propagation time taken by the light to travel from C1 to C2. In an analogous way, we can count back the pump wave phases at the crystal
locations to the pump wave phase before the beamsplitter; we may write \( \varphi_p^{(\mu)}(t) = \varphi_p^{(0)}(t - \tau_{0\mu}) \), where τ0µ is the propagation time^6 between the beamsplitter and the crystal Cµ (µ = 1, 2). According to Equation (7.17), we can write
\[ \varphi_s^{(1)}(t_1) = \varphi_p^{(0)}(t_1 - \tau_{01}) - \varphi_i^{(1)}(t_1) - \frac{\pi}{2}, \tag{7.18} \]
\[ \varphi_s^{(2)}(t_2) = \varphi_p^{(0)}(t_2 - \tau_{02}) - \varphi_i^{(1)}(t_2 - \tau_{12}) - \frac{\pi}{2}, \tag{7.19} \]
from which the signal wave phase difference follows as
\[ \varphi_s^{(2)}(t_2) - \varphi_s^{(1)}(t_1) = \varphi_p^{(0)}(t_2 - \tau_{02}) - \varphi_p^{(0)}(t_1 - \tau_{01}) - \left[ \varphi_i^{(1)}(t_2 - \tau_{12}) - \varphi_i^{(1)}(t_1) \right]. \tag{7.20} \]
As could have been expected, the correlation between the signal waves is most pronounced when the two times differ by exactly the propagation time between C1 and C2 (t2 = t1 + τ12). As in any
conventional interference experiment the coherence length offers additional freedom for the interference observations. (The coherence length of the pump wave is much larger than that of the signal
and idler waves, so we do not need to worry about it.) In the experiment, the expected periodic intensity dependence on the path difference of the two signal waves was indeed observed. (This was
achieved by repositioning BS2 .) It is without doubt that the correct way of describing this experiment must use the language of quantum mechanics. How do we discuss the problem quantum mechanically?
First of all, we describe the pump wave classically, since it is an intense laser wave, and we represent it by an amplitude stabilized plane wave. Because the coherence length of laser radiation is
large, another idealization is allowed, namely the phases of the two pump waves can be considered to be the same at the positions of the two crystals and, in addition, constant in time. The wave
function of the radiation field emitted by each of the crystals is, in the lowest order of perturbation expansion (Ou, Wang and Mandel, 1989),
\[ |\psi\rangle = |0\rangle_s |0\rangle_i + \beta A\, |1\rangle_s |1\rangle_i, \tag{7.21} \]
where β is a (positive) constant proportional to the non-linear susceptibility of the crystal and A is the complex pump wave amplitude. By doing this we have idealized the signal and idler waves as
single-mode fields; this is justified by the fact that in the experiment a frequency filter was placed in front of the signal detector, making the signal waves quasimonochromatic. Due to the energy
conservation for parametric processes expressed by Equation (6.8) and the sharp frequency of the pump wave, the same applies to the idler waves. (Footnote 6: any possible phase jump caused by beamsplitting is included in the propagation time.) Equation (7.21) indicates that the information about the pump wave phase is “conserved” also in the quantum mechanical wave function. Since in the
discussed interference experiment the two idler modes coincide (i 1 = i 2 = i), the total wave function |ψtot is not simply a product of two wave functions of the type (7.21), but takes the following
form (under the assumption made above about the phase of the pump wave):
\[ |\psi_{\mathrm{tot}}\rangle = |0\rangle_{s1} |0\rangle_{s2} |0\rangle_i + \beta A \left( |1\rangle_{s1} |0\rangle_{s2} |1\rangle_i + |0\rangle_{s1} |1\rangle_{s2} |1\rangle_i \right). \tag{7.22} \]
The vacuum term in the above equation is irrelevant for the photoelectric detection, as has already been mentioned. Only the second term is of importance for the description of the experiment. The
common factor \( |1\rangle_i \) of the two sum terms in the bracket is of no importance for the measurement on the signal waves, and so we come to the conclusion that the superposition
\[ |\psi\rangle = |1\rangle_{s1} |0\rangle_{s2} + |0\rangle_{s1} |1\rangle_{s2} \tag{7.23} \]
is responsible for the appearance of interference. However, this is exactly the same form of wave function that we encountered in the case of self-interference of a photon (see Equation (7.6)) and in
the case of the interference of independent photons (see Equation (7.7)), which implies that the interference principle is always the same: interference is possible due to the fundamental
impossibility in finding out which way the detected photon went, i.e. which source emitted it. In contrast, when such information can be obtained, the interference pattern will disappear. The
experiment can illustrate this in an impressive way. The “birthplace” of the detected photon (C1 or C2 ) can be determined indirectly by performing a coincident measurement on the corresponding idler
photon. Observing, for example, that the idler photon has emerged from C1 implies that the same must apply to the signal photon. To be able to make such a measurement, we have to modify the
experimental setup shown in Fig. 7.6. Either we have to place a detector into the idler path between the two crystals (assuming the detector to be ideal, we can conclude from its non-response that
the idler was emitted from C2 and hence the second detector is unnecessary), or we misalign the crystals so that the idler paths become separated, enabling measurements by two separate detectors. In
fact, the first setup does not even require a readout of the detector signal, it is enough to block the idler path by inserting an absorber. In the second case, the positioning of detectors is not
required; even the insertion of blockers into the optical paths is unnecessary. The mere “threat” of performing such a measurement at any time is sufficient to make the interference disappear. This
is easily seen theoretically. From the classical viewpoint, in both cases the phase correlation between both idler waves is destroyed – we have to deal with
two idler waves, instead of a single one, with randomly fluctuating phases, which, according to Equation (7.20), implies a similar behavior in the phase difference of the two signal waves. In the
quantum mechanical description we have to work with two different idler modes, and Equation (7.22) is replaced by
\[ |\psi_{\mathrm{tot}}\rangle = |0\rangle_{s1}|0\rangle_{s2}|0\rangle_{i1}|0\rangle_{i2} + \beta A \left( |1\rangle_{s1}|0\rangle_{s2}|1\rangle_{i1}|0\rangle_{i2} + |0\rangle_{s1}|1\rangle_{s2}|0\rangle_{i1}|1\rangle_{i2} \right), \tag{7.24} \]
from which it is easily seen that the detector signal is no longer described by the superposition state in Equation (7.23) but by the corresponding mixture of states. It does not contain any phase
information, and thus indicates that no interference takes place. In the experiment by Zou et al. (1991), the interference pattern was destroyed by blocking the idler path. These researchers
proceeded in fact in a more clever way. Instead of blocking the idler wave emitted from C1 completely, they attenuated it gradually and observed a proportional decrease in visibility of the
interference pattern. The particular appeal of this experiment is that the destruction of the interference pattern is accomplished by affecting the idler waves only, leaving the signal waves
undisturbed. Finally, the experiment offers a good opportunity to introduce the concept of a “quantum eraser” (up to now it has remained as a Gedanken experiment; see Kwiat, Steinberg and Chiao
(1994)). The basic idea is as follows. The interference is first destroyed by obtaining information about the path of a self-interfering photon. When we “erase” this information, the interference
pattern appears again. In our particular case we can extract the information about the path of the signal photon by inserting a λ/2 plate between the two crystals into the path of the idler wave i 1
(for aligned crystals). The polarization of the idler wave emitted by the first crystal is rotated by 90◦ , and this wave can therefore be easily distinguished from the idler wave i 2 emitted by the
second crystal. The determination of the polarization is realized with a polarizing prism (with detectors at each output) oriented such that its transmission directions coincide with the polarization
directions of the two waves. The interference pattern which was present before thus disappears. The reason for this is clear: the two idler waves are statistically independent. How do we eliminate
(“erase”) the which-way information? It is surprisingly simple: we just rotate the polarizing prism by 45◦ . In each output the projections of the electric field strengths of the two idler waves onto
the respective transmission directions are superposed, and we cannot conclude anything about the polarization state of the incident field and hence its place of origin. When we rotate the polarizing
prism nothing happens to the signal photon, which was emitted a long
time before, and probably does not exist anymore, having been absorbed by the detector. Hence we do not see a “revival” of the interference pattern. However, we can observe an interference pattern
when conditional measurements are performed. We mean by this the following. We select those clicks made by the signal detector for which also, for example, the detector in the first output of the
polarizing prism responded simultaneously. These clicks form an interference pattern, but only half of the signal photons contribute to its build up. The other half are recorded through coincidence
measurements with the detector in the other output of the polarizing prism. The interference pattern is, in this case, shifted by half a fringe period from the one obtained previously. The
superposition of the two patterns leads to its destruction – a maximum always meets a minimum. This must be so: if the summed pattern still showed fringes, the readings from the detectors – and hence the detectors themselves, including the polarizing prism – could be dispensed with, and interference would appear without any erasure of the which-way information. How can the selection carried out on the idler wave influence the signal wave? There cannot be an action back onto the emission process which
has already ended! The key to an understanding of this problem is the correlation between the phase relation between the two idler waves and the phase relation between the signal waves. (See Equation
(7.19), where \( \varphi_i^{(1)}(t_2 - \tau_{12}) \) should be replaced by the original value \( \varphi_i^{(2)}(t_2) \).) The phase relation between the idler waves decides which output port of the polarization prism is taken by the
idler photon. According to classical optics, for the special value 0 of the phase difference all the incident energy is concentrated in one of the outputs, while for a phase difference of π all the
energy is concentrated in the other output. Hence, a selection of the relative idler phase is connected with the measurement of the idler photon, and because this phase was already fixed (randomly)
in the emission process, we can make in this way a post-selection of the relative signal phase. This makes the appearance of interference at least qualitatively understandable and a change of the
relative phase by π just leads to the shift of the interference pattern by half a fringe period, as predicted by quantum mechanics. Finally, let us mention that atomic optics, which has undergone
significant experimental progress in recent years, offers a very elegant example of the incompatibility between interference and which-way information. Due to the particle–wave dualism, material
particles, such as electrons, neutrons or atoms, are also able to interfere with themselves. It is indeed possible to set up diffraction and interference arrangements for atomic beams based on
optical models (Sigel, Adams and Mlynek, 1992). The role of the light wavelength is taken over by the de Broglie wavelength \( \lambda_{\mathrm{dB}} = h/p \), where h is Planck’s constant and p is the particle momentum.
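As a quick numerical illustration (the atomic species and speeds below are my own choices, not taken from the text), the de Broglie wavelength grows as atoms are slowed, which is why the laser-based deceleration mentioned next matters for atom interferometry:

```python
import math

h = 6.62607015e-34  # Planck constant, J s

def de_broglie_wavelength(mass_kg, speed_m_s):
    """De Broglie wavelength h / p of a massive particle."""
    return h / (mass_kg * speed_m_s)

# Illustrative numbers: a sodium atom (m ~ 3.8e-26 kg) at a thermal
# speed of 1000 m/s versus one slowed to 1 m/s by radiation pressure.
m_na = 3.8e-26
print(de_broglie_wavelength(m_na, 1000.0))  # ~1.7e-11 m: far below optical scales
print(de_broglie_wavelength(m_na, 1.0))     # ~1.7e-8 m: much easier to resolve
```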
Monochromaticity now means well defined velocity. The corresponding condition can be satisfied with high accuracy using the radiation pressure of an intense laser wave, which makes it possible,
depending on the frequency detuning,
to accelerate or slow down atoms. Atoms offer the great advantage of having an internal structure that allows, in particular, for selective excitation leading to spontaneous emission. The result is
that the ability to interfere is either lost, or at least reduced (Sleator et al., 1992). Indeed, a (possible) measurement on the emitted photons using a microscope would enable us to localize the
emission center, i.e. the atomic center of mass (with precision limited by the wavelength of the emitted light). When the error of the position measurement is smaller than the distance between the
interfering beams, we know the trajectory the atom had taken. Also in this case it is not necessary to perform the measurement. We can use the following argument (Paul, 1996). The spontaneous
emission causes, through the recoil of the atom, a mechanical effect on the atomic center of mass motion. Let us consider for the moment only those events where the photon is emitted in a certain
direction (the environment acts as a detector); then the atomic propagation direction becomes tilted (compared with the original horizontal direction). This causes an additional phase difference
between the partial beams (let us consider, for instance, the interference on a double slit), resulting in a shift of the interference pattern. The total (final) interference pattern is composed from
individual patterns shifted in different ways (corresponding to different emission directions of the photon), leading in every case to a deterioration of its visibility and, in the worst case, to its
disappearance. In contrast to the aforementioned optical experiment, the interfering particle suffers a massive disturbance so that the deterioration in its ability to interfere is not surprising.
However, the atomic center of mass motion can be influenced in a much more subtle way, namely through microwave excitation of hyperfine levels. The transferred momentum is too small to influence
noticeably the center of mass motion. Nevertheless, a clever manipulation through microwave pulses leads to the destruction of interference (Dürr, Nonn and Rempe, 1998). The information regarding
which of two possible paths the atom had taken was stored in hyperfine structure states. The interference destruction was ultimately based on the fact that an atom reflected from a beamsplitter (realized by an intense standing laser field of appropriately chosen frequency) suffered a π phase shift exactly when it was in its lower hyperfine level.
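The superposition-versus-mixture distinction that runs through this section (compare Equations (7.23) and (7.24)) can be made concrete in the one-photon subspace of two modes. The sketch below is my own minimal model, not the authors' calculation: the single-photon amplitudes are sent through a 50:50 beamsplitter, and the detection probability at one output shows fringes for the superposition but stays flat for the mixture.

```python
import numpy as np

# 50:50 beamsplitter acting on the one-photon amplitudes (mode 1, mode 2).
BS = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def detect_prob_superposition(theta):
    # |psi> = (|1,0> + |0,1>)/sqrt(2), phase theta on the second path.
    amp = np.array([1.0, np.exp(1j * theta)]) / np.sqrt(2)
    return abs((BS @ amp)[0]) ** 2            # (1 + cos theta) / 2

def detect_prob_mixture(theta):
    # 50/50 statistical mixture of |1,0> and |0,1>: the phases never meet.
    p10 = abs((BS @ np.array([1.0, 0.0]))[0]) ** 2
    p01 = abs((BS @ np.array([0.0, np.exp(1j * theta)]))[0]) ** 2
    return 0.5 * p10 + 0.5 * p01              # always 1/2

thetas = np.linspace(0, 2 * np.pi, 9)
print([round(detect_prob_superposition(t), 3) for t in thetas])  # fringes, 1 down to 0 and back
print([round(detect_prob_mixture(t), 3) for t in thetas])        # flat at 0.5
```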
7.6 Intensity correlations

The experimental technique described in Section 7.4 allows for direct observation of interference between strongly attenuated laser beams. There is, however, also an
indirect method which we would like to describe in more detail. The basic idea behind the method is that two counters separated by a certain distance indicate the presence of an interference pattern,
even when the pattern itself is running back
and forth. First, let us assume that the detectors are positioned in such a way that the second counter is shifted – along the direction orthogonal to the (expected) interference fringes denoted as
the z direction – with respect to the first one by a whole fringe separation (or an integer multiple of it); then the counting rates will show strong correlations. This is not surprising, as the
intensity incident onto each counter is the same, which also implies (for each moment) identical response probabilities. (It is as if both counters were at the same position.) On the other hand, when
the separation of the counters, Δz = z2 − z1, is an odd multiple of half of the interference fringe separation, we will observe strong “anticorrelations.” When the response probability of one of
the counters becomes large, it is always small for the other one. The reason is clear: in such a case, one of the counters is near to an interference maximum while the other is near to an
interference minimum. A measure of the strength of the discussed anticorrelations is found experimentally in the following way. In a time interval of a prescribed length, we record the number of
photons n 1 and n 2 detected by the first and the second counter, respectively; after repeating the experiment many times we obtain a data sequence with varying values of n 1 and n 2 . From these
data the so-called correlation coefficient k is determined, which is defined as follows:
\[ k = \frac{\overline{\Delta n_1\,\Delta n_2}}{\sqrt{\overline{\Delta n_1^2}\;\overline{\Delta n_2^2}}}. \tag{7.25} \]
The bar stands for the average over the measured data, and \( \Delta n_j \equiv n_j - \overline{n_j} \) (j = 1, 2) is the deviation of the individual measured value \( n_j \) from the mean value \( \overline{n_j} \). The variance \( \overline{\Delta n_j^2} \) is, as usual, defined as \( \overline{\Delta n_j^2} = \overline{(n_j - \overline{n_j})^2} \). The parameter k takes its maximum value for Δz = nΛ (n = 1, 2, . . .), where Λ is the fringe separation, and for Δz = (n + ½)Λ it takes its minimum value. The latter value of k is, in contrast to the first, negative (hence the term anticorrelation). That this is indeed the case is shown by a simple classical analysis. We will assume that within the measurement interval the two interfering beams can be
described by plane waves with well defined amplitudes and phases. To simplify the problem further, we will assume that both amplitudes are the same and constant and that only the phases change
(uncontrollably) from one measurement to the other. The response probability of the photodetector is, in the classical description, proportional to the intensity, and we can use intensities instead
of photon numbers. According to Equation (3.14), the intensities at positions z1 and z2 of the two detectors are (for equal beam intensities I0)
\[ I_1 = 2I_0 \left[ 1 + \cos(2\pi z_1/\Lambda + \varphi) \right], \tag{7.26} \]
\[ I_2 = 2I_0 \left[ 1 + \cos(2\pi z_2/\Lambda + \varphi) \right], \tag{7.27} \]
where we have replaced the z component of k by 2π/Λ. For the mean value of I1 I2 with respect to ϕ, we find
\[ \overline{I_1 I_2} = 4I_0^2 \left\{ 1 + \tfrac{1}{2} \cos[2\pi(z_1 - z_2)/\Lambda] \right\}. \tag{7.28} \]
The variable \( \overline{\Delta I_1\,\Delta I_2} \) can be written as
\[ \overline{\Delta I_1\,\Delta I_2} \equiv \overline{(I_1 - \overline{I_1})(I_2 - \overline{I_2})} = \overline{I_1 I_2} - \overline{I_1}\;\overline{I_2}. \tag{7.29} \]
Inserting Equation (7.28) for \( \overline{I_1 I_2} \) and the value 2I0 for \( \overline{I_1} \) and \( \overline{I_2} \) following from Equations (7.26) and (7.27), we obtain the expression
\[ \overline{\Delta I_1\,\Delta I_2} = 2I_0^2 \cos[2\pi(z_1 - z_2)/\Lambda], \tag{7.30} \]
confirming all the above statements about k. Measurements of this kind were realized by Pfleegor and Mandel (1967, 1968) using two independent, strongly attenuated laser beams. They solved the
problem of simultaneous photon counting at different places in a very elegant way by using a set of glass plates which were cut and arranged such that light incident on the first, third, fifth, etc.
plate was directed towards the first detector, and light incident on the second, fourth, sixth, etc. plate was directed towards the second detector. To detect the desired anticorrelations, they had
to measure n 1 and n 2 only within such time intervals for which the interference pattern changed very little. Information about the speed of drift of the interference pattern is naturally given by
the frequency difference between the two interfering beams (see footnote 3 in Section 7.4). The procedure then was as follows. The unattenuated laser beams generated a beat signal in a
photomultiplier, and this was used to control an electronic shutter; it was opened for 20 µs only when the beat frequency dropped below 50 kHz. During such a time interval each detector registered,
on average, about five photons. The anticorrelation effect was indeed found in accordance with the theory; in particular, it turned out that the effect was maximal under the condition that the
thickness of the plates was equal to half the fringe separation. In this way another, though indirect, proof of the interference between independent photons was given. Similar conditions are present
when the lasers are replaced by localized, excited atoms, each spontaneously emitting a photon. Certainly, both emission processes are mutually independent; in particular, there is no phase relation
between the emitted waves. Under such circumstances anticorrelations of the intensity should be observable when adjusting the distance between the counters, Δz, to half the fringe separation (with
respect to the fictitious interference pattern generated by classical emitters radiating with a fixed phase difference). The measurement would
proceed in such a way that coincidences, i.e. those events when both detectors simultaneously respond, are counted. The principle of the effect, from a classical point of view, is easy to understand:
when we identify the photons with classical waves with random phase values, then for the ensemble average – obtained from many repetitions of the experiment – the above Equation (7.28) applies. This
implies that the coincidence rate (the number of registered coincidences per second) will decrease considerably when the distance between the two detectors changes from Λ to Λ/2. Surprisingly, the
effect becomes even more pronounced in the quantum mechanical description. The quantum mechanical treatment (Mandel, 1983; Paul, 1986) results in a modification of Equation (7.28), the factor of
one-half in front of the cosine being replaced by unity (and the prefactor of four by two). While the classical theory predicts a drop of the coincidence rate to half of its mean value, according to
quantum theory the coincidence rate completely vanishes for Δz = (n + 1/2)Δ (n = 0, 1, 2, . . .). It is impossible to find two photons at a distance of Δ/2 (orthogonal to the fictitious interference
fringes). This is indeed a specifically quantum mechanical effect, evading classical understanding. The surprising quantum mechanical result originates from the correct description of the physically
obvious fact that “When two photons have been detected, both atoms had to deliver their energy because an atom can emit only one photon,” while the classical description cannot rule out that the
counts are due to the same atom. From this point of view, it is essential that we deal with exactly two atoms. The quantum mechanical description indeed goes over into the classical one for light
sources consisting of many excited atoms. The discussed non-classical effect disappears also for the case when the numbers of atoms in both sources fluctuate according to a Poisson distribution. Then
the mean atom number may be made arbitrarily small. The described behavior of two spontaneously emitted photons must be completely incomprehensible within a naive photon picture. Imagining a photon
as a bullet emitted in a certain (random) direction, we must admit that there should obviously be “communication” between the photons ensuring that no “mishap” takes place whereby the photons hit two positions Δ/2 apart, so breaking the rules of quantum mechanics. The described experiment does not allow us to determine the emission directions because we cannot know, in principle, from which atoms the detected
photons have come. It is again the ignorance about the path – in this case of the two photons – that makes interference possible (in the sense of the aforementioned correlations). The correct
description of the experiment requires a wave representation: the intensity correlations result from the superposition of the waves emitted by the atoms. From the total field thus formed, each of the
detectors extracts an energy amount hν (otherwise the event is not counted), and the question regarding the origin of the energy does not make sense.
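The two predictions can be compared in a short numerical sketch. Taking the fringe phase to be 2πΔz/Δ (an assumed parametrization, with Δ the fringe separation), the classical rate has, as stated above, a prefactor of four and a factor of one-half in front of the cosine, while the quantum mechanical rate has a prefactor of two and the full cosine. The function names below are illustrative, not taken from the text:

```python
import math

def classical_rate(dz, fringe):
    # Classical prediction (cf. Equation (7.28) as described in the text):
    # prefactor 4, factor 1/2 in front of the cosine.
    return 4.0 * (1.0 + 0.5 * math.cos(2.0 * math.pi * dz / fringe))

def quantum_rate(dz, fringe):
    # Quantum mechanical prediction: prefactor 2, cosine with full weight.
    return 2.0 * (1.0 + math.cos(2.0 * math.pi * dz / fringe))

fringe = 1.0  # fringe separation in arbitrary units

# At half the fringe separation the classical rate drops to half its
# mean value (4 -> 2), while the quantum mechanical rate vanishes.
print(classical_rate(0.5 * fringe, fringe))  # 2.0
print(quantum_rate(0.5 * fringe, fringe))    # 0.0
```

The sketch makes the contrast explicit: classically the coincidence rate never drops below half its mean, whereas quantum mechanically it vanishes completely at the half-fringe positions.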
7.6 Intensity correlations
Fig. 7.7. Measurement of intensity correlations on two photons simultaneously generated by parametric fluorescence. C = non-linear crystal; F = frequency filter; D = detector.
In practice, there is not the slightest chance that the experiment would be possible. The emission takes place in the form of dipole waves, and because of this the probability of both detectors
responding is extremely small. This difficulty can be overcome by using directed emission. As described, parametric fluorescence is a possible option at our disposal. An experiment of this kind was
successfully carried out by Ghosh and Mandel (1987). The observation was made on two photons (the signal and idler photons) generated simultaneously by an incident strong ultraviolet laser beam in a
non-linear crystal, which the photons left along slightly different directions (Fig. 7.7). They were made to interfere – in the sense of an intensity correlation – by two mirrors. The “interference
pattern” was magnified using a lens to make the detection easier. Two movable optical glass plates defined the observation points z1 and z2. The incident light was directed to each detector by the plates. The coincidence was then determined electronically. The coincidence rate was modulated in dependence on the relative detector distance z2 − z1 as expected. The recorded measurement data –
only a few events per hour were detected – were in quantitative agreement with the quantum mechanical prediction when it was taken into account that the observation points are determined with a
limited precision given by the thickness of the glass plates. As mentioned above, the intensity correlations are formed by superposing the field strengths of two independently generated waves on the
detector. Such a superposition can also be achieved with the help of an “optical mixer.” This is simply a beamsplitter with both input ports being used (Fig. 7.8): one of the fields is sent into the
first port, and the other is sent into the second. In particular, a signal photon and an idler photon can be mixed in this way. The outgoing light is incident on two separate detectors. According to
quantum theory, this mixing leads to a surprising result: the photons form a pair again (see Section 15.4). They leave the beamsplitter through the same output port, and it is naturally left to them
to decide which of the output channels they use in a single measurement. For coincidence detection this means, however, that we find with certainty no coincidences!
Fig. 7.8. Beamsplitter used as an “optical mixer.” 1 and 2 are the incoming beams; 3 and 4 are the outgoing beams.
We have tacitly assumed an ideal alignment of the setup which guarantees that the photons indeed “meet” at the beamsplitter. In the opposite case, one photon takes no notice of the other and is
transmitted or reflected with 50% probability. As a result, we have enough events where one of the photons hits the first detector and the other hits the second detector; we measure these
coincidences. Their appearance indicates that the wave packets (pulses), as the photons appear to us in this kind of experiment, no longer overlap on the beamsplitter. We have a simple opportunity to
measure the effective length of a wave packet: displacing the beamsplitter (Fig. 7.9) makes the lengths of the two light paths different, and the coincidence rate rises from its minimal value zero to
a finite value and stays constant. The width at half maximum of the enveloping curve – as a function of the beamsplitter position – gives us a measure of the wave packet’s length, i.e. its spatial
extension. This experiment was in fact performed by the American scientists Hong, Ou and Mandel (1987). For the time spread of the photon they found a value of about 50 ps, which was determined
essentially by the transmission width of the frequency filters positioned in front of the detectors (according to the general relation Equation (3.22) between the frequency width and the time spread
of a wave packet). The technical measurement limit for the determination of the pulse length was in fact much lower (about one femtosecond). It is determined by the precision with which the
displacement of the beamsplitter can be measured. The point of this experiment is that it allows the measurement of extremely short times with “inert” counters. In fact, the integration time is
required only to be shorter than the distance between successive pulses in order to ensure that only one photon pair is registered.
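The dependence of the coincidence rate on the beamsplitter displacement can be sketched with a simple model. Assuming, purely for illustration, a Gaussian dip of the (normalized) coincidence rate around zero relative delay — the actual shape depends on the filter profile — and taking the 50 ps time spread quoted above as the width parameter, the half-maximum width of the dip can be extracted numerically; the envelope form and all names are assumptions, not the authors' analysis:

```python
import math

def coincidence_rate(delay, sigma):
    # Hypothetical Gaussian model of the coincidence dip: zero coincidences
    # at perfect wave-packet overlap, constant rate at large relative delay.
    # The rate is normalized to its large-delay value.
    return 1.0 - math.exp(-((delay / sigma) ** 2))

sigma = 50e-12  # width parameter taken from the 50 ps time spread quoted above

# Extract the width at half maximum of the dip by scanning the delay.
step = sigma / 1000.0
delay = 0.0
while coincidence_rate(delay, sigma) < 0.5:
    delay += step
fwhm = 2.0 * delay
print(fwhm)  # close to 2*sigma*sqrt(ln 2), about 83 ps for sigma = 50 ps
```

This is the essence of the measurement: the enveloping curve of the coincidence rate versus beamsplitter position directly yields a width, and hence the spatial (or temporal) extension of the wave packet.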
7.7 Photon deformation
Fig. 7.9. Experimental setup for the measurement of the spatial extension of signal and idler photons. C = non-linear crystal; BS = beamsplitter; F = frequency filter.
In Section 7.2 we explained in detail that single photons exhibit interference phenomena which are described correctly by the classical wave theory. According to this theory
conventional interferometers work well even when single photons (in principle with arbitrarily large separation) are sent through them. However, as we explained in Section 3.4 for the example of the
Fabry–Perot interferometer, the frequency narrowing goes hand in hand with a lengthening of the incident light pulse. Single photons are already subjected to this process. This means that the photon
– considered as a wave – is deformable. The change in form is not limited – recall the Fabry–Perot interferometer – to a stretching of the wave packet. Only a frequency section of the incident wave
is transmitted; the remaining part is reflected. (We need not take into account events when the photon is absorbed in one of the silver layers because then it is “eliminated completely.”) Even though
both parts of the wave can become arbitrarily far apart as time passes, they must be considered together as a wave phenomenon associated with a single photon. Only after a measurement has taken place
is the photon found in one or the other partial beam. If photons – as waves – can become longer, there is no doubt that it must be also possible to make them shorter. This can be accomplished with a
fast shutter which can cut parts of the wave off or out. An impressive experiment of this form was performed by Hauser, Neuwirth and Thesen (1974), who “chopped” γ-quanta using a very fast rotating
wheel with absorbing spokes and observed that
the resulting broadening of the spectral distribution was in excellent agreement with the prediction of classical wave theory (see Equation (3.22)). We encountered a situation similar to spectral
decomposition in the direction selection of photons. Let us consider a photon having the form of a spherical wave incident on a reflecting screen with an aperture. The corresponding wave is split
into a reflected part and a transmitted part. Because the two parts can be made to interfere at any time, it would certainly be wrong to imagine the photon to be “in reality” only in one of the
partial beams while the other is “empty.” When we perform a measurement on one of the partial beams (with a photodetector), either we detect a photon or we do not. In the latter case – assuming ideal
detection efficiency of the measurement apparatus – we can conclude that the photon is present in the other beam. It is important to point out in this context that the measurement process in the
quantum mechanical sense is not necessarily associated with sophisticated and complicated apparatus. In fact, for a photodetector, the elementary process of electron release from the atom should be
viewed as the actual measurement act. More generally, any absorption process, provided it is irreversible (the absorbed energy is transferred to the surroundings and thus dissipated), can pass for a
measurement process. For example, when, in the spectral decomposition of light with a Fabry–Perot interferometer, the reflected light is incident on an absorbing screen, a measurement takes place
which determines whether the photon was reflected or not. This measurement “allowed survival of” only those photons which took the path through the interferometer and hence have a sharper frequency
than at the beginning. Similarly, the propagation direction is selected, in the case of a photon, in the form of a spherical wave, incident on an absorbing screen with a small aperture (Renninger,
1960). It might seem paradoxical that the physical properties of the outgoing photons were changed even though the screen obviously did not interact with them. However, it would be wrong to state
that nothing happened under such conditions. In such a case – let us consider direction selection – the photon would have to possess a well defined propagation direction before it was incident on the
screen. This is, however, in contradiction to the fact that we can perform interference experiments with the original photons, for example by following the example of Young, who illuminated a screen
with two holes and observed the transmitted light on an observation screen. The question of how the photon is affected by the screen when it passes the hole remains unanswered by quantum mechanics.
It just “manages,” with the aid of simple axioms, to predict exactly the experimentally verifiable effects of a measurement apparatus – an achievement coming close to a wonder in the face of the
complexity of the measurement process! An essential role in this scenario is played by the “collapse of the wave function” describing the transition from the possible
to the factual. To make it explicit using the example of the direction selection: the spherical wave “contains” all possible propagation directions, and the observation apparatus forces a “decision”
regarding which of them becomes real. (Note that the position on the screen where the photon absorption took place can be determined in principle, and we can later – for a known position of the
emission center – determine the propagation direction of the particular photon.) The manipulation possibilities described in Section 6.7 are, in the case of parametric fluorescence, simply
astonishing. They are based on the fact that signal and idler photons are in an “entangled” quantum mechanical state. Through the measurement, for example on the corresponding idler photon, signal
photons with desired properties such as spatial localization or sharp frequency can be selected. There are no indications that quantum mechanics is just a “temporary solution” – in particular the
real Einstein–Podolsky–Rosen experiments described in Section 11.4 let all hope fade for a refining of the quantum mechanical description by introducing “hidden variables” – and we have, with regret,
to accept the fact that Nature is not willing to “disclose its secrets” on the microscopic level as it does on the macroscopic level.
8 Photon statistics
8.1 Measuring the diameter of stars

As mentioned several times already, the particle character of light is best illustrated by the photoelectric effect. This effect can be exploited in the detection
of single photons by photocounting. The analysis of such counting data allows us, as will be discussed in detail in this chapter, to gain a deeper insight into the properties of electromagnetic
fields. We can recognize the “fine structure” of the radiation field – in the form of fluctuation processes – which was hidden from us when using previous techniques relying only on the eye or a
photographic plate, i.e. techniques limited to time averaged intensity measurements. The credit for developing the basic technique for intensity fluctuation measurements goes to the British
scientists R. Hanbury Brown and R. Q. Twiss, who became the fathers of a new optical discipline which investigates statistical laws valid for photocounting under various physical situations. When we
talk of studies of “photon statistics” it is these investigations that we are referring to. Interestingly enough, it was a practical need, namely the improvement in experimental possibilities of
measuring the (apparent) diameters of fixed stars, that gave rise to the pioneering work by Hanbury Brown and Twiss. Because the topic is physically exciting, we will go into more detail. It is well
known that the angular diameters of fixed stars – observed from Earth – appear to be so small that the available telescopes are not able to resolve the stars spatially. The starlight generates a
diffraction pattern in the focal plane of the telescope, the form of which is determined by the aperture of the telescope and has nothing to do with the real spatial extension of the star. A solution
to this problem was required before any progress could be made. An idea by Fizeau led to a practical solution in the form of Michelson’s “stellar interferometer.” In this apparatus, the starlight
falls onto two mirrors, M1 and M2, a distance d apart, and the light is redirected by mirrors M3 and M4 into a telescope and focused
Fig. 8.1. Michelson’s stellar interferometer. O = diaphragm; M1, M2 = movable mirrors; M3, M4 = fixed deviating mirrors.
in its focal plane. In addition, filters are inserted into the optical path to guarantee that only light of a certain frequency is observed (see Fig. 8.1). To understand how the stellar
interferometer works, let us assume, at first, that the source is point-like. The large distance between the stars and the Earth means that the light beams incident on M1 and M2 are essentially
parallel. As in the Michelson interferometer, an interference pattern in the form of straight line, equidistant fringes appears in the focal plane of the telescope. The fringes are formed because M1
and M2 are orientated at an angle not exactly 45◦ with respect to the telescope axis. Due to this we are dealing with interference lines of equal thickness, similar to those to be observed on a
wedge. Because we are dealing with a light source of finite spatial extension, we can imagine it as consisting of individual parts, each of which generates by itself an interference pattern of the
described form. It is of importance that usually the interference patterns do not coincide precisely in their positions but are mutually slightly shifted. The reason for this is that two interfering
beams emitted from one part L1 of the light source have an additional path difference Δs compared with two other interfering beams
coming from another part of the light source L2 (imaged on the same point in the focal plane of the telescope). As can be seen from Fig. 8.1, the path difference Δs is given by

Δs = d sin α ≈ dα,   (8.1)
where α is the (very small) angle between the two directions under which the light beams from points L1 and L2 are incident upon the surface of the Earth. The superposition of the interference
patterns related to different parts of the light source does not, according to Equation (8.1), play a significant role as long as the product of the mirror distance d and the maximum value α0 of
angle α, which obviously should be identified with the angular diameter of the star, is small compared with the wavelength λ of the starlight. Moving M1 and M2 further and further apart from one
another causes the visibility of the interference pattern to become worse – due to an increase of the relative shifts between the individual patterns – until no interference fringes can be observed.
As a rough estimate this happens when

dα0 ≈ λ.   (8.2)
For this situation the interference patterns from the right and left edges of the star’s surface coincide, whereas the patterns generated by light emitted from the other parts of the star’s surface
are shifted in all possible ways. Hence the superposition of the individual patterns results in a uniform distribution of brightness – the interference has disappeared. We have already mentioned a
possible way of measuring the star’s diameter; i.e. increasing the mirror separation d in the Michelson stellar interferometer until no interference pattern is observable. Inserting the
experimentally found critical value for d into Equation (8.2), we obtain the value of the star’s angular diameter α0 . A refined version of this simplified approach allows the calculation of the
visibility of the interference pattern as a function of the mirror separation d for a given brightness distribution of the star’s surface (see Mandel and Wolf, 1965). In particular, it can be shown
that Equation (8.2), for the case of a uniformly radiating circular disk, must be corrected as follows:

dα0 = 1.22λ.   (8.3)
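The correction factor in the last equation can be checked numerically. For a uniformly radiating circular disk of angular diameter α0, classical diffraction theory gives the fringe visibility V = |2J1(x)/x| with x = πdα0/λ, and the first zero of the Bessel function J1 (at x ≈ 3.832) reproduces the factor 1.22. The sketch below evaluates J1 from its power series in pure Python and also estimates the mirror separation required for a star of angular diameter 0.02 arc seconds, assuming an illustrative wavelength of 500 nm:

```python
import math

def bessel_j1(x):
    # Power series for the Bessel function J1(x); adequate for small x.
    total, term = 0.0, x / 2.0
    for k in range(40):
        total += term
        term *= -((x / 2.0) ** 2) / ((k + 1) * (k + 2))
    return total

def visibility(x):
    # Fringe visibility of a uniform disk, with x = pi * d * alpha0 / lambda.
    return abs(2.0 * bessel_j1(x) / x) if x else 1.0

# Scan for the first zero of the visibility (first zero of J1).
x, step = 0.1, 1e-4
while visibility(x) > 1e-6 and bessel_j1(x) * bessel_j1(x + step) > 0:
    x += step
# d * alpha0 / lambda at the first zero equals x / pi.
print(round(x / math.pi, 2))  # 1.22

# Mirror separation for alpha0 = 0.02 arcsec at an assumed 500 nm wavelength:
alpha0 = 0.02 / 3600 * math.pi / 180  # angular diameter in radians
lam = 500e-9
d = 1.22 * lam / alpha0
print(round(d, 1))  # about 6.3 m
```

The second result shows why the Michelson technique reached its practical limit: smaller angular diameters demand mirror separations of many meters, with the stability problems described next.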
The Michelson method proved to be very successful, and star diameters down to about 0.02 arc seconds have been determined (Michelson and Pease, 1921; Pease, 1931). An additional increase of the
resolving power, requiring mirror separations of many meters, faces two practical obstacles. First of all, the finite linewidth (ignored up to now in the analysis) of the observed starlight causes
trouble: it decreases the visibility of the interference pattern
when the light paths – going through M1 and M2 , respectively – of the interfering beams uniting in the focal plane of the telescope do not have exactly the same length. Under such circumstances, a
small change in the wavelength causes a shift in the corresponding interference pattern. To eliminate this disturbing effect for a typical bandwidth of Δλ = 5 nm we would have to guarantee the path
difference to be less than 0.01 mm. Such a condition imposed on the mechanical stability of the interferometer – and fulfilled during the duration of the observation usually also involving guiding
the instrument – is absolutely impossible for the required large length of the interferometer arms. Second, the observation of the interference pattern is hindered by so-called atmospheric
scintillations. By this we mean the influence of atmospheric fluctuations, i.e. local air motion altering the air pressure, leading to changes of the air’s index of refraction. These atmospheric
disturbances, for large mirror separations, are statistically independent for the two light beams and cause fluctuations of the path difference, and therefore the position of the interference pattern
changes in time in an uncontrollable way. The difficulties described above were overcome by Hanbury Brown and Twiss, who extended their radioastronomical method developed a few years earlier to the
optical domain. They measured correlations between intensities at the different positions rather than between electric field strengths (which is the essence of any interference experiment). In this
way they freed the experiment once and for all from any disturbances dependent on phase fluctuations, simply because the phase no longer appeared in the measurements. The experiment was arranged in
such a way that the outer planar mirrors in Michelson’s setup were replaced by parabolic mirrors (Fig. 8.2) which focused the incident starlight onto separate photomultipliers (Hanbury Brown and
Twiss, 1956b). The correlations between the photocurrents were measured such that they were each amplified in a narrow band and then multiplied. The measurement signal was the time averaged value
of their product. The photocurrent follows the time fluctuations of the light incident on the photocathode (as explained in Section 5.2), and hence the signal is a measure of the time averaged value
of the product of the light intensities at the positions of the detectors, and thus it reflects the intensity correlations in the radiation field. The photocurrents are combined using normal electric
wires, which eliminates the need for mechanical stability of the interferometer setup. We might ask whether it is really possible to recover the information contained in the visibility of
interference fringes (and determined in the first place by phase relations) from the intensity correlations. In the following, we show that this is indeed possible. Let us once again return to the
Michelson measurement technique. In classical optics the ability to interfere is synonymous with coherence, and so we can say
Fig. 8.2. Hanbury Brown–Twiss stellar interferometer (Ph = photomultiplier; C = correlator; R = reflector; NA = narrow-band amplifier). The product of the two photocurrents is taken and time averaged
in the correlator.
that the Michelson stellar interferometer analyzes the spatial coherence of starlight (transverse to its direction of propagation). In particular, the transverse coherence length is measured, which
is simply the critical mirror separation for which the interference vanishes. Spatial coherence primarily means that the (instantaneous) phase of the electric field strength changes only slightly
within the coherence area. Since the stars are thermal radiators, the phase at a given position changes in an uncontrollable way, but the important point is that the electric field strength in the
neighborhood – within the coherence area – follows precisely these phase fluctuations. The thermal radiation field exhibits not only phase fluctuations but also strong amplitude fluctuations. The
instantaneous amplitude also changes noticeably over finite distances only, and it is – fortunately for the Hanbury Brown–Twiss procedure – a characteristic feature of thermal radiation that the
coherence area for which the phase is approximately constant coincides (apart from minute details) with the spatial domain for which the amplitude is also approximately constant. Observing the time
evolution of the intensities at two different points P1 and P2 lying on a plane (almost) orthogonal to the propagation direction, we find that the instantaneous intensities, in most cases, coincide,
provided the distance d between P1 and P2 is shorter than the transverse coherence length ltrans . When we experimentally find an intensity peak at P1 , then there is a large probability that we will
find the same at P2 , and the same applies to an intensity “dip” (a value below the average intensity).
Roughly speaking, the following relation holds:

ΔI1(t) ≈ ΔI2(t)   (for d ≤ ltrans),   (8.4)

where we have defined the deviation from the time averaged value ⟨I⟩ by ΔI(t) ≡ I(t) − ⟨I⟩ and the subscripts refer to the positions P1 and P2. As already discussed, the correlator used by Hanbury Brown and Twiss ultimately registers the time averaged product ⟨I1(t)I2(t)⟩ which, using Equation (8.4), can be written as (note that ⟨ΔI⟩ = 0)

⟨I1(t)I2(t)⟩ = ⟨I⟩² + ⟨(ΔI1)²⟩   (for d ≤ ltrans).   (8.5)

When the distance between P1 and P2 exceeds the transverse coherence length ltrans the intensities at the two points fluctuate independently, and the relation

⟨ΔI1(t)ΔI2(t)⟩ = 0   (8.6)

holds, which results in

⟨I1(t)I2(t)⟩ = ⟨I⟩²   (for d > ltrans).   (8.7)
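The content of Equations (8.5) and (8.7) can be illustrated by a small Monte Carlo sketch. A polarized thermal field is modeled as a sum of elementary waves with independent random phases; for such light ⟨(ΔI)²⟩ = ⟨I⟩², so within the coherence area the correlator should register about twice the value obtained for independent fluctuations. This is a toy model, not the Hanbury Brown–Twiss analysis itself, and all names are illustrative:

```python
import cmath, math, random

random.seed(1)  # fixed seed so the sketch is reproducible

def thermal_intensity(n_atoms=50):
    # Instantaneous intensity of a superposition of unit elementary waves
    # with independent, uniformly distributed random phases.
    field = sum(cmath.exp(1j * random.uniform(0.0, 2.0 * math.pi))
                for _ in range(n_atoms))
    return abs(field) ** 2

samples = 20000
intens1 = [thermal_intensity() for _ in range(samples)]
intens2 = [thermal_intensity() for _ in range(samples)]

mean_i = sum(intens1) / samples
# d <= l_trans: both detectors see (essentially) the same intensity.
corr = sum(i * i for i in intens1) / samples
# d > l_trans: the two intensities fluctuate independently.
indep = sum(a * b for a, b in zip(intens1, intens2)) / samples

print(corr / mean_i ** 2)   # close to 2, cf. Equation (8.5)
print(indep / mean_i ** 2)  # close to 1, cf. Equation (8.7)
```

The factor of two excess for the correlated case is exactly the signal Hanbury Brown and Twiss exploited; it disappears once the detector separation exceeds the transverse coherence length.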
The decrease in the intensity correlations (with increasing d) allows us to draw conclusions about the coherence length ltrans , and by inserting it into Equation (8.3) in place of d we can find the
star’s angular diameter. Hanbury Brown and Twiss (1956b) estimated the diameter of Sirius in exactly this way. The possibilities offered by the method were further exploited in the early 1960s when
an extended observatory was installed at Narrabri, Australia (Hanbury Brown, 1964). The parabolic mirrors were mounted on railway bogies moving on circular tracks with a diameter of 188 m. This setup
allowed the measurement of star diameters down to 0.0005 arc seconds. The intensity correlations can also be detected using photo counters – and this brings us finally to photon statistics. The
response probability of a photodetector is proportional to the instantaneous intensity on its sensitive surface, and so the number of photons n(t; T) counted within a finite time interval t − T/2 to t + T/2 reflects the intensity of the field. (The integration time T must be chosen to be smaller than the coherence time of the field, otherwise the intensity fluctuations will be averaged out.) The intensity correlations are now determined in such a way that we form the product n1(t; T) n2(t; T) of the registered photon numbers n1(t; T) and n2(t; T) and average over a longer set of
measured data. The same physical information can be obtained by counting coincidences; i.e. we register only those events when both counters respond at the same time. In fact, the probability for
such a type of coincidence is proportional to the time average of I1 (t)I2 (t). With the stellar interferometer technique of Hanbury Brown
8.1 Measuring the diameter of stars
and Twiss an increased number of coincidences is observed when the distance between the two detectors is smaller than the transverse coherence length. When the distance between the detectors exceeds
this critical value, the intensities at the two positions fluctuate independently and the coincidences are purely random (see Equation (8.7)). The result of the observations can be formulated as
follows: for d ≤ ltrans more coincidences will be detected than in the purely random case. There exists a correlation, for d ≤ ltrans, between photons measured at the same time at different points: when
a photon is observed at position P1 , the probability of detecting a photon at P2 is larger than for the completely random case. Such a fact is incomprehensible when we employ a naive photon concept
that excludes wave properties of light. We can imagine that the atoms on the stellar surface radiate independently, thereby each emitting – in any elementary act of emission – a photon. However, if
nothing else happens and the photons fly through space like little balls without affecting each other and then hit the Earth’s surface, it would be impossible to understand how the described
correlations could be formed. A single photon cannot “know” what else is happening in its surroundings. In reality, however, the statistical behavior of the photons is determined by the diameter size
of the stars! These facts can be understood only in the framework of the wave concept of light. A photodetector is influenced by the instantaneous intensity of the electromagnetic field residing on
its sensitive surface. The intensity is determined by the electric field strength, which is given as a superposition of many elementary waves – in principle all the atoms on the star’s surface
contribute – and this is the deeper physical reason why the intensity correlations contain information about the spatial extension of the surface of the star. In fact, the spatial coherence (in the
transverse direction) measured in Michelson’s stellar interferometer comes about by the same mechanism (Fig. 8.1): the electric field strengths at the mirror positions P1 and P2 can be correlated
only because the radiation in both cases is coming from the same atoms. The elementary waves emitted by the atoms can have, as is indeed the case, phases and amplitudes that are fluctuating randomly
and independently. These fluctuations (we assume that the distance between P1 and P2 does not exceed the transverse coherence length) cause the same effect at the two observation points, and due to
this the fluctuations of the total electric field at the two points have “the same beat.” This illustrates once again that we cannot assign an individuality to the photons by ascribing to each of
them a certain “place of birth” (a well defined, though unknown, atom). The conclusion drawn in Section 7.4 in connection with the interference between independent photons was very similar. We can
state quite generally that the naive photon picture always fails when interference is involved.
Fig. 8.3. Hanbury Brown–Twiss experimental setup for intensity correlation measurement. Ph = photomultiplier; BS = beamsplitter; C = correlator.
Finally, let us mention that Hanbury Brown and Twiss (1956a) tested the proposed measurement technique in the laboratory before they made their astronomical observations. As with the intended
astronomical measurements, they analyzed the transverse spatial coherence of a thermal radiation field, namely that of a diaphragm placed before a mercury lamp. Naturally the coherence length is,
under such circumstances, very small, and therefore the two receivers could not be positioned next to each other. Hanbury Brown and Twiss overcame this difficulty in an elegant way by using a
beamsplitter (Fig. 8.3). As expected, they observed a decrease in the intensity correlations when one of the two photomultipliers was displaced sideways from the position corresponding to the mirror
image of the other detector. The experiment provided clear proof of the existence of intensity correlations (for thermal radiation) and became a prototype for later photon statistical experimental
setups. As Rebka and Pound (1957) showed for the first time, such a setup is also well suited to the observation of temporal intensity correlations, and we devote the following section to this topic.
8.2 Photon bunching

The dependence of the spatial intensity correlations on the separation of the detectors discussed up to now is determined by geometric parameters, i.e. the size of the light
source and its distance from the detectors. It can also be expected that time dependent correlations can appear which originate in the time fluctuations of the light intensity (at a fixed position).
The mean time spread of an intensity maximum or minimum for thermal light is approximately equal to the time interval in which
Fig. 8.4. Theoretical dependence of the coincidence counting rate C on the delay time τ for polarized thermal light with a Gaussian spectral profile G(ν) = const. × exp(−(ν − ν0)²/Δ²). Cran = random coincidence counting rate. After Mandel (1963).
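The curve of Fig. 8.4 can be reproduced from the Siegert relation for polarized thermal light, C(τ)/Cran = 1 + |g(1)(τ)|², where g(1)(τ) is the normalized Fourier transform of the spectral profile. The sketch below evaluates this transform for the Gaussian profile by direct summation; the width parameter is an arbitrary illustrative value:

```python
import cmath, math

delta = 1.0e9  # assumed spectral width parameter in Hz (illustrative value)

def g1(tau, n=2000):
    # |g^(1)(tau)|: modulus of the normalized Fourier transform of the
    # Gaussian profile G(nu) = const * exp(-(nu - nu0)^2 / delta^2).
    # The center frequency nu0 drops out of the modulus, so we center at 0.
    num, den = 0 + 0j, 0.0
    for k in range(-n, n + 1):
        nu = 3.0 * delta * k / n  # integration grid over +/- 3 delta
        w = math.exp(-((nu / delta) ** 2))
        num += w * cmath.exp(-2j * math.pi * nu * tau)
        den += w
    return abs(num) / den

def coincidence_ratio(tau):
    # Siegert relation for polarized thermal light: C/C_ran = 1 + |g1|^2.
    return 1.0 + g1(tau) ** 2

print(coincidence_ratio(0.0))          # 2.0: twice the random rate at zero delay
print(coincidence_ratio(5.0 / delta))  # tends to 1: purely random coincidences
```

The curve falls from twice the random rate at τ = 0 to the random rate within roughly the coherence time 1/Δ, just as sketched in Fig. 8.4.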
the phase changes only slightly. This means that it coincides with the coherence time tcoh . (An analogous statement was made in the discussion of spatial intensity correlations.) Because tcoh is, in
order of magnitude, given by the inverse of the linewidth Δν, the spectral properties of light come into play in the analysis of time dependent intensity correlations. The observation of time delayed coincidences at a fixed point requires the measurement of the time averaged value ⟨I(t)I(t + τ)⟩ (τ is the delay time). The coincidence rate found for τ < tcoh is higher than that for τ > tcoh because ⟨I²(t)⟩ is larger than ⟨I(t)I(t + τ)⟩ for τ > tcoh. (The relation is very similar to that for spatial intensity correlations. The considerations leading to Equations (8.5) and (8.7) can be applied directly to the present situation.) The coincidences are, for τ ≫ tcoh, purely random, and therefore an excess of coincidences compared with the random ones can be observed only if the delay
time stays below the coherence time. For thermal radiation with a Gaussian spectral profile we expect theoretically a coincidence dependence of the type given in Fig. 8.4. Herewith, a new possibility
for the measurement of linewidth of thermal radiation via time dependent intensity correlations appears. The experimental setup of Hanbury Brown and Twiss shown in Fig. 8.3 offers a possible
experimental realization. Both detectors are now fixed – their positions are mirror images with respect to the beamsplitter – and the measurement is carried out in such a way that, from the whole
ensemble of events detected by the counters, those are selected (electronically) for which the second detector responded τ seconds after the first
Photon statistics
one. (When using photomultipliers, one of the two photocurrents is delayed before multiplication in the correlator.) As explained above, the temporal intensity correlations have a duration of about
tcoh (see Fig. 8.4). The measurements require detectors with a response time shorter than tcoh . This implies that measurements of intensity correlations allow the determination of very small
linewidths because in that case tcoh is very large. In contrast, conventional optical spectrometers, for example the Fabry–Perot interferometer, are well suited for the measurement of large
linewidths and line distances. The new techniques based on photon statistical measurements and the traditional interferometric techniques thus complement each other very well. In fact, the
spectral lines emitted by thermal radiators are so broad that the detection of intensity correlations is very difficult. However, there is a rather important application for the new technique, namely
the analysis of laser light, scattered by moving centers. In contrast to the incident (monochromatic) light, the scattered light has the same character as thermal light. The reason for this is
simple: the scattered light – which is very similar to thermal radiation – is formed from many partial waves with random phases emitted from individual centers. (There are fixed phase relations with
respect to the incident laser radiation, which has a phase that is constant over its cross section and changes only slowly in time, but the irregular distribution of the centers causes a random
distribution of scattered wave phases when observed from a randomly chosen fixed point.) The result is (at a fixed time) the formation of a “light ridge” in space. Due to the random motion of the
scattering centers (for example resulting from Brownian motion) the intensity also fluctuates in time in the same way as it is known to do for thermal light. Because the scattered light has an
extremely narrow bandwidth, it is almost an ideal object for photon statistical observations. For example, it is possible to determine diffusion coefficients of particles undergoing Brownian motion
in liquids by measuring time dependent intensity correlations. A typical measurement curve is given in Fig. 8.5. (The exponential decrease for increasing delay time is caused by a Lorentzian line
profile, in contrast to the Gaussian profile assumed in Fig. 8.4.) Similarly, the heat conductivity of liquids can be determined from measuring Rayleigh scattering of laser radiation caused by
temperature fluctuations. The situation illustrated in Fig. 8.5 can be visualized as a grouping – or “bunching” – of photons: the probability that two photons arrive at the same place shortly after
one another (i.e. with a defined delay τ < tcoh ) is distinctly higher than the probability that they arrive with a larger time delay τ > tcoh . We can also say that the photons tend to appear in
pairs. This phenomenon seems surprising from a naive photon picture.

Fig. 8.5. Ratio of the coincidence counting rate C and the random coincidence counting rate Cran as a function of the delay time τ, measured on a suspension of bovine serum albumin (Foord et al.).

Let us again consider a thermal source: it seems that after the first atom has emitted a photon, at least one more atom hurries to emit another photon. In fact, "photon bunching" only reflects the strong intensity fluctuations (as do the spatial intensity correlations described in Section 8.1), and therefore it is the consequence of the interference between individual
elementary waves emitted completely independently by different atoms. Another photon statistical measurement method is based on the registration of the number n(t; T ) of photons during a fixed time
interval of length T and then extracting from such a data sequence the probabilities that exactly 0, 1, 2, . . . photons have been observed. From a theoretical point of view, as will be explained in
detail below, we expect the photon numbers n(t; T) for polarized thermal light to obey a Bose–Einstein distribution; i.e. the probability pn (normalized to unity) of finding exactly n (= 0, 1, 2, . . .) photons is given by

\[ p_n = \frac{\bar{n}^n}{(\bar{n}+1)^{n+1}}, \qquad (8.8) \]

where n̄ is the mean photon number. Interestingly enough, the distribution has its maximum at the value n = 0 (Fig. 8.6); i.e. the probability of not finding any photons is larger than the probability of finding any given finite number of photons.
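A quick numerical check of the distribution (8.8) and of its variance n̄² + n̄ can be made with a few lines; the value n̄ = 10 matches Fig. 8.6, and the cutoff at n = 400 is an illustrative truncation (the neglected tail is negligible there):

```python
import numpy as np

nbar = 10.0
n = np.arange(400)                 # truncation; the tail beyond is negligible
q = nbar / (nbar + 1.0)
p = (1.0 - q) * q**n               # algebraically identical to nbar**n / (nbar+1)**(n+1)

print(p.sum())                     # ≈ 1: the distribution is normalized
print(int(np.argmax(p)))           # 0: the most probable photon number is zero
mean = (n * p).sum()
var = (n**2 * p).sum() - mean**2
print(mean, var)                   # ≈ 10 and ≈ 110 = nbar**2 + nbar
```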
Fig. 8.6. Bose–Einstein distribution (a) and Poisson distribution (b) for the average photon number n̄ = 10. p_n is the probability of finding n photons.
The variance of the photon number for the distribution function Equation (8.8) is

\[ (\Delta n)^2 \equiv \overline{n^2} - \bar{n}^2 = \bar{n}^2 + \bar{n}. \qquad (8.9) \]
This result tells us that the photon number fluctuates strongly. This is a characteristic of thermal light. Let us note at this point that there is a distinct difference between polarized and
unpolarized light, with polarized light (which we have considered in this case) fluctuating much more strongly than unpolarized light. Let us also comment on the experiment. First, we note that
Equation (8.8) was derived within a single-mode approximation. This formula is simply the well known Boltzmann distribution p(E n ) for the energy level occupation of a system in thermodynamic
equilibrium. In our case, as explained in Section 4.2, the levels are equidistant – so the relation E_n = nhν (n = 0, 1, 2, . . .) holds – and hence the Boltzmann factor is p(E_n) = const · exp{−nhν/kΘ}, where k is Boltzmann's constant and Θ is the temperature. Rewriting the relation for the mean photon number n̄ (and properly normalizing) we indeed obtain Equation (8.8). Our considerations make it
clear that the photon number is primarily related to the mode volume, which can be identified with the coherence volume Vcoh . In any case, we must take care in experiments that the above summation
over the individual events is limited to beam volumes smaller than Vcoh . In particular, this implies that the integration time T must not exceed the coherence time. So, what happens for smaller T ,
i.e. when only a part of the coherence volume Vcoh is under observation? Obviously, only a fraction of the photons present in the coherence volume Vcoh are observed. However, it would be erroneous to assume that this fraction is fixed. In this case randomness also comes into play, and the selection mechanism is of the same type as that for beamsplitting described in Section 7.1. (Instead of having
transmitted and reflected photons, we divide them into observed and unobserved ones.) As mentioned at the end of Section 7.1, the process transforms thermal light back into thermal light. This means
that Equation (8.8) also applies – with a correspondingly decreased photon number n – to the photons selected by the detector. With respect to real measurements, we also have to take into
consideration the fact that the detection efficiency of real detectors is considerably smaller than unity; i.e. an incident photon is detected only with a certain (fixed) probability. This makes it
possible to model a less efficient detector by a combination of a perfect detector and an absorber placed in front of it. Because the absorption process is formally the same as the process of
beamsplitting, the inefficiency of a photodetector has the same effect as the previously analyzed decrease of the observation volume compared to the coherence volume. We can thus conclude that even
inefficient detectors do not change the character of thermal light. Equation (8.8) is also valid for the actually measured photons. However, we must not forget one point: the integration duration T
must be smaller than the coherence time, i.e. the inverse of the linewidth. Like the observation of photon “bunching,” the measurement of the photon distribution Equation (8.8) is possible only for
thermal light with a very narrow linewidth. This is the reason why in this case one works with scattered laser light. Arecchi, Giglio and Tartari (1967b) used as the scattering medium a suspension of
polystyrene balls of different size in water and demonstrated that the photon number distribution n(t; T ) of the scattered light obeyed a Bose–Einstein distribution. There is a rather simple way of
generating narrow-band “pseudothermal” light for demonstration purposes: let quasimonochromatic laser light be reflected from a rotating ground glass plate (Martienssen and Spiller, 1964). The
incident light is scattered from an irregular rough surface. The rate of the amplitude change obviously depends on the angular velocity of the plate and hence can be arbitrarily varied. The coherence
time tcoh of the scattered light can be adjusted in such a way that photon statistical measurements can easily be carried out. In fact, the first successful experimental proof of the Bose–Einstein
distribution of photons was given in this way (Arecchi, Berne and Burlamacchi, 1966). It is very important to note that the previously described photon statistical properties of light are
characteristic of radiation emitted by thermal sources (and sent through a polarizer) or of the aforementioned scattered radiation. In fact, it is possible, using a laser, to generate light with
completely different properties. Before we go into details of this problem, let us comment on the influence of inefficient detectors on the outcome of coincidence measurements.
First, we have to know the dependence of the simultaneous response probability of two ideal detectors at the same position on the photon number n in the given mode volume or coherence volume.
Following classical arguments, we are tempted to say that the probability in question is proportional to the time averaged square of the (instantaneous) intensity, or – since the intensity and the photon number are related to each other through a constant factor – proportional to the time averaged value of n², which can be replaced, due to the ergodicity of the radiation field, by the ensemble average ⟨n²⟩. In contrast, the quantum mechanical description leads to the result that, in the expression for the coincidence counting rate, the term ⟨n²⟩ must be replaced by ⟨n(n − 1)⟩ (see
Section 15.1). The quantum mechanical result is more trustworthy than the prediction based on classical considerations because it takes the energy conservation law into account. This can be seen most
easily by considering the situation with exactly one photon in the coherence volume. The quantum mechanical description predicts that the simultaneous response probability of the two detectors
exactly vanishes, as required by the energy conservation law, because each of the two detectors needs for its activation a whole energy quantum hν. However, the classical concept predicts for this
case a finite non-zero coincidence probability. We calculate quantum mechanically and write the (undelayed) coincidence counting rate (i.e. the number of registered coincidences per second) in the following form

\[ C(0) = \beta^2 T_1 \langle n(n-1)\rangle, \qquad (8.11) \]
where T1 is the detector response time and the constant β is the ratio of the detection efficiency and the length of the coherence volume rewritten as a time. (When the cross section of the mode
volume is imaged only partly on the detector’s sensitive surface, the detector sensitivity is decreased by an additional factor.) The factor T1 on the right hand side of the equation reflects the
fact that generally the response probability of the two detectors is proportional to the product of the lengths t1 and t2 of the respective time intervals during which the first and the second
detector measure. To obtain the coincidence counting rate we have to divide by one of the times t1 , t2 and the other is replaced by T1 . The counting rate of a single detector is proportional to the
mean intensity (the classical and quantum mechanical descriptions coincide at this point), and hence it is given by

\[ Z = \beta \bar{n}. \qquad (8.12) \]

From this follows the random coincidence counting rate

\[ C_{\mathrm{ran}} = Z^2 T_1 = \beta^2 T_1 \bar{n}^2. \qquad (8.13) \]
Of particular interest is the excess in the systematic coincidence counting rate when compared with the random one. Relating it to the random counting rate, we introduce the relative excess
coincidence counting rate, for which we find, using Equations (8.11) and (8.13), the expression

\[ R \equiv \frac{C(0) - C_{\mathrm{ran}}}{C_{\mathrm{ran}}} = \frac{\langle n(n-1)\rangle - \bar{n}^2}{\bar{n}^2} = \frac{(\Delta n)^2 - \bar{n}}{\bar{n}^2}. \qquad (8.14) \]
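As a numerical illustration of Equation (8.14) – all parameter values below are arbitrary choices – thermal (Bose–Einstein) photon numbers give R ≈ 1 and Poisson photon numbers give R ≈ 0, and a binomial "detection" with efficiency η changes neither value:

```python
import numpy as np

rng = np.random.default_rng(2)
N, nbar, eta = 1_000_000, 5.0, 0.3

# Bose-Einstein photon numbers: geometric on 0, 1, 2, ... with mean nbar
n_th = rng.geometric(1.0 / (nbar + 1.0), N) - 1
n_po = rng.poisson(nbar, N)

def R(n):
    """Relative excess coincidence rate (8.14) after inefficient detection."""
    k = rng.binomial(n, eta)   # Bernoulli selection models detector efficiency
    kbar = k.mean()
    return (np.mean(k * (k - 1.0)) - kbar**2) / kbar**2

r_th, r_po = R(n_th), R(n_po)
print(round(r_th, 2), round(r_po, 2))  # ≈ 1.0 (thermal) and ≈ 0.0 (Poisson)
```

Note that the efficiency η drops out, as the text states next: both values are unchanged if `eta` is varied.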
We state with delight that the detector sensitivity has disappeared from this equation. The measurement of the relative excess coincidence counting rate characteristic of the photon statistics can be
realized without problems using bad detectors and, in addition, we do not have to be extremely careful with the imaging of the optical field onto the detector surface. (We only have to guarantee that
the dimension of the imaged area does not exceed the transverse coherence length.) We point out that the insensitivity of the measurable parameter, Equation (8.14), is lost when considering instead the parameter [(Δn)² − n̄]/n̄ = (Δn)²/n̄ − 1. This so-called Mandel's Q-parameter is a convenient theoretical measure for photon statistical properties, but has the drawback that it is not directly
observable. Finally, let us mention that using Equation (8.9) allows us to calculate the relative excess coincidence counting rate (Equation (8.14)) for polarized thermal light, the result being R = 1, in agreement with Fig. 8.4.

8.3 Random photon distribution

We can now raise the question of the photon statistical properties of radiation fields which are not of thermal type: we think first of
laser light. In fact, there is a fundamental difference between the effective emission mechanism in a laser and that in a thermal radiation source. The consequence is that the photon statistics in
the two cases are also completely different. Laser light exhibits, in contrast to thermal radiation, a high degree of amplitude stabilization. Let us explain this point in more detail. While the
atoms of a thermal radiation source mainly radiate spontaneously, the laser action is dominated by stimulated emission due to the high electric field strength. This brings “law and order” into the
emission process. The physical mechanism is as follows: the field in the laser resonator, which has built up starting from spontaneous emission, induces electric dipole moments on the atoms of the
laser medium, excited by the pump process. Between them and the driving force – given by the electric field strength – well defined phase relations are established. (The phase relation is such that
it enables the dipole moment to do maximum work on the field.) Because the field is coherent over the whole resonator volume, all the dipole moments oscillate with the same “beat” and a
macroscopic polarization builds up in the medium. (This is defined as the sum of all dipole moments in the unit volume.) The laser emission is therefore a collective process. In addition, the atoms
emit their energy much faster than they do via spontaneous emission, the quantum mechanical probability for the transition from a higher level to a lower level responsible for the emission being
proportional to the intensity of the field. The emitted light amplifies the inducing field; i.e. the light waves emitted by the atoms have the same frequency, propagation direction and polarization
as the driving field. In this way the almost miraculous properties of laser beams – enormous spectral density, sharp frequency and propagation direction – are formed. In fact, these properties of
laser light represent "only" a quantitative progress because, even though they are overwhelming, they are obtainable – here speaks the theoretician – by making extremely hot (thermal) light sources
and employing filters and diaphragms of the highest quality. Actually, the amplitude stabilization is responsible for the fundamental qualitative difference between laser light and thermal radiation.
How does this special feature of laser radiation come about? It is a mechanism specific to the laser, the so-called saturation, that is responsible. By saturation we mean the following: the excited
atoms lose their energy through the process of emission, and they finally reach the lower level. They are excited again by the pump process and can radiate again etc. As mentioned above, the
elementary emission process (in the form of induced emission) is faster the higher the intensity of the driving radiation. The pumping, however, has a constant rate. The result is that the mean
number of excited atoms – and with it, the surplus number, decisive for the laser action, of excited atoms over those not excited, the so-called inversion of the laser medium – decreases with
increasing intensity of the laser field, and this effect is called saturation. The saturation mechanism is responsible (in stationary laser action) for the stabilization of the amplitude and the
intensity, respectively. For instance, when a momentary increase of intensity compared to its stationary value occurs, it will cause a decrease in the inversion (as explained above). This will lead
in turn to a weaker emission and the initial intensity peak will be attenuated. Similarly, a momentary intensity drop is damped away via the induced short-time increase of inversion. The phase of the
laser light does not have an analogous “resetting mechanism;” it changes in time in a completely random way. The linewidth of the laser radiation is (under ideal conditions) determined mainly by this
phase diffusion – in distinct contrast to the situation for thermal radiation where phase and amplitude fluctuations contribute equally to the linewidth.
The absence of intensity fluctuations does not allow for a surplus of coincidences, and a measurement must yield a coincidence counting rate independent of the delay time. However, the existence of a
constant intensity does not imply that the number n(t; T ) of registered photons during a time interval T also remains the same. Only the probability of the response of the counter is always
constant. Enough freedom remains for the counting events to result in a statistical distribution of the photon numbers n(t; T ). The form of the distribution is predictable once we use the
correspondence between classical monochromatic waves with well defined amplitudes and phases and the quantum mechanical Glauber states discussed in Section 4.4. As already explained, in this case the
photons related to the mode volume follow a Poisson distribution, which, according to Equation (4.6), takes the form (we replace the parameter |α|² by the mean photon number n̄)

\[ p_n = e^{-\bar{n}} \frac{\bar{n}^n}{n!} \qquad (n = 0, 1, 2, \ldots). \qquad (8.15) \]
The distribution is basically different from the Bose–Einstein distribution valid for thermal light as illustrated in Fig. 8.6: it shows a pronounced maximum at a position near the mean photon number n̄ and is much narrower than the Bose–Einstein distribution. (It is known that (Δn)² = n̄ holds for the Poisson distribution.) We identify the mode volume with the coherence volume Vcoh in the same way
as in the theoretical description of thermal radiation in Section 8.2, whereby the coherence volume Vcoh is characterized by the property that the phase does not change significantly within it. It is
now necessary to draw conclusions about the statistics of the count events from the known statistical behavior of the photons present in the coherence volume. As explained in Section 8.2, the
influence of either an observation volume smaller than the coherence volume or that of inefficient detectors can generally be described as an effective beamsplitting process. Because in such cases a
Poisson distribution is transformed into a distribution of the same type, we arrive, as for thermal light, at the result that Equation (8.15) applies also to photon numbers found experimentally for
integration times T shorter than the coherence time tcoh . The detectors need not be ideal, and the light beam cross section might be imaged only partly onto the detector surface. The Poisson
distribution describes, as is known from classical statistics, completely random processes, and therefore Equation (8.15) applies also to T > tcoh , i.e. it is valid for arbitrary integration times.
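That Bernoulli selection maps a Poisson distribution onto a Poisson distribution (with mean t·n̄), and that ⟨n(n − 1)⟩ equals the squared mean for Poisson statistics, is easy to confirm by sampling; the parameter values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
nbar, t = 8.0, 0.25
n = rng.poisson(nbar, 1_000_000)
k = rng.binomial(n, t)        # beamsplitting / inefficient detection

# thinned Poisson numbers are Poisson again: mean equals variance (= t * nbar)
print(k.mean(), k.var())      # both ≈ 2.0

# <n(n-1)> equals the squared mean for a Poisson distribution
print(np.mean(n * (n - 1.0))) # ≈ nbar**2 = 64
```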
Fig. 8.7. Relative excess coincidence counting rate with respect to the random rate Cran as a function of the delay time τ, measured on a single-mode gas laser whose output power was 2.7 times the threshold power. From Pike (1970).

We can, using Equation (8.15), easily calculate the mean value of ⟨n(n − 1)⟩ characteristic for (undelayed) coincidences. It is a straightforward matter; we find the result

\[ \langle n(n-1)\rangle = \bar{n}^2 \qquad \text{(for a Poisson distribution)}, \qquad (8.16) \]
indicating that the coincidence counting rate to be measured agrees precisely with the random one. The relative excess coincidence counting rate R introduced in Section 8.2 (Equation (8.14)) thus equals zero. Laser radiation does not – in contrast to thermal radiation – exhibit a tendency towards photon bunching. The probability of detecting two photons (at the same position) with a time
delay τ is the same for all values of τ . All the statements about laser light made up to now refer to an idealized case which can be realized only approximately. However, the radiation of a
single-mode laser operated far above the threshold comes very close to the ideal. The saturation near threshold is small, and the amplitude stabilization mechanism based on it becomes less efficient.
This results in intensity fluctuations. Near threshold we observe a noticeable surplus of coincidences compared with the random case, as illustrated by Fig. 8.7. It is interesting to follow the onset
of laser action with the photon counting technique. (Fortunately the build up time of laser oscillations is quite long – for the
Fig. 8.8. Measured photon distribution in the build up regime of a single-mode gas laser. The plots show the number of cases N when n photons were registered for different times after switch on: (a) 2.6 µs, (b) 3.7 µs, (c) 4.3 µs, (d) 5.0 µs, (e) 5.6 µs, (f) 8.8 µs. From Arecchi, Degiorgio and Querzola (1967a).
He–Ne laser, for example, it is of the order of microseconds – and an integration time short compared to this is achievable without great effort.) The result found experimentally is shown in Fig.
8.8. The photon distribution in the initial stage of the oscillation differs only slightly from a Bose–Einstein distribution; however, it becomes more and more similar to a Poisson distribution as
time elapses.
8.4 Photon antibunching

From the classical point of view, by amplitude stabilization of light the utmost is done to minimize the fluctuations of the photon number (in the sense of counting events).
Accordingly, the Poisson distribution represents the greatest order a radiation field might have from the classical point of view. Quantum theory does not share this view. It declares as possible
states of the electromagnetic field which have photon distributions narrower than a Poisson distribution. Naturally, such states have no classical analog, and they are manifestations of the
specifically quantum mechanical (ultimately corpuscular) aspect of the radiation.
In fact, the states with sharp photon numbers (the eigenstates of the energy operator of the free field) appearing “naturally” in the quantum formalism are of this “strange” type. Moreover, a
detector with ideal efficiency responding to the field contained in the whole mode volume would not indicate any fluctuations in photon number! To be able to predict the results of a realistic
measurement, we have to subject the distribution of the numbers of photons present in the mode volume to the Bernoulli transformation, Equation (7.3). In the present case of a sharp photon number n
in the mode volume, Equation (7.2) describes directly the distribution of the number of registered photons, k. The original distribution, as expected, is considerably broadened due to the influence
of statistical laws governing the “selection” of photons by the detector from the ensemble of photons contained in the mode volume. These laws act because the observed volume Vobs is smaller than the
mode volume or the coherence volume and the detection efficiency η is smaller than unity. The corresponding variance is not difficult to calculate. According to Equation (7.4), the relations

\[ \bar{k} = t n, \qquad \langle k(k-1)\rangle = t^2 n(n-1) \qquad (8.17), (8.18) \]

hold (n being the sharp photon number), and imply \( \overline{k^2} - t n = t^2 n^2 - t^2 n \), i.e.

\[ (\Delta k)^2 \equiv \overline{k^2} - \bar{k}^2 = (1 - t)\bar{k}. \qquad (8.19) \]
The variance is greater the more the overall damping factor t (given by the product of Vobs /Vcoh and η) differs from unity; i.e. the smaller the average number of registered photons. In the real
experiment, there is no trace of a sharp photon number! Let us note that it is not only a shorter integration time – compared with the coherence time – and a lower detection efficiency that are
responsible for such a “falsification,” but an insufficient focusing of a light beam coherent over its whole cross section onto the sensitive detector surface also has the same effect. We can – apart
from the cases of thermal and ideal laser radiation, as we have seen above – measure the true photon statistics only when the detector registers with certainty all the photons contained in the mode
volume. Measured photon distributions, other than Bose–Einstein and Poisson distributions, yield only limited information as long as the effective damping factor t is unknown. From Equation (7.4), we can
infer quite generally, i.e. for the case of arbitrary photon statistics, the following relation between the measured variance and the real variance of the photon number:

\[ \frac{(\Delta k)^2}{\bar{k}^2} - \frac{1}{\bar{k}} = \frac{(\Delta n)^2}{\bar{n}^2} - \frac{1}{\bar{n}}, \qquad (8.20) \]

which implies the relation

\[ \frac{(\Delta k)^2}{\bar{k}^2} = \frac{(\Delta n)^2}{\bar{n}^2} + (1 - t)\frac{1}{\bar{k}}. \qquad (8.21) \]
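The broadening of a sharp photon number by the Bernoulli selection, and the relation between measured and true statistics derived above, can be verified directly by sampling (n, t and the sample size are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n, t = 20, 0.4                      # sharp photon number, overall damping factor
k = rng.binomial(n, t, 1_000_000)   # Bernoulli selection of photons by the detector
kbar = k.mean()

# variance of the registered photon number equals (1 - t) * kbar
print(k.var(), (1.0 - t) * kbar)    # both ≈ 4.8

# measured vs. true statistics: (Δk)²/k̄² − 1/k̄ equals (Δn)²/n̄² − 1/n̄ = −1/n here
print(k.var() / kbar**2 - 1.0 / kbar)   # ≈ -1/20 = -0.05
```

For a sharp photon number the true variance vanishes, yet the registered numbers fluctuate with variance (1 − t)k̄, exactly the "falsification" discussed in the text.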
Let us now turn our attention to coincidence measurements. As explained in Section 8.2, the relative excess coincidence counting rate R is fortunately independent of the detector sensitivity. For a
sharp photon number n (in the mode volume) it takes, according to Equation (8.14), the value R = −1/n. Because (Δn)² can be at best zero, this is also the biggest (with respect to its modulus) negative value of R for a given mean photon number. Hence for the R < 0 case we do not predict a surplus of the coincidences but a deficit. Because for large delay times τ ≫ L/c, with L being the length of the
mode volume, we detect only random coincidences, the obtained result means that two photons are found within a smaller distance τ (≤ L/c) less often than for larger distances. We are dealing with an
effect which is opposite to that characteristic of thermal light (bunching), and in the literature the term “antibunching” has been introduced for it. The photons do not seek mutual nearness, but
instead prefer a certain “distance” between each other. This shows that “bunching” is not a fundamental effect. (Originally it was widely held that it is a direct consequence of the fact that
photons, as spin 1 particles, have to obey Bose statistics.) In fact, it is the particular form of light generation that determines whether the photons seemingly attract or repel each other. The
appearance of a negative value of R is not comprehensible within classical optics; it is a manifestation of the particle aspect of radiation. Classically, the coincidence counting rate is
proportional to the intensity correlation, i.e. to the time averaged value of I (t)I (t + τ ). Under stationary conditions this is just the autocorrelation function, and it is well known that such a
function has its absolute maximum at τ = 0 (see, for example, Middleton (1960)). For our situation, this implies that the coincidence counting rate must have its largest value for τ = 0. Based on the
arguments given above it is clear that the “antibunching” effect is noticeable only for small mean photon numbers (related to the mode volume); it is, in fact, of microscopic nature. For n → ∞ it
vanishes in a pleasing agreement with Bohr’s correspondence principle requiring that the quantum mechanical description goes over into the classical one for high excitations (in our case for high
photon numbers). Based on Equation (8.14), we can distinguish between classical and nonclassical light. The borderline is at R = 0, i.e. for (Δn)² = n̄, and our results can be summarized as in Table 8.1. It is also possible to determine the photon distribution through photon counting. Let us recall, however, that under realistic experimental conditions (in particular through the use of
inefficient detectors) the distribution will be falsified: it becomes broadened, as can be seen from Equation (8.21).

Table 8.1. Classification of light based on the relative excess coincidence counting rate R. The last column states the relation of the photon distribution with respect to the Poisson distribution.

R        Photon statistics   Photon distribution
R > 0    (Δn)² > n̄           super-Poisson
R = 0    (Δn)² = n̄           Poisson
R < 0    (Δn)² < n̄           sub-Poisson

However, once we measure a distribution that is narrower than a Poisson distribution with the same mean photon number, we have proved that we are
dealing with non-classical light. The most important question is whether non-classical light really exists, or, in other words, is it possible to realize radiation fields with (Δn)² < n̄ and so make
“photon antibunching” a measurable effect? The question may be answered “yes” in principle. The simplest idea is to use as a light source a single atom excited to resonance fluorescence by an
incident laser wave (see Section 6.1). Such a system emits photons, one after the other. There is, however, a time delay between two subsequent emissions – after each emission the atom has to be
excited again, which takes a finite time. Observing the radiation emitted in a certain direction, we see the photons ordered like a string of pearls. However, the distances between the photons are
unequal, very short separations seldom occur, and zero separation is never observed. Coincidence measurements on such a light beam would yield none for delay time 0 (ideal conditions assumed), but
there would be coincidences for longer delay times. All this sounds easy, but a real experiment is very difficult to set up. The main obstacle is the detection of the radiation from a single atom. As
soon as it is possible that a second atom contributes to the observed radiation, coincidences can appear because it will sometimes happen that both atoms emit one photon simultaneously, each in the
same direction. The first experiment demonstrating “photon antibunching” was performed as follows (Kimble et al., 1977; Dagenais and Mandel, 1978). The light source was formed by an atomic beam
consisting of sodium atoms. Resonance fluorescence of the atoms was induced by a resonant laser beam (incident orthogonal to the atomic beam). The sideways emitted radiation was observed. Only a
fraction of the radiation, emitted from a small section of the atomic beam of 100 µm length (the diameter of the beam was also 100 µm), was collected using a microscope objective and sent, via a
balanced beamsplitter (see Fig. 8.3), to the two detectors of the coincidence counting
8.4 Photon antibunching
Fig. 8.9. Measured numbers of coincidences N as a function of the delay time τ in the case of resonance fluorescence. From Dagenais and Mandel (1978).
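The qualitative picture – a dead time after every emission suppressing short delays – is easy to reproduce numerically. The following sketch (all rates and units are arbitrary choices, not values from the experiment) draws emission times of a single ideal emitter and histograms the start–stop delays; the bin around τ = 0 stays empty:

```python
import random

random.seed(1)

DEAD_TIME = 1.0      # minimum separation between emissions (arbitrary units)
MEAN_WAIT = 5.0      # mean additional waiting time after the dead time

# Single-emitter stream: after each photon the atom must be re-excited,
# so consecutive emissions are separated by at least DEAD_TIME.
times, t = [], 0.0
for _ in range(20000):
    t += DEAD_TIME + random.expovariate(1.0 / MEAN_WAIT)
    times.append(t)

# Start-stop histogram of delays between a photon and its successors
# within a short window (a stand-in for coincidence counting).
WINDOW, NBINS = 10.0, 10
hist = [0] * NBINS
for i, t0 in enumerate(times):
    for t1 in times[i + 1:]:
        dt = t1 - t0
        if dt >= WINDOW:
            break
        hist[int(dt / WINDOW * NBINS)] += 1

print(hist)  # the bin containing tau ~ 0 is empty: no zero-delay coincidences
```

With two independent emitters contributing, the first bin would fill up again, which is exactly the experimental difficulty described above.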
apparatus. The velocity of the atoms was chosen such that the observed volume contained, on average, less than one atom. However, it could not be ruled out that at certain times the volume contained
two, or even three, atoms simultaneously. (And at other times, there was no atom present at all.) We can assume that the atoms contained in the observed volume fluctuate according to a Poisson
distribution. The consequence is, as a theoretical analysis (Jakeman et al., 1977) revealed, that no coincidence deficit (in the sense of the inequality R < 0) should be found. In agreement with
this, the results of the measurements shown in Fig. 8.9 indicate that the number of measured coincidences – as a function of the delay time τ – has a minimum at τ = 0 which does not go to zero as it
should for a single atom (under ideal conditions): the minimum even lies above the number of random coincidences. The results of the experiment are not very satisfactory. However, as previously
explained, the appearance of a relative minimum at τ = 0 in Fig. 8.9 is in contradiction to classical theory and it bears witness to the particle aspect of light. In more recent experiments the
required condition was satisfied and the detector “took notice” of only a single atom. This was achieved with the help of modern trapping techniques (see Section 6.1): using optical cooling it is
possible to trap a single atom for as long as necessary in a radio frequency quadrupole trap (Paul trap). “Photon antibunching” was demonstrated on a trapped Mg+ ion in complete agreement with theory
(Diedrich and Walther, 1987). Another possibility is to excite a single pentacene molecule located in a p-terphenyl crystal, thus producing fluorescence (Basché et al., 1992). The process was not
resonance fluorescence, however;
instead, the deexcitation took place in two steps. However, the emission conditions are very similar, and the fluorescence light exhibited the “antibunching” effect in its full beauty. One might
object that the described experiments do not invalidate classical electrodynamics but just the classical assumption that the photodetector responds to the instantaneous intensity independently of how
much energy is really available for absorption. In fact, the essential discrepancy between the classical and the quantum mechanical description of the discussed experiments originates from the fact
that the classical formula for the simultaneous response of two photodetectors predicts observable coincidences even for single incident photons. However, this is impossible due to energetic reasons!
The classical theory cannot be saved simply by improving the description of the photoelectric detection process. The deeper reason for its failure is the fact that – as a wave theory – it cannot
avoid predicting the splitting of single photons incident on a balanced beamsplitter. (Otherwise there could be no “interference of the photon with itself.”) This leads, however, as discussed in
detail in Section 7.1, to a contradiction with our experience. A correct description of the way in which a detector works has always to take into account that half photons cannot be absorbed. This
would imply that, provided an ideal source were available, none of the detectors would respond in the experiment by Mandel and coworkers. The light generated by a single atom through resonance
scattering has the “antibunching” property from the very beginning. We might ask whether it is possible to alter already generated light in such a way that it becomes “antibunched.” In fact, a change
of the photon statistics is not surprising. For example, a two-photon absorber (the individual atoms always "swallow" two photons at once) suppresses existing intensity fluctuations. This is easily
seen from the classical description; the probability of absorption is in this case proportional to the squared intensity and therefore the intensity peaks are disproportionately suppressed. (The
ratio of the maximum and the mean intensity decreases during interaction, in contrast to one-photon absorption where it stays constant.) The result is that the intensity becomes almost constant. The
classical treatment predicts a completely amplitude stabilized field as the final stage. A constant intensity does not imply, as was explained in the preceding text, a fixed value of the photon
numbers; these will fluctuate according to a Poisson distribution. Would it not be possible to reduce further such fluctuations through interaction with a two-photon absorber? Quantum mechanical
calculations show that the conjecture is correct. It is easily seen from a naive photon picture (in which we imagine photons as energy packets localized in space). An atom absorbs two photons more or
less simultaneously in any elementary act. To be more precise, we should say that the delay between the
arrival of the two photons should not exceed a maximal value τmax for absorption to be possible. (For simplicity we assume the incident beam to be so thin that the photons are sufficiently close to
each other in the transverse direction so that one and the same atom can absorb them – provided their temporal distance is sufficiently short.) The two-photon absorber always “picks out” from the
field pairs of photons that are close to each other. This leads (starting from an amplitude stabilized beam for which the probability of finding two photons with a time separation τ is independent of
τ ) to the formation of a deficit of photon pairs with τ ≤ τmax and this is exactly what is understood by “photon antibunching.” The final state for long enough interactions would be a state with
photons separated in time by τ > τmax (provided we could eliminate all disturbing factors such as scattering, weak one-photon absorption through impurities, etc.). The process faces another,
insurmountable, obstacle, however – the extremely low efficiency of two-photon absorption. This implies that a considerable attenuation of the beam is possible only for high intensities – on the
other hand, as can be seen from Equation (8.14), a chance to observe the “antibunching” effect exists only for very low intensities. It would be highly desirable to have a type of two-photon process
which is efficient at small intensities. In fact, there is such a process in the form of induced harmonic generation in a non-linear crystal, by which we mean a weak fundamental wave at frequency ν
incident onto a crystal, together with an intense harmonic wave (oscillating at the doubled frequency 2ν). The phases of the two waves ϕν and ϕ2ν are chosen in such a way (the time independent
difference ϕ2ν − 2ϕν is relevant) that the fundamental wave is attenuated and the harmonic is amplified. The fundamental wave is affected by a kind of two-photon absorption because the elementary
process of the interaction is a “fusion” of two photons of the fundamental wave into a photon of the harmonic according to the equation hν + hν = h(2ν).
It is to be expected – and quantum mechanical calculations confirm this – that the (attenuated) fundamental wave leaving the crystal exhibits the “antibunching” effect. We emphasize that the
intensity of the incident wave can be arbitrarily small; the process is driven by the incident strong harmonic. Experiments of this type have not yet been achieved. Considering electrons instead of
photons, the “antibunching” effect appears as something very natural. Due to the Coulomb repulsion it is in the nature of electrons not to approach too close to each other. In fact, this type of
behavior can be transferred from electrons onto photons. The task is to convert electrons – when possible in a 1:1 ratio – into detectable photons. This happens, for example, in the Franck–Hertz
experiment. Here atoms are bombarded by a beam of electrons and
are thus resonantly excited. Subsequently they spontaneously emit a photon each. The experimental verification of the effect was successful (Teich and Saleh, 1985): the generated photons exhibited
sub-Poisson statistics. However, the measured deviation from Poissonian statistics was very small. The reason is mainly due to the fact that not each electron was definitely “converted” into a photon
which then went on to be detected. The weak points of the experiment are (i) the conversion efficiency of the electrons, which is below 100%, (ii) the fact that the photons are emitted in all
directions, which allows only a fraction of the photons, collected by a lens, to be directed onto the detector; and (iii) the inefficiency of the detectors. Another way to convert electrons into
photons is presented by semiconductor lasers. More precisely, it is the annihilation of electron and hole pairs which leads to light emission. One advantage compared with the Franck–Hertz experiment
is that the photons are emitted in a well defined direction. On the other hand, the resonator – the end surfaces of the laser crystal act like mirrors – hinders the photons’ immediate exit. This
leads to a time spread of photons generated at the same instant. Because of this, a sub-Poissonian statistics of the photons can be observed only when the measurement time is longer than the mean
lifetime of the photons in the resonator. In addition, it is necessary to stabilize very precisely the pump current. Finally, laser crystals of excellent quality are needed to make losses of photons
due to absorption and similar processes as small as possible. In fact, the discussed effect could be observed using an InGaAsP/InP laser with distributed feedback (Machida, Yamamoto and Itaya, 1987).
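Losses are so damaging because they pull the statistics back toward Poissonian. A toy calculation (hypothetical numbers, simple binomial detection model) illustrates this: detecting each photon of a perfectly regular stream with efficiency η leaves a Fano factor (Δn)²/n̄ of 1 − η, so already at η = 0.5 half of the sub-Poissonian signature is gone:

```python
import random

random.seed(3)

def fano_after_loss(eta, n_src=50, trials=10000):
    # n_src photons per counting interval from a perfectly regular source
    # (Fano factor 0); each photon is detected with probability eta
    counts = [sum(random.random() < eta for _ in range(n_src))
              for _ in range(trials)]
    mean = sum(counts) / trials
    var = sum((c - mean) ** 2 for c in counts) / trials
    return var / mean

# Binomial thinning of a regular stream gives F = 1 - eta
for eta in (0.9, 0.5, 0.1):
    assert abs(fano_after_loss(eta) - (1 - eta)) < 0.05
```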
In practice, instead of photon counting, the noise spectrum of the photocurrent is measured. The appearance of sub-Poissonian photon statistics from a certain measurement interval T onward is
indicated by a drop in the noise power spectrum below the shot noise for frequencies ν > 1/T . Another light source producing non-classical radiation by itself is the one-atom maser, also called the
micromaser. Atoms excited into a certain Rydberg state (see Section 6.2) are sent one after the other through a microwave resonator tuned to a (microwave) transition into one of the lower lying
Rydberg states. Thanks to the unusually large transition dipole moment of Rydberg atoms – a consequence of the extremely elongated electron orbits – a strong interaction between the atoms and the
radiation field takes place in the presence of a few (or zero) photons in the resonator. The consequence is that stationary maser operation is possible at small mean photon numbers (Meschede, Walther
and Müller, 1984). A necessary prerequisite is, however, a resonator with extremely small losses, which is achievable using superconducting materials cooled with liquid helium. The strong cooling
has the additional desirable effect of suppressing the thermal radiation from the resonator walls. It is amazing that in such a maser a resonator field with sub-Poissonian statistics can be generated
in the interplay between the energy supply
by the atoms and the resonator losses for suitably chosen parameters (such as the time of flight of the atoms, the atomic flux, etc.). The detection of this non-classical behavior can be achieved only
indirectly. There are no microwave photon counters, and we are fully dependent on concluding the photon statistics from the statistics of the de-excited atoms – with the aid of field ionization
detectors it is easily possible to find out whether an atom leaves the resonator in the upper or the lower state of the maser transition. Fortunately, there is a close connection between the two
statistics (proven by theory); in particular, the type of statistics (super-Poisson, Poisson and sub-Poisson) of the atoms and the photons is always the same. The corresponding experiments have been
successfully performed at Garching (Rempe, Schmidt-Kaler and Walther, 1990). Finally, let us also note that by external manipulation of previously generated light one can change its statistics and
generate “antibunching.” Here again we make use of parametric fluorescence (see Section 6.7). One possibility is to use the idler wave for the manipulation of the pump wave. The idler is incident on
a detector and its output signal is used to close a shutter inserted into the pump beam for a short time (Walker and Jakeman, 1985). In this way, pieces of equal length are cut out from the signal
wave at irregular intervals. Obviously with this procedure we decrease the probability of finding two photons within a short interval, and this is clearly what is meant by “antibunching.”
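The effect of such a shutter is easy to mimic in a few lines (the rate and the closing time are arbitrary choices, not values from the experiment): starting from a Poissonian stream, blocking the beam for a fixed interval after every transmitted photon removes all short waiting times:

```python
import random

random.seed(5)
RATE, GAP = 1.0, 0.5    # photon rate and shutter-closure time (arbitrary units)

# Poissonian arrival times for the unmanipulated beam
times, t = [], 0.0
while t < 20000:
    t += random.expovariate(RATE)
    times.append(t)

# Triggered shutter: after each transmitted photon, block the beam
# for a time GAP, discarding everything that arrives meanwhile
kept, blocked_until = [], -1.0
for t in times:
    if t >= blocked_until:
        kept.append(t)
        blocked_until = t + GAP

waits = [b - a for a, b in zip(kept, kept[1:])]
assert min(waits) >= GAP    # no two photons closer than GAP: "antibunching"
```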
9 Squeezed light
9.1 Quadrature components of light

In the previous chapter we learned about a special form of "non-classical light." It was the particle character of the light that could not be properly described
within the framework of a classical field theory (classical optics). We can, however, visualize light particles – photons – when we compare them with bullets. For example, it is not difficult to
imagine a sequence of fast moving particles lined up like a string of pearls – similar to the bullets fired by a machine gun in a certain direction – to visualize ideal “antibunching.” The abnormal
(from the viewpoint of classical optics) photon statistics does not begin to exhaust the wealth of curiosities that Nature has at hand. It is possible to manipulate the quantum mechanical field
fluctuations in a very subtle way, which leads to measurable effects, and we will concentrate on this problem in the following. First of all, we have to introduce an alternative description of the
field which is more suited to the problem under scrutiny. Usually we quantize the (classical) electric field strength by decomposing it into positive and a negative frequency parts. This results in
the representation involving plane waves,

    E(r, t) = A e^(−i(ωt − kr)) + A* e^(i(ωt − kr)),    (9.1)

where A is a complex amplitude. Instead of Equation (9.1), we write the electric field strength as

    E(r, t) = X cos(ωt − kr) + P sin(ωt − kr),    (9.2)
with two real amplitudes X and P. Apart from a factor of two, these are the real and imaginary parts of the complex amplitude A. We learn from comparison with a material harmonic oscillator that the
variables X and P (in proper normalization) have the meaning of position and momentum (which explains our notation). Let us
write Equation (9.2) in the form

    E(r, t) = C {x cos(ωt − kr) + p sin(ωt − kr)}.    (9.3)
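The equivalence of the complex and quadrature representations is a one-line trigonometric identity; a short numerical check (arbitrary amplitude, with X = 2 Re A and P = 2 Im A as noted above) makes the bookkeeping explicit:

```python
import cmath
import math

A = 0.8 - 0.3j                   # arbitrary complex amplitude
X, P = 2 * A.real, 2 * A.imag    # quadratures (note the factor of two)

omega, k, r = 2.0, 1.0, 0.5
for t in (0.0, 0.3, 1.7):
    phase = omega * t - k * r
    # Equation (9.1): A e^{-i(wt-kr)} + A* e^{i(wt-kr)} is real
    e_complex = (A * cmath.exp(-1j * phase)
                 + A.conjugate() * cmath.exp(1j * phase)).real
    # Equation (9.2): X cos(wt-kr) + P sin(wt-kr)
    e_quad = X * math.cos(phase) + P * math.sin(phase)
    assert abs(e_complex - e_quad) < 1e-12
```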
The normalization factor C can be chosen such that the (Hermitian) operators x̂ and p̂ corresponding to the classical variables x and p will satisfy the commutation relation

    [x̂, p̂] = i 1̂.    (9.4)

This relation differs from the well known Heisenberg commutation relation for position and momentum only in a factor of ħ missing from the right-hand side. The commutation relation implies Heisenberg's uncertainty relation, and therefore the properly normalized real electric field amplitudes will satisfy the uncertainty relation

    Δx Δp ≥ 1/2.    (9.5)

The amplitudes x and p are usually
called field quadrature components. Choosing a reference wave that oscillates as cos(ωt − kr), we can interpret x and p as the in phase and out of phase quadrature components. What is the meaning of
the inequality in Equation (9.5)? First of all, it states that both quadratures necessarily fluctuate. It is natural to ask what those quantum states that have the minimal uncertainty that is
compatible with quantum mechanics look like, i.e. for which the equality sign holds in Equation (9.5). This problem was solved in 1933 by W. Pauli (1933) in his famous Handbuch der Physik in an elegant three-line proof. The result is that the unique solutions are Gaussian wave functions,

    Ψ(x) = (2π)^(−1/4) (Δx)^(−1/2) exp[ −(x − x̄)²/(2Δx)² + i p̄x/ħ ],    (9.6)

where Δx is the square root of the variance of x, i.e. Δx = √⟨(x − x̄)²⟩. Pauli was thinking of x and p as the position and the momentum of a particle; however, we can equally well interpret these variables as quadrature components, and there is nothing to stop us assigning a Schrödinger wave function Ψ(x) to the quantum state of the field. We simply choose the representation with respect to the quadrature component x instead of the usual expansion in terms of Fock states! It is important to point out that the (positive) parameter Δx was chosen arbitrarily in Equation (9.6). This equation thus describes a whole class of quantum
mechanical states of the radiation field. Among them, those with equally strong
fluctuations in both quadrature components are distinguished. For the quadratures, then, the relation (Δx)² = (Δp)² = 1/2 holds. It seems that this is a natural symmetry property of light, and the
coherent states described in Section 4.4 have this property. Also, the fluctuations of x and p for these states are exactly the same as for the vacuum state (which can be considered as the limiting
case of a coherent state with an infinitely small complex amplitude α). In general, the states in Equation (9.6) have the property that the fluctuations differ for the two quadrature components. Due
to the validity of the relation (Δx)²(Δp)² = 1/4, the variance of one of the quadratures is larger than the vacuum value 1/2 and the other is smaller. At first glance, this statement seems astonishing
since we consider the vacuum fluctuations to form a natural lower limit. However, when we analyze the problem in the context of a harmonic oscillator, we do not find it surprising that one can
localize a particle with a greater precision than that corresponding to its ground state. On the contrary, it is one of the axioms of quantum mechanics that we can measure the particle’s position
with arbitrary precision. Why then should it be impossible for light to have quadrature components, one of which is better defined than for the vacuum state? The most important question concerns the
practical realizability of such light, which soon received its own name – “squeezed light.” This was a challenge for theoreticians as well as experimentalists. Apart from the question of how to
generate such a strange form of light, it was also a problem to detect its squeezing properties. The fact that this task could be successfully accomplished is one of the greatest achievements in
quantum optics. We will discuss the steps taken to achieve this in the following sections.

9.2 Generation

Processes associated with non-linear optics are best suited for the generation of squeezed
light. Especially efficient is degenerate parametric down-conversion: a special type of three-wave interaction (as discussed in Section 6.7), where the signal and the idler wave coincide. Let us
consider the case of a strong pump wave and a weak signal wave. The underlying physical mechanism is the formation of a polarization of the medium by the pump wave (we neglect the depletion of the
pump due to amplification of the signal) and the signal wave, and this polarization drives in turn the signal wave. The complex amplitude A of the signal wave satisfies the equation of motion

    Ȧ = κ A*,    (9.7)
where the effective coupling constant κ is proportional to the non-linear susceptibility of the medium and the complex amplitude of the pump wave. In the
following we will choose the phase of the signal wave such that κ will be positive. Equation (9.7) can be used to write down the equations of motion for the quadrature components x and p, which are (apart from a normalization factor) the real and the imaginary parts of A:

    ẋ = κ x,    (9.8)
    ṗ = −κ p.    (9.9)
Obviously they describe an exponential increase in x and an exponential decrease in p. The process of amplification will cause one quadrature component to be enhanced, whilst at the same time the other will be attenuated. This asymmetry is also transferred to the uncertainties of x and p, for which Equations (9.8) and (9.9) result in

    (Δx)²_t = e^(2κt) (Δx)²_0,    (9.10)
    (Δp)²_t = e^(−2κt) (Δp)²_0.    (9.11)
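A quick numerical sanity check of Equations (9.10) and (9.11) – a naive Euler integration with an arbitrarily chosen κ, starting from the coherent-state variances 1/2 – reproduces the exponential squeezing and keeps the uncertainty product at its minimum value:

```python
import math

kappa, dt, steps = 0.4, 1e-3, 2000   # arbitrary coupling and step size

x_var, p_var = 0.5, 0.5              # vacuum (coherent-state) variances
for _ in range(steps):
    # one Euler step of x -> (1 + kappa*dt) x and p -> (1 - kappa*dt) p,
    # applied to the variances (spreads scale with the amplitudes)
    x_var *= (1 + kappa * dt) ** 2
    p_var *= (1 - kappa * dt) ** 2

t = steps * dt
assert abs(x_var / (0.5 * math.exp(2 * kappa * t)) - 1) < 1e-2   # Eq. (9.10)
assert abs(p_var / (0.5 * math.exp(-2 * kappa * t)) - 1) < 1e-2  # Eq. (9.11)
assert abs(x_var * p_var - 0.25) < 1e-2   # minimum uncertainty preserved
```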
This behavior is exactly what we have been looking for, namely a dramatic decrease in one of the uncertainties. The increase of the other uncertainty is the price we have to pay. (That in our
treatment the p quadrature component is “squeezed” is the result of the special choice of the signal phase. If we changed it by π , the x component would be squeezed.) Since the equations of motion
are linear, the performed analysis is valid in the classical as well as in the quantum case, and hence, with the degenerate parametric amplifier, we have an instrument at hand which enables us to
prepare squeezed states from coherent (Glauber) states. The variances of the initial Glauber state (Δx)²_0 and (Δp)²_0 equal the vacuum value 1/2 and, according to Equation (9.11), one of the quadratures
will be more and more suppressed during the amplification process. The described squeezing experiment was performed successfully (Wu et al., 1986). The experiment did not start from a coherent but
from the vacuum state. The process, which is also called subharmonic generation, is initiated by the parametric fluorescence discussed in Section 6.7. The weak signal thus produced is then further
amplified. The resulting state is usually termed “squeezed vacuum.” The credit for generating and detecting squeezed light for the first time, however, goes to another American research group
(Slusher et al., 1985; Slusher and Yurke, 1986). They used a four wave mixing type interaction and sodium vapor as the non-linear medium. In the experiment, the reflection of a strong laser wave was
utilized to create a second, counter-propagating wave. These two waves generated in a resonator – also from the vacuum – two signal waves, differing slightly in
their frequencies. It was proved that the total field formed from these two waves – after their exit from the resonator – has the predicted squeezing properties. Before examining the detection
technique used more closely, let us comment on the theoretical analysis achieved so far. It follows from Equations (9.10) and (9.11) that the uncertainty product Δx Δp is time independent. Consequently, when Heisenberg's uncertainty relation is originally fulfilled with the equality sign, it remains so for ever. According to Pauli's proof mentioned in Section 9.1, the Schrödinger wave function of the field state is always of the form in Equation (9.6), and only the parameter Δx is time dependent according to Equation (9.10). This relation also makes it clear that strong squeezing – in our case, of the quadrature component p – is connected with a significant increase in energy: when Δp becomes very small, Δx must quite generally grow at least inversely since otherwise Heisenberg's
uncertainty relation, which is fundamental for quantum mechanics, would be violated. The energy of the field is proportional to the sum x² + p², and this means that the energy fluctuations (and
therefore the mean energy) will become bigger. In contrast to photon statistics, where the non-classical behavior is easy to observe for small intensities, the non-classical squeezing effect requires
high intensities to become well pronounced. A more detailed theoretical analysis shows in addition that the fluctuations are much larger than for thermal light. In other words, we are facing a
“super-bunching” of photons. This is particularly true for the abovementioned squeezed vacuum. The name given to this state is not a very appropriate one as it suggests the existence of a new type of
vacuum. We are not dealing here with a vacuum in the sense of a field state without photons! The squeezed vacuum is extraordinary also with respect to its photon distribution. Because the elementary
process of subharmonic generation is the conversion of a pump photon into two signal photons, only an even number of photons can be generated (under ideal experimental conditions). The photon number
distribution has a comb-like structure: it is zero for all odd photon numbers. It is clear that such a “pathological” state must be very fragile for even the smallest disturbances. The comb-like
structure will be completely destroyed, for example, by a single absorbing atom which interacts with the field during a time interval that is just long enough that it becomes excited with a
probability of 50%. This “fragility” is a general feature of “squeezed states” and makes their use for optical data communication rather illusory as we cannot avoid damping occurring in the optical
fiber. The advantage of such an application would be to imprint the signal onto the squeezed component, thus improving considerably the signal to noise ratio. Squeezed states might possibly assume
practical importance in high sensitivity interferometers. Their sensitivity with respect to mirror displacements could be increased by preventing the intrusion of vacuum fluctuations into the
unused input port which “contaminate” the interferometer. To this end, squeezed light (with a proper phase) might be coupled into the interferometer instead of the vacuum. The increased effort would
probably pay off in gravitational wave interferometry where a mirror displacement indicates the presence of a gravitational wave. Since inconceivably small displacements – by orders of magnitude
smaller than the proton radius – are to be measured, any procedure leading to an improvement of the signal to noise ratio without additional intensity increase is more than welcome.

9.3 Homodyne detection

Radio engineers developed a very efficient detection method that comprises the mixing of a (weak) signal with a wave generated by a local sender. This technique – depending on whether the
signal frequency coincides with that of the "local oscillator" or not, called homodyne or heterodyne measurement – also found an application in optical spectroscopy. A special type of optical
homodyne detection, balanced homodyne detection, proved itself to be an almost ideal quantum optical measurement procedure. It is especially well suited for the detection of squeezing effects, and
therefore we will examine it more closely. The first step is the optical mixing of the usually weak radiation to be analyzed (in the following called the signal) with an intense laser wave as the
local oscillator (Fig. 9.1). For the mixing we will use a beamsplitter with a high enough accuracy to achieve splitting in a 1:1 ratio. The outgoing waves impinge onto two separate detectors. The
measured signal is given by the difference between the two photocurrents. The theoretical analysis of this measurement method shows that the measured variable – apart from a factor proportional to
the laser field amplitude – is just one of the quadrature components of the field (see Section 9.1). It is assumed that an intense laser wave is used, allowing for a classical description. It is
especially important that we use a 50%:50% beamsplitter, as it must guarantee exact compensation of the coherent contributions of the laser field in the difference between the photocurrents, which
would otherwise be dominant. Which quadrature component will be measured? It is obvious that it will be determined by the relative phase Θ between the signal and the local oscillator. The theoretical treatment shows that we measure the x quadrature component when Θ equals zero, whereas for Θ = π/2 we measure the p component. We point out that experimentally the desired relative phase can be adjusted
without great difficulty. We start from a laser field and split it into two parts which are then used twofold: first (if necessary after a frequency transformation, usually frequency doubling with
the help of a non-linear crystal) for the signal generation (in a non-linear
Fig. 9.1. Homodyne measurement of quadrature components x and p, respectively. The difference of the two photocurrents delivered by the two detectors is proportional to x, and, with an inserted quarter wave plate, it is proportional to p.
optical process), and second as a local oscillator. There is a certain phase relation between the signal wave and the local oscillator, and the relative phase can be easily adjusted to the prescribed
value by varying the path difference. The phase fluctuations of the primary laser field obviously do not play a role. We point out that this procedure is the only feasible way of performing the
homodyne measurement. The other possibilities are (i) a signal with the property that the measured data (in the sense of distributions) do not depend on Θ, or (ii) a laser with extremely high
frequency stability to guarantee the (absolute) phase stability of the local oscillator during the whole time of the measurement (on an ensemble). Employing the balanced homodyne detection technique,
the measurement of x and p is simple. Once we have arranged the experimental setup for the measurement of the x component, it is sufficient to insert a λ/4 wave plate into the local oscillator path
to measure the p component. This is indeed an experimental “comfort”, almost without parallel; just think of quantum mechanics where measurements of the position and momentum each require a different
experimental setup. The capabilities of the balanced homodyne technique are not exhausted by this; we can, for example, adjust the value of Θ. What is measured then is a kind of
Fig. 9.2. Squeezed vacuum state detection. Relative signal from a spectrum analyzer is shown plotted against the phase Θ of the local oscillator. From Wu et al. (1986).
mixture between x and p, namely a new observable

    x_Θ = x cos Θ + p sin Θ

(see Section 15.6). If we measure the distributions w_Θ(x_Θ) for a complete set of variables x_Θ, where Θ changes stepwise – in the sense of an approach towards a continuous progression – between 0 and π,
we obtain, as discussed in detail in Section 10.3, essentially the complete quantum mechanical information about the respective system. Let us return to the experimental detection of the squeezing
effect. Based on the given arguments, we need to do the following: construct the measurement setup as shown in Fig. 9.1 and block the signal at the beginning. We cannot prevent the vacuum from
entering the setup, and under these circumstances we will measure the vacuum fluctuations in the form of fluctuations of a chosen quadrature component of the field. There is no preferred phase of the
electric field strength in the vacuum, and therefore the uncertainties indicated by a spectrum analyzer,1 i.e. the square roots of the square fluctuations, are independent of the relative phase of
the local oscillator. They mark the "zero level." When we send the squeezed light into the measurement apparatus, the measured signal varies as a function of Θ with period π between maxima rising above
the “zero level” and (compared with the maxima) less pronounced minima falling below it. The minimum is 1 Under ideal experimental conditions, the signal of a spectrum analyzer is proportional to the
square root of a
squared quadrature component x averaged over a certain time interval. When the mean value of x itself vanishes, as is the case for the vacuum as well as for the “squeezed vacuum,” the uncertainty x
will be directly indicated. The measured signal will actually deteriorate due to the detector inefficiency and, for amplification of the individual photocurrents, the amplification noise.
9.3 Homodyne detection
exactly what we have been looking for. Such a minimum indicates that the fluctuations of a certain quadrature component x – the value of does not play a role as it is related to the path difference
between the signal and the local oscillator, which is not measured – are smaller than those of the vacuum; i.e. we have “squeezed” light. This result was obtained in the pioneering work on squeezing
by Slusher et al. (1985). It was later confirmed by other researchers using methods of light generation other than the above-mentioned degenerate parametric amplification (Wu et al., 1986). A typical
measurement curve is shown in Fig. 9.2. Let us emphasize the non-classical character of squeezed light. Within a classical theory it is not possible to understand the falling below the “zero-level.”
From the classical point of view, this level is simply the shot noise of the detector, which is a consequence of the “grainy” character of the electrons. That the photocurrent fluctuates even for
constant incident light can be understood classically in the following way. The sensitive surface of the detector is formed from myriads of atoms, out of which, in a short time interval, only a few
are ionized and hence the process is governed by chance. Since the ionization probability – for a uniformly illuminated detector surface – is the same for all atoms, and since each individual process
is statistically independent of all the others, we find all the conditions for a Poissonian statistics to be satisfied. This implies that electrons released in an arbitrarily chosen time interval
follow this type of statistics. The result is white (frequency independent) noise of the photocurrent, called shot noise. From this follows the fact that the shot noise represents an absolute lower
limit: when we illuminate with light that is not amplitude stabilized, i.e. it exhibits intensity fluctuations, the noise can only increase and never decrease. It was later found that the balanced
homodyne measurement is also well suited to quantum mechanical measurements of the phase of light. Problems related to phase have been of interest since the early days of quantum mechanics, and hence
will be discussed in some detail in the following chapter.
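The phase dependence of the noise signal seen in Fig. 9.2 can be sketched numerically. For a squeezed vacuum, the variance of the rotated quadrature x_Θ oscillates with period π between e^{−2r} and e^{2r}, with the vacuum level normalized to unity; the squeeze parameter r below is an arbitrary illustrative choice, not a value from the experiments cited above.

```python
import numpy as np

r = 0.5                                  # assumed squeeze parameter (illustrative)
theta = np.linspace(0.0, 2.0 * np.pi, 721)
# variance of the rotated quadrature x_Theta for a squeezed vacuum,
# normalized so that the vacuum gives V = 1 for every Theta
V = np.exp(-2 * r) * np.cos(theta) ** 2 + np.exp(2 * r) * np.sin(theta) ** 2

print(V.min() < 1.0 < V.max())           # minima dip below the vacuum "zero level"
print(np.allclose(V[:360], V[360:720]))  # the signal is periodic with period pi
```

The dips below 1 are exactly the sub-vacuum minima of the measured curve; classically the curve could never fall below the shot-noise level.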
10 Measuring distribution functions
10.1 The quantum phase of light

The proper description of the phase of a quasimonochromatic light field, idealized as a single-mode state of the radiation field, is among the most delicate problems
in quantum theory.1 Generations of theoreticians have tried in vain to find a phase operator which would satisfy the quantum mechanical standard. The problem is the required hermiticity. Only by
means of a mathematical trick was it possible to construct a strictly Hermitian phase operator. The clue was to curtail artificially the “playground” of the phase operator; i.e. to limit the
dimension of the corresponding Hilbert space to a very large but finite value. If we study the history of the subject, we see that work by F. London (which is largely forgotten now) provided a
satisfactory approach to the phase problem. London (1926) introduced the so-called phase states, i.e. quantum mechanical states with sharp phase value ϕ, of the form

|ϕ⟩ = (1/√(2π)) Σ_{n=0}^{∞} e^{inϕ} |n⟩,   (10.1)
where |n⟩ are the states with sharp photon number n (see Sections 4.2 and 15.1). The states in Equation (10.1) account for the same internal relation between time and phase as is known from classical
theory. Classically we can find the phase of a monochromatic oscillation by observing the zero passages. The oscillation variable (displacement or amplitude of the electric field strength) depends
only on the combination ωt − ϕ, which implies that the time evolution is equivalent to a phase shift. The phase states in Equation (10.1) also have this property, because the time evolution of the states |n⟩ with sharp photon number – and hence with sharp energy nℏω – reduces to a multiplication by the phase factor exp(−inωt).

¹ To avoid misunderstandings, we emphasize that we are talking about the absolute phase; i.e. the phase with respect to an ideal reference wave, and not a relative phase determined by a path difference and measurable by interferometric arrangements.

The fact that the phase states are not normalized is not worrying. On the contrary: because we expect the phase to be
a continuous variable, the corresponding eigenstates cannot be normalizable – just think for comparison of a running plane wave which represents a momentum eigenstate. However, we should require
normalizability in the sense of Dirac’s delta function, since quite generally eigenvectors corresponding to different eigenvalues of a Hermitian operator should be mutually orthogonal. This is indeed
a problem because the states in Equation (10.1) do not satisfy this condition, with the consequence that they cannot be the eigenfunctions of a Hermitian operator, i.e. of a well behaved phase
operator. This problem is insurmountable. The phase seen as a quantum mechanical operator plays an extra role, and we might ask whether there is a special reason for this. The likely answer is that
the coupling between the radiation field and the atomic system takes place via the electric field strength in which the phase and the real amplitude become fused. As a consequence, it is difficult to
conclude anything about only the phase (or amplitude) from measured data. The introduction of phase states, as in Equation (10.1), is essentially satisfactory. They yield a decomposition of the unit
operator (and constitute a complete set of states). This allows us to interpret the squared absolute value of the scalar product of an arbitrary field state |ψ⟩ with a phase state |ϕ⟩ – the projection
of the field state onto the phase state – multiplied by dϕ as the probability of finding in a phase measurement a value within the interval ϕ . . . ϕ + dϕ. This gives us a clear “recipe” for the
calculation of phase distributions. Unfortunately, no one can tell us what such an ideal phase measurement should look like! Facing such an unsatisfactory theoretical situation, the pragmatist can
only try to turn the tables: instead of starting from a profound theoretical analysis, we need to produce a practicable phase measurement strategy and only afterwards try to find a proper theoretical
framework for it. This path towards an operational phase definition was taken by L. Mandel and coworkers, and we will follow their arguments in the following sections (Noh, Fougeres and Mandel, 1991, 1992).
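The non-orthogonality of the phase states, which blocks the construction of a Hermitian phase operator, is easy to verify numerically by truncating the sum in Equation (10.1) at a large photon number. The cutoff and the two phase values below are arbitrary choices of this sketch:

```python
import numpy as np

def phase_state(phi, nmax=200):
    # truncated London phase state, Eq. (10.1): c_n = e^{i n phi} / sqrt(2 pi)
    n = np.arange(nmax)
    return np.exp(1j * n * phi) / np.sqrt(2.0 * np.pi)

a = phase_state(0.0)
b = phase_state(1.0)
overlap = abs(np.vdot(a, b))

print(overlap > 1e-3)   # distinct phase states are clearly not orthogonal
```

The overlap stays finite as the cutoff grows (it is a geometric sum), which illustrates why the states cannot be the eigenstates of a well-behaved Hermitian operator.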
10.2 Realistic phase measurement

We start with a classical description and write the electric field strength of a running plane wave in the form

E(z, t) = E₀ cos(ωt − ϕ − kz).   (10.2)
Using the addition theorem for the cosine and comparing the result with Equations (9.2) and (9.3), we find a simple relation between the phase ϕ and the quadrature
components x and p:

cos ϕ = x / √(x² + p²),   sin ϕ = p / √(x² + p²).   (10.3)
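In code, Equations (10.3) amount to nothing more than converting a measured pair (x, p) to polar coordinates; a minimal check (the numerical values are made up):

```python
import numpy as np

x, p = 0.6, 0.8                    # one measured pair of quadratures (made up)
rho = np.hypot(x, p)               # amplitude sqrt(x^2 + p^2)
phi = np.arctan2(p, x)             # phase satisfying Eqs. (10.3)

print(np.isclose(np.cos(phi), x / rho))   # cos(phi) = x / sqrt(x^2 + p^2)
print(np.isclose(np.sin(phi), p / rho))   # sin(phi) = p / sqrt(x^2 + p^2)
```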
It is interesting that already within classical optics the measurements of two variables are required to find the phase. The experimental procedure will exploit a balanced beamsplitter which splits
the optical ray into two parts: on one will be measured the x component and on the other the p component (the balanced homodyne detection discussed in Section 9.3 is well suited for this task). We
may ask now whether this procedure is also practicable for a quantum mechanical phase measurement. A theoretician will have serious objections. The variables x and p, according to the principles of
quantum mechanics, cannot be precisely measured simultaneously because they are canonically conjugated and therefore have to satisfy Heisenberg’s uncertainty relation, Equation (9.5). Hence, the
theoretician declares that we are not measuring the true quadrature components x and p in the described experiment but modified variables, which contain, in addition to x and p, noise components
originating from the need to use a beamsplitter to be able to measure the two variables on the same system. The noise enters the system in the form of vacuum fluctuations via the unused port (see
Section 7.6, Fig. 7.8). We can describe the situation in the following way: we can measure x and p simultaneously, but at the expense of precision. This applies in general to any pair of conjugated
variables. What we really measure in the above experimental setup is not the “ideal” but a “noisy” phase. However, we can measure in this way at least something which comes close to the phase, and,
in the case of large amplitudes, when the role of vacuum fluctuations becomes negligible, even coincides with it. We can also formulate an "operational approach" to the phase: in the discussed experimental setup this results in the definition of a physical variable termed a phase rather than the phase. Let us now analyze the described (indirect) phase measurement from the quantum
mechanical point of view. Because a single measurement does not tell us very much – the phase might have, for example, the particular value 0.72314 – we are interested mainly in statistical
statements about the phase. We will have to perform many single measurements on an ensemble of identically “prepared” systems; for example, a sequence of ultrashort light pulses. It then seems
natural to calculate from the measured data averaged values of the type ⟨sin ϕ⟩, ⟨cos ϕ⟩, ⟨cos² ϕ⟩, ⟨sin² ϕ⟩, ⟨sin ϕ cos ϕ⟩, etc. To extract the full information contained in the measured data we would also need to calculate, in principle, all higher moments of the form ⟨sinᵏ ϕ cosˡ ϕ⟩, which is difficult to achieve practically. It is more advantageous to calculate an individual phase value ϕ from the
directly measured data x and p using Equations (10.3), thus (on repeating the procedure on many ensemble
members) determining a distribution, a so-called histogram, of the phase values. (Such a phase distribution allows us to calculate any moment ⟨sinᵏ ϕ cosˡ ϕ⟩.) By proceeding in this way, information
about the (real) amplitude and its correlation with the phase is lost. The most advantageous method of evaluating the measured data is to interpret the measured pairs x and p as points in a phase
space and then to determine their distribution – a “mountain” rising above the x, p plane. The phase space distribution w(x, p) contains all the information about the light field obtainable from the
described measurement device. When we are interested only in the phase, we just have to average over the amplitude. To this end, we rewrite the distribution function in polar coordinates ρ, ϕ, and
the phase distribution w(ϕ) is obtained using Equations (10.3) as the integral

w(ϕ) = ∫₀^∞ ρ dρ w(x = ρ cos ϕ, p = ρ sin ϕ).   (10.4)
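The evaluation procedure can be sketched as follows: simulate the measured pairs (x, p) as Gaussian samples around a coherent amplitude – each quadrature carrying an extra half unit of noise from the vacuum entering the unused beamsplitter port, which is an assumption of this toy model – convert each pair to a phase value, and build the histogram:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 3.0 + 0.0j                 # assumed coherent amplitude (illustrative)
n = 100_000                        # ensemble size

# simulated measurement outcomes: Gaussian with variance 1/2 per quadrature
# around the coherent amplitude (toy-model assumption)
x = rng.normal(alpha.real, np.sqrt(0.5), n)
p = rng.normal(alpha.imag, np.sqrt(0.5), n)

phi = np.arctan2(p, x)             # one phase value per ensemble member
w, edges = np.histogram(phi, bins=60, range=(-np.pi, np.pi), density=True)

print(abs(phi.mean()) < 0.05)      # the distribution is centered on arg(alpha) = 0
```

Keeping the full two-dimensional histogram of (x, p) instead of only phi is exactly the phase-space "mountain" w(x, p) described in the text.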
When we average over the phase, we find the amplitude distribution as the marginal distribution of w(x, p). Also, knowing w(x, p) allows us to calculate mean values of arbitrary variables dependent
on amplitude and phase, in particular of correlations between them. The adequate theoretical description of the measurement apparatus is to establish a general relationship between the phase space
distribution w(x, p) and the quantum mechanical state of the analyzed light field. Surprisingly, a very simple relation was found in the case of a strong local oscillator (this was not so in the
Mandel experiments (Noh et al., 1991, 1992), which made the theoretical analysis rather complicated): the directly measurable distribution function w(x, p) is obtained by projecting the field state |ψ⟩ onto a Glauber state |α⟩ (Freyberger, Vogel and Schleich, 1993; Leonhardt and Paul, 1993a); i.e. the following relation holds:

w(x, p) = (1/π) |⟨ψ|α⟩|²,   (10.5)

where we have to identify α = x + ip. The right-hand side is known as the Husimi or Q function, and has long been a very popular function among theoreticians. Usually it is introduced in such a way that in the expression |⟨ψ|α⟩|² the complex amplitude α is replaced by (x + ip)/√2. Equation (10.5) then takes the form

w(x, p) = 2Q(√2 x, √2 p).   (10.6)

Represented graphically, the Q-function
gives us a “global” impression of the respective state. In particular, the squeezing properties of light can be immediately recognized. However, the Q-function was mostly considered as a theoretical
construction. The above proof that it can be directly measured has considerably
increased its physical importance. Indeed, there have been indications that other realistic phase measurement methods will lead to the Q function too (for details see below), but this has remained up
to now only theory. It is easy to see that the transition from the quantum mechanical state |ψ⟩ to the Q-function is connected with information loss. This is in full agreement with the statement above that a simultaneous measurement of x and p must necessarily be inaccurate. The theoretical description indeed shows that the finer details of the quantum mechanical state |ψ⟩ are no longer in the
measured data. To realize this it is favorable to characterize the field state by the Wigner function rather than by the wave function (or the density matrix). First, let us say a few words about the
Wigner function or the Wigner distribution. The motivation for its introduction was the desire to find a quantum mechanical description similar to that in classical statistical physics. In
statistical physics our knowledge about the system is represented by a distribution function for position and momentum (imagine, for simplicity, a particle with a single degree of freedom). The
translation of this concept into quantum mechanics already seems hopeless due to the fact that the position (x) and the momentum ( p) are not simultaneously measurable. It is well known that the wave
function depends exclusively on either x or p and contains nevertheless all the information about the system. E. Wigner showed, however, that it is possible to define a formal quantum mechanical
analog to the classical distribution function. This distribution, W(x, p), bears his name and has the following properties. (a) It is real, but not necessarily positive; it can also become
negative. (b) The following two marginals,

w(x) = ∫_{−∞}^{∞} W(x, p) dp,   (10.7)

w(p) = ∫_{−∞}^{∞} W(x, p) dx,   (10.8)
are the exact quantum mechanical distributions of position and momentum. (c) The quantum mechanical expectation value of a function F(x̂, p̂) of the position and momentum operators can, when F is symmetrically ordered, be calculated as the classical mean value of the variable F(x, p), whereby the Wigner function plays the role of the classical weight function; i.e. the following relation holds:

⟨F(x̂, p̂)⟩ = ∫_{−∞}^{∞} ∫_{−∞}^{∞} F(x, p) W(x, p) dx dp.   (10.9)
(d) The Wigner function is normalized:

∫_{−∞}^{∞} ∫_{−∞}^{∞} W(x, p) dx dp = 1.   (10.10)

(e) It is calculated from the wave function ψ(x), or from the density operator ⟨x|ρ|x′⟩ of the state, according to the following prescription:

W(x, p) = (1/π) ∫_{−∞}^{∞} exp(2ipy) ψ(x − y) ψ*(x + y) dy   (10.11)

or

W(x, p) = (1/π) ∫_{−∞}^{∞} exp(2ipy) ⟨x − y|ρ|x + y⟩ dy.   (10.12)
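Prescription (10.11) can be checked numerically. For the one-photon state, whose wave function in the dimensionless units used here is ψ₁(x) = √2 π^{−1/4} x e^{−x²/2}, the integral gives W(0, 0) = −1/π, exhibiting the negativity announced in property (a):

```python
import numpy as np

def psi1(x):
    # one-photon (n = 1) wave function in dimensionless units
    return np.sqrt(2.0) / np.pi ** 0.25 * x * np.exp(-x ** 2 / 2)

def wigner(x, p, y):
    # Eq. (10.11) evaluated by a simple Riemann sum over the grid y
    integrand = np.exp(2j * p * y) * psi1(x - y) * np.conj(psi1(x + y))
    dy = y[1] - y[0]
    return (integrand.sum() * dy).real / np.pi

y = np.linspace(-10.0, 10.0, 4001)
w00 = wigner(0.0, 0.0, y)
print(np.isclose(w00, -1.0 / np.pi, atol=1e-6))   # negative at the origin
```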
Equations (10.11) and (10.12), which are simply Fourier transformations, are easily inverted, and so it is evident that the Wigner function contains the full quantum mechanical information about the
respective state. This makes it clear that a description similar to the classical one is possible; however, there are two important limitations: due to property (a) the Wigner function cannot be
considered as a true probability distribution (it is more appropriate to use the term quasiprobability distribution); property (c) makes life hard for us because we have to transform the operator
function F(x̂, p̂) into a symmetrically ordered form before we can apply Equation (10.9). Because the square of a symmetrically ordered function is not, in general, a symmetrically ordered
function, the calculation of higher moments – exceptions are (due to property (b)) only moments of x or p – is much more complicated than in classical statistics. The additional terms appearing due
to the symmetrization express quantum mechanical corrections. Despite these “defects,” which remind us that the quantum mechanical description cannot be reduced to a classical one, the Wigner
function is a very useful construction for theoretical analysis. Because it is real, it can easily be graphically displayed, and we can get a visual impression of the respective quantum state whereby
relevant physical properties, like, for example, squeezing, are immediately recognized as characteristic asymmetries. For our purposes, the relation between the Q function and the Wigner function is
of particular interest. It turns out that the Q function can be obtained from the Wigner function by a convolution with a Gaussian function:

Q(x, p) = (1/π) ∫_{−∞}^{∞} ∫_{−∞}^{∞} W(x′, p′) exp{−[(x − x′)² + (p − p′)²]} dx′ dp′.   (10.13)
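As a numerical sanity check of Equation (10.13), take the vacuum, whose Wigner function in these conventions is W₀(x, p) = (1/π) e^{−(x²+p²)}; the convolution then yields Q₀(0, 0) = 1/(2π):

```python
import numpy as np

xs = np.linspace(-6.0, 6.0, 241)
dx = xs[1] - xs[0]
X, P = np.meshgrid(xs, xs)
W0 = np.exp(-(X ** 2 + P ** 2)) / np.pi   # vacuum Wigner function

# Eq. (10.13) at the origin: Q(0,0) = (1/pi) * sum W(x',p') e^{-(x'^2+p'^2)} dx' dp'
Q00 = (W0 * np.exp(-(X ** 2 + P ** 2))).sum() * dx * dx / np.pi

print(np.isclose(Q00, 1.0 / (2.0 * np.pi), atol=1e-6))
```

The Gaussian smoothing doubles the variance of each quadrature, which is exactly the extra half unit of vacuum noise discussed in the text.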
This operation is simply a smoothing of the Wigner function, and therefore it is accompanied by a loss in the finer details of its structure (for an example, see Fig. 10.1). We have seen that the
Mandel experiments lead to the Q function, and we now have a simple picture of how the unwanted noise entering the apparatus through the unused port of the beamsplitter distorts the measured data.
Quantitatively, the influence of the noise can be clearly described: when we calculate the uncertainty of x and p using the Q function as a distribution function, we obtain the uncertainty relation
Δx Δp ≥ 1.   (10.14)
The right-hand side of the relation is twice the value for Heisenberg’s uncertainty relation, Equation (9.5). The analyzed realistic measurement apparatus contributes the same amount of noise to the
uncertainty as the “quantum noise.” The phase distribution obtained from the measured distribution function w(x, p) or the Q function according to Equation (10.4) is, as a rule, always broader than
the "ideal" phase distribution w_id(ϕ), which can be calculated using the phase states |ϕ⟩ (see Equation (10.1)) for a given light field state |ψ⟩ according to the simple formula w_id(ϕ) = |⟨ϕ|ψ⟩|², but
for which up to now a measurement prescription is not known. Finally, let us point out that the smoothing concept is also applicable to another noise source. This source is the additional loss of
measurement precision due to photodetector inefficiencies. A non-ideal detector can be modeled as an ideal detector with an additional absorber, say a partially transparent mirror, placed in front of
it. The latter again allows noise to enter, and it comes as no surprise that this additional disturbance can be described by further smoothing of the Q function (with a Gaussian function whose width
is determined by the detection efficiency). Because performing two convolutions with Gaussian functions one after the other is equivalent to a convolution with a single Gaussian function, we come to
the following conclusion. Taking into account the deviation of the detection efficiency η from unity, the measurement is not described by the Q function but by the stronger smoothed function – the
so-called s-parametrized quasiprobability distribution

W(x, p; s) = −(1/πs) ∫_{−∞}^{∞} ∫_{−∞}^{∞} W(x′, p′) exp{[(x − x′)² + (p − p′)²]/s} dx′ dp′,   (10.15)

where the parameter s is related to the detection efficiency as follows: s = −(2 − η)/η (Leonhardt and Paul, 1993b). (Note that the Wigner function corresponds to s = 0 and the Q function corresponds to s = −1.) The measured
Fig. 10.1. (a) Wigner function of a state with exactly four photons and (b) the corresponding Q function obtained by smoothing it.
distribution function w(x, p) is given by

w(x, p) = (2/η) W(√2 η^{−1/2} x, √2 η^{−1/2} p; −(2 − η)/η).   (10.16)
The stronger smoothing in Equation (10.15) leads to an additional loss of detailed information about the field state. In particular, the experimentally determined phase distribution is broadened.
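The dependence of the smoothing on the detection efficiency is easy to tabulate: ideal detectors (η = 1) give s = −1, i.e. the Q function itself, and lower efficiency pushes s to more negative values, i.e. stronger smoothing:

```python
def s_parameter(eta):
    # s = -(2 - eta)/eta relates the detection efficiency to the smoothing parameter
    return -(2.0 - eta) / eta

print(s_parameter(1.0))                       # -1.0: ideal detectors, Q function
print(s_parameter(0.5) < s_parameter(1.0))    # lower efficiency, stronger smoothing
```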
After the analysis of the Mandel scheme, let us comment briefly on earlier quantum mechanical proposals for phase measurement. The oldest relied on a simple physical idea (Bandilla and Paul, 1969;
Paul, 1974). Because it is impossible to measure directly the phase on a microscopic field, we should amplify the field to macroscopic intensities, introducing as little noise as possible, with the
help of a laser amplifier. The phase measurement (and an amplitude measurement if desired) on the amplified signal can be “easily” performed using classical methods. It is obvious that these methods
also suffer from reduced measurement precision due to the basically unavoidable amplifier noise.
Another proposal from Shapiro and Wagner (1984) uses a heterodyne measurement. The signal is optically mixed with a strong coherent reference wave frequency shifted by ν and is detected by a
photodetector. The photocurrent contains an alternating current component oscillating at the frequency ν. The amplitudes of the components oscillating as cos(2πνt) and sin(2πνt) are measured on this
current, using a quadrature demodulator. These amplitudes prove to be proportional to the quadratures we denoted above by x and p. These quantities are also measured as noisy variables, and the
evaluation of the data is the same as for the Mandel scheme. But through which door does the noise now enter? The answer is: the beamsplitter mixes the signal wave not only with the strong reference
wave but also, in principle, with all other vacuum oscillations. When the frequency of the reference wave is ν0 and that of the signal is ν0 + ν, we also have to take into consideration the wave
oscillating at the "mirror" frequency ν0 − ν because this will also contribute (thanks to the mixing with the reference wave) to the photocurrent component oscillating at ν and therefore will play
the role of the unwanted noise source. When we analyze theoretically both realistic phase measurement strategies, we arrive at the rather surprising result that they lead again to the Q function. (In
the case of amplification, the measurement results in an “inflated” Q function that originates from the Q function for the initial signal simply by scaling.) In summary, we can say that all three
methods are physically completely equivalent, and therefore the experimentally obtained phase distributions are identical. This is by no means obvious. A qualitative agreement in the sense that the
measurement result is noisy is clear. However, the fact that different noise sources result in quantitatively identical effects is surprising. A closer inspection reveals that the noise sources, when
treated formally, show a close similarity: the corresponding fluctuation operators (Langevin forces) appearing in the quantum mechanical equations of motion have the property that they guarantee the
validity of the commutation relations for the creation and annihilation operators of the field (see Section 15.1) after the interaction, and thus make the quantum mechanical description consistent.
We can state that the known schemes of phase measurement can be cast into a single universal scheme: generally a simultaneous measurement of two canonically conjugated variables (such as position and
momentum) is performed. In this way, we measure directly (using ideal detectors) the Q function of the radiation field, and the phase distribution is obtained by averaging over the field amplitude.
The price to pay is a decreased measurement precision compared with the “ideal measurement.” The precision decreases further when measurements are performed using non-ideal detectors. The influence
of detrimental low detection efficiency is negligible in the amplification scheme – we measure on a macroscopic
object – and from a practical point of view we should give preference to this method.

10.3 State reconstruction from measured data

We demonstrated in Section 10.2 that apparatus designed for phase
measurement can be used for measuring directly certain quasiprobability distributions – the Q function or a smoothed Q function when inefficient detectors are employed. The question arises as to
whether with this we already have the complete information about the quantum mechanical state of the field to hand. The theoretician usually shows no hesitation in replying “yes.” Indeed, as can be
shown rather generally, this answer is correct; however, a very important assumption must be made. The respective quasiprobability distribution, for example the Q function, must be known with
absolute precision; i.e. it must be given to us as a mathematical function. In principle, we can reverse any convolution when the smoothing function is precisely known. The smoothing of the Wigner
function, Equation (10.13), does not eliminate the finer details of it completely; they are only strongly suppressed. The practical problem is that the measurement must be performed with extreme
precision, and hence enormous experimental efforts must be made to reconstruct the Wigner function (from the experimentally determined probability distribution), thus gaining access to the full
information about the quantum mechanical state. We have to conclude that practical reasons are the source of irreversible information loss during the process of measurement. The root of this evil was
discussed in Section 10.2: it is the additional noise that makes the simultaneous measurement of canonically conjugated variables possible. We might ask whether there is no other way to extract the
full information about the system from measured data. Why do we have to perform simultaneous measurements? Is it not sufficient to measure different variables separately on parts of the ensemble of
identically prepared systems? The unwanted penetration of vacuum fluctuations into the measurement apparatus would, in any case, be prevented. It was realized early on that it is possible to
reconstruct – though not always uniquely – the wave function from (separately measured) probability distributions for position and momentum; however, an implementable experimental scheme is still
lacking. It should also be pointed out that the same problem is known to appear in classical optics, namely in (optical and electron) microscopy. The role of the Schrödinger wave function is taken
by the distribution of the complex classical field amplitude A in the object and the image plane, respectively. An intensity measurement immediately gives the distribution of |A|2 ; however, the
information about the field phase is missing. Its retrieval is the real physical problem. To this end, a further independent measurement is required. Such a measurement is the
intensity measurement in the focal plane of the objective (exit lens). It is known that the Fourier transform of the field in the object plane is formed there, which corresponds to the change from
the position to the momentum representation in quantum theory. Let us concentrate on quantum optics and enquire about a practicable method of state reconstruction from measured data. For this
purpose, we can exploit the amazing measurement possibilities offered by the homodyne technique. It was shown in Section 9.3 that, using this method, we can measure not only two chosen quadrature
components of the field – the analogs of the position and the momentum – but a whole set of independent observables x_Θ (see Equation (9.12)). We emphasize that we are not dealing with simultaneous measurements; rather, we measure each variable on a part of the ensemble and determine its distribution. Indeed, a set of such distribution functions w_Θ(x_Θ), with Θ varying in sufficiently small steps between 0 and π, contains the complete information about the quantum mechanical state of the system. This is generally valid; i.e. it applies not only to pure states but also to mixtures described by density matrices. It was shown by K. Vogel and H. Risken (1989) in a seminal paper that the Wigner function can be reconstructed from the distribution functions w_Θ(x_Θ) via an integral transformation
(see also Leonhardt and Paul (1995)). It is interesting to note that this transformation – it is the inverse Radon transformation – has been known for a long time not only in mathematics; it is also
the basis of medical computer tomography. Hence, the quantum mechanical reconstruction problem is, from the mathematical point of view, identical to the computer generation of an image of a body part
(in the sense of an absorption profile) from a series of measured data which have been obtained by measuring the X-ray absorption of the object from different directions. The discussed quantum
mechanical measurements – now named optical homodyne tomography – have recently been realized experimentally (Smithey et al., 1993). The Wigner functions of important quantum mechanical states of the
light field, such as the Glauber state and the squeezed vacuum, were successfully reconstructed from measured data. As mentioned in Section 10.2, we obtain the density matrix from the Wigner function
by a Fourier transformation, and because of this we can calculate the quantum mechanical expectation value for any desired variable and its probability distribution and declare the results as
(indirectly) measured. In particular, “ideal” phase distributions can be so determined “experimentally.” Such a procedure is in contrast to the usual “measurement philosophy,” which relies on the
measurement of one or several observables, thus determining chosen experimental characteristics of the respective state, such as, for example, the photon statistics. Optical homodyne tomography,
however, allows us to form a sort of holistic view on the quantum mechanical state. Having determined the Wigner
function, we have everything at hand that is possible to know. The practicalities of this should not be underestimated: while the conventional method requires different measurement devices for
different observables, optical homodyne tomography uses the same setup (in balanced homodyne tomography one needs only to change the phase of the local oscillator). In addition, optical mixing with a
strong local oscillator allows us to work at considerably higher intensities, and hence we may use avalanche photodiodes (distinguished by a high detection sensitivity) for detection. It may be hoped
that typically non-classical features, such as the comb-like structure of the photon distribution of the “squeezed vacuum,” can be experimentally verified in this way, something that is not possible
with conventional techniques. Finally, let us comment on the latest developments related to quantum state reconstruction. It was proven theoretically, and then confirmed through extensive numerical
simulations of measurements, that the density matrix of a quantum state (most favorably with respect to states with sharp photon numbers, i.e. in the Fock basis) can be reconstructed directly from
the distribution functions w_Θ(x_Θ) (see Leonhardt, 1997). The detour via the Wigner function was proven unnecessary, and we even improve the precision; i.e. we can extract finer details from the
measured data. Also, it can be shown that less efficient detectors could be tolerated: it is possible, in principle, to reconstruct the density matrix, provided the detector efficiency is larger than
1/2, but then we need a considerably increased measurement precision, which implies an increase in the number of measured data by orders of magnitude. This is in agreement with the statement made
above that, in the process of smoothing the Wigner function – optical homodyne tomography with inefficient photodetectors enables us to reconstruct a smoothed Wigner function instead of the ideal one
– the finer details are not lost irretrievably; it just requires much more effort to find the “truth.”
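The remark that the finer details survive the smoothing, but become far more expensive to recover, is the generic behaviour of deconvolution in the presence of noise. A toy illustration (this is only a sketch of the noise-amplification effect with made-up parameters, not the actual tomographic reconstruction):

```python
import numpy as np

# A "fine-structured" distribution is smoothed by a Gaussian
# (mimicking inefficient detection) and then deconvolved again.
n, dx = 256, 0.05
x = (np.arange(n) - n // 2) * dx
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)

# True distribution: two narrow peaks (stand-in for comb-like structure).
p = np.exp(-((x - 0.4) / 0.08) ** 2) + np.exp(-((x + 0.4) / 0.08) ** 2)

sigma = 0.1                                  # smoothing width
G = np.exp(-0.5 * (k * sigma) ** 2)          # Gaussian transfer function

smoothed = np.fft.ifft(np.fft.fft(p) * G).real

# Noise-free data: dividing out the transfer function recovers the
# fine structure essentially exactly.
recovered = np.fft.ifft(np.fft.fft(smoothed) / G).real
print(np.max(np.abs(recovered - p)))         # tiny: structure recovered

# Tiny measurement noise: the same division amplifies it enormously,
# so far more data (averaging) would be needed to reach the "truth".
rng = np.random.default_rng(1)
noisy = smoothed + 1e-6 * rng.standard_normal(n)
recovered_noisy = np.fft.ifft(np.fft.fft(noisy) / G).real
print(np.max(np.abs(recovered_noisy - p)))   # large: noise dominates
```

The information is still present in the smoothed data; it is only buried in Fourier components that the smoothing has suppressed, which is why the required measurement precision grows so steeply.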
11 Optical Einstein–Podolsky–Rosen experiments
11.1 Polarization entangled photon pairs

Throughout his life, Albert Einstein was never reconciled to quantum theory being an essentially indeterministic description of natural processes, even though
he himself contributed fundamental ideas to its development. “God does not play dice” was his inner conviction. In his opinion, quantum theory was only makeshift. His doubts about the completeness of
the quantum mechanical description were expressed concisely in a paper published jointly with Podolsky and Rosen (Einstein, Podolsky and Rosen, 1935). This paper analyzes a sophisticated Gedanken
experiment, now famous as the Einstein–Podolsky–Rosen paradox, which has excited theoreticians ever since. The Gedanken experiment was recently realized in a laboratory. The analyzed objects are
photon pairs1 – and this is what has motivated us to dedicate a chapter to this problem which has bearing upon the foundations of quantum mechanics. The photon pairs are formed by two photons
generated in sequence (in a so-called cascade transition, as shown in Fig. 11.1). Due to the validity of the angular momentum conservation law (discussed in Section 6.9) for the elementary emission
process, the two photons exhibit specifically quantum mechanical correlations, which are incompatible with the classical reality concept, as will be discussed in detail below. How do the correlations
appear in detail? Let us assume the initial state of the atom to be a state with angular momentum (spin) J = 0, the intermediate state to have angular momentum J = 1, and the final state to have
again J = 0. The angular momentum conservation law implies for the system composed of the atom and the photons that the photon pairs must be in a state with total angular momentum zero. To satisfy
this the two photons must “adjust” to one another, and we will analyze in detail what this means for their polarization states.

1 In Einstein et al. (1935), a system composed of two material particles prepared in a special state was considered.
Fig. 11.1. Two-photon cascade transition. J = atomic angular momentum; m = magnetic quantum number.
The spatial separation of the sub-systems (in our case that of the photons) plays an important role in the Einstein–Podolsky–Rosen experiment. So, let us analyze such processes where the two photons
have been emitted in opposite directions (we denote one direction as the x axis). Because the angular momentum J = 1 allows three orientations with respect to an arbitrarily chosen reference axis,
the intermediate level of the cascade consists of three sub-levels with magnetic quantum numbers m = −1, 0 and + 1. We choose the quantization axis orthogonal to the observation axis x, and denote it
as the z axis. Corresponding to the three sub-levels, there are three transitions. The quantum mechanical calculation shows that the transition to the sub-level with m = 0 is associated with
oscillations of the emitting electron in the z direction. On observation, we find that the radiation emitted in the x direction is linearly polarized in the z direction, as expected from the emission
of a classical dipole. The transitions to the intermediate levels with m = +1 and m = −1 are each associated with a circular motion of the electron (one clockwise and the other anticlockwise) in the
x, y plane. The part of the wave traveling in the x direction is therefore linearly polarized in the y direction. The same motions of the electron take place in the transitions between the
intermediate state and the final state of the cascade: when the transition starts from the level with m = 0, we are dealing again with oscillations in the z direction. The other two transitions are
associated with circular motion in the x, y plane. The cascade process may take place via three channels, as shown in Table 11.1. These three channels should not be understood in the sense of
alternative routes. Only after a suitable measurement has been made can it be determined which channel was actually used.

Table 11.1. Channels of the cascade transition J = 0 → 1 → 0.

Channel   Intermediate level (magnetic quantum number)   First photon   Second photon
1         m = 0                                          z              z
2         m = +1                                         y              y
3         m = −1                                         y              y

For example, if the first photon were found to be z polarized, only the first channel is compatible with this result. We can see from Table 11.1 that the second photon will be polarized in the same direction as the first. We could choose the z
direction arbitrarily, apart from the constraint that it must be orthogonal to the observation direction. We can thus make the following general prediction. When a photon falls onto a detector with a
polarizer in front of it and is detected, we can be sure that the other unobserved photon propagating in the opposite direction is polarized in the transmission direction of the polarizer. Using a
polarizing prism instead of a polarization filter, we can measure the polarization in two different directions. The prism splits the beam into two mutually orthogonally linearly polarized components
which are, in addition, spatially separated. Performing a measurement on each of the partial beams with a separate detector for one incoming photon always results in one of the detectors responding
and indicating one of the polarization directions (we assume ideal experimental conditions). This does not imply, however, that the detected photon was polarized in this way before it interacted with
the measurement apparatus – rather the photon was transferred in the normal way into this state in the sense of a “reduction of a wave packet” by the measurement. We then know, based on the above
information, that the second photon is in the same polarization state, and we do not need to perform another measurement. The strong correlation between the polarization directions of the two photons
listed in Table 11.1 implies that the spin part |ψ⟩tot of the quantum mechanical wave function of the emitted field is a superposition of the states |y⟩₁|y⟩₂ and |z⟩₁|z⟩₂, where the state |y⟩₁ describes the first photon linearly polarized in the y direction, etc. The relation between the superposition coefficients is determined by the requirement that the total wave function |ψ⟩tot is the
eigenfunction corresponding to the zero eigenvalue of the spin projection onto the propagation axis of the photons. (As we explained previously, in the cascade transition the total spin of
the two photons – as a consequence of the angular momentum conservation law – must be zero, and this holds true in particular for the mentioned component.) The total wave function thus reads

|ψ⟩tot = (1/√2) (|y⟩₁|y⟩₂ + |z⟩₁|z⟩₂).   (11.1)
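The essential properties of this state are easy to verify numerically. The following sketch (the vector conventions and analyzer angles are illustrative choices) projects photon 1 onto an arbitrary linear polarization direction and checks that photon 2 is then found polarized in the same direction, each outcome occurring with probability 1/2:

```python
import numpy as np

# Polarization basis for one photon: |y> = (1, 0), |z> = (0, 1).
y = np.array([1.0, 0.0])
z = np.array([0.0, 1.0])

# Entangled two-photon state (|y>|y> + |z>|z>) / sqrt(2), stored as a
# 2x2 array Psi[i, j] = amplitude for (photon 1 = i, photon 2 = j).
Psi = (np.outer(y, y) + np.outer(z, z)) / np.sqrt(2.0)

for theta in (0.0, 0.3, 1.1):                      # arbitrary analyzer angles
    e = np.array([np.cos(theta), np.sin(theta)])   # transmission direction

    # "Reduction of the wave packet": photon 1 is found polarized along e.
    chi = e @ Psi                  # unnormalized state of photon 2
    prob = chi @ chi               # probability of this outcome
    chi = chi / np.sqrt(prob)

    # Photon 2 ends up polarized along the *same* direction e,
    # and the outcome occurs with probability 1/2 for every theta.
    assert np.isclose(prob, 0.5)
    assert np.isclose(abs(chi @ e), 1.0)

print("photon 2 always follows the analyzer direction chosen for photon 1")
```

That the check succeeds for every angle reflects the rotational invariance of the state: the z direction in the formula could have been chosen arbitrarily in the plane orthogonal to the propagation axis.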
This is again a representative example of a quantum mechanically entangled state! Linearly polarized light can also be viewed as a superposition of circularly polarized light, and so Equation (11.1) can be rewritten in the basis of circularly polarized states |+⟩ and |−⟩. Referring the sense of rotation (helicity) of the two photons to the same direction, say the x direction, we find the alternative representation

|ψ⟩tot = (1/√2) (|+⟩₁|−⟩₂ + |−⟩₁|+⟩₂),   (11.2)
and it is obvious that the spin in the x direction vanishes because for both terms the rotations of the field strength vectors compensate. It also follows from Equation (11.2) that when
circular polarization detectors are used we find strong correlations: when one observer detects a left handed circularly polarized photon, the other observes the corresponding photon to be right
handed circularly polarized, and vice versa. (Circularly polarized light can be measured with the help of the apparatus used for the detection of linearly polarized light with a quarter-wave plate
placed in front of the polarizing prism.) A discovery of great experimental importance was made in the mid 1990s. It was found that parametric fluorescence (Section 6.7) can be used for a very
efficient generation of polarization entangled photon pairs. The interaction must be of type II because only in this case do two orthogonal polarization directions come into play. In the degenerate
case (the signal and the idler wave frequency coincide) in a type I interaction, the propagation directions of the two photons lie on the same cone, while in a type II interaction they are on
separate cones whose axes are tilted by an angle ±θ with respect to the propagation direction of the pump wave. The polarization directions for the two cones are mutually orthogonal. Using a beta
barium borate (BBO) crystal, and properly choosing the incidence angle of the pump beam with respect to the crystal optical axes (Kwiat et al., 1995), the cones intersect along two straight lines
(Fig. 11.2). A photon propagating along such a line does not know to which cone it belongs, and therefore its polarization state is quantum mechanically uncertain. The other photon propagates along
the other line and faces the same dilemma. We expect that the property of the two photons having orthogonal polarizations will not be
Fig. 11.2. Type II parametric fluorescence. The propagation directions of the two emitted photons each lie on a separate cone. P = pump wave; C = nonlinear crystal; o = ordinary (vertical)
polarization; e = extraordinary (horizontal) polarization.
lost under such special conditions. With respect to their polarization properties, the photons will be in an entangled state

|ψ⟩tot = (1/√2) (|x⟩₁|y⟩₂ + e^(iα) |y⟩₁|x⟩₂).   (11.3)
The subscripts 1 and 2 refer to the propagation directions, and x (y) is the polarization direction of the ordinary (extraordinary) beam. The phase α can be set to 0 or π with the help of a retarder
(phase shifter from a birefringent material). The polarization direction can be, in addition, rotated by 45◦ by positioning a half-wave plate in one of the beams. In this way, any of the following
four entangled states can be generated:

|Ψ±⟩ = (1/√2) (|x⟩₁|y⟩₂ ± |y⟩₁|x⟩₂),   (11.4)

|Φ±⟩ = (1/√2) (|x⟩₁|x⟩₂ ± |y⟩₁|y⟩₂).   (11.5)
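A quick numerical check that these four states are orthonormal, and hence form a complete basis of the four-dimensional two-photon polarization space (a sketch with illustrative vector conventions):

```python
import numpy as np

x = np.array([1.0, 0.0])      # |x> polarization
y = np.array([0.0, 1.0])      # |y> polarization

s2 = np.sqrt(2.0)
bell = np.array([
    (np.kron(x, y) + np.kron(y, x)) / s2,   # |Psi+>
    (np.kron(x, y) - np.kron(y, x)) / s2,   # |Psi->
    (np.kron(x, x) + np.kron(y, y)) / s2,   # |Phi+>
    (np.kron(x, x) - np.kron(y, y)) / s2,   # |Phi->
])

# Orthonormal: the Gram matrix is the identity. Four orthonormal vectors
# in a four-dimensional space form a complete basis.
gram = bell @ bell.T
assert np.allclose(gram, np.eye(4))

# Any product state, e.g. |x>|y>, decomposes into Bell states:
coeffs = bell @ np.kron(x, y)
assert np.allclose(coeffs, [1 / s2, 1 / s2, 0.0, 0.0])
print("the four Bell states form a complete orthonormal basis")
```

The decomposition of the product state |x⟩₁|y⟩₂ into equal parts of |Ψ+⟩ and |Ψ−⟩ is exactly the kind of expansion used later in quantum teleportation.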
These form a complete basis of the Hilbert space of the polarization states of two photons. The states in Equations (11.4) and (11.5) are known as Bell states, and they play an important role in the
quantum teleportation of an (arbitrary) polarization state (see Section 13.1). The great advantage of this source of polarization entangled photon pairs is the fixed geometry of the two photon beams.
The two photons can be easily selected using two pin holes (in principle, one would be enough as the photons are also strongly correlated in their propagation directions), or even better they can
each be coupled directly into a glass fiber. The two-photon cascade does not have this property. When one photon is detected, the other photon will usually propagate anywhere other than towards the
other detector. Therefore, parametric fluorescence
of type II, as a generation mechanism of polarization correlated photons, is by far superior to cascade emission.

11.2 The Einstein–Podolsky–Rosen paradox

The quantum mechanical predictions based on
the entanglement of the wave function – and this is the essence of the Einstein–Podolsky–Rosen paradox – are not compatible with the classical concept of reality. Indeed, the entangled states in
Equations (11.1) to (11.5) must be interpreted in such a way that it is uncertain in principle what the polarization state of the photons is. It is of particular importance that the spatial
separation of the photons can be made arbitrarily large. This ensures that any measurement performed on the first photon cannot physically affect the second photon in any way. According to the
special theory of relativity, an action can travel at most with light speed; in the special case of the two-photon cascade, where the two photons travel in opposite directions, the action would chase
the second photon without any chance of reaching it. On the other hand, quantum mechanics assumes that the reduction is an instantaneous process. Then we can argue along with Einstein, Podolsky and
Rosen in the following way. If a measurement is performed on the first photon, which indicates linear polarization, the other photon is, at that moment, definitely in a state of identical
polarization (for the case of the wave function in Equation (11.1)). Because, as already discussed, the measurement cannot in any way influence the second photon, its polarization state could not
have been changed, i.e. it had to be polarized in this way from the beginning. However, we could have chosen to measure circular instead of linear polarization. In this case, we would come to the
conclusion that – depending on the result of the measurement – the second photon is either left or right handed circularly polarized, and must have already been so before the measurement. A single
photon cannot be simultaneously linearly and circularly polarized, and we have arrived at the Einstein–Podolsky–Rosen paradox. What can we say now? We must state that there is no experimental
contradiction. We are dealing with an unreal conditional clause of the form “If we had used another measurement apparatus, then we would have . . . .” Such a statement is not experimentally
verifiable. We can in quantum theory – on a single system – perform only one type of measurement, and whatever another, unperformed, measurement would have yielded will remain speculation. In
addition, we must realize that predictions of the form “This photon is linearly polarized” cannot be experimentally verified. We force the photon, through the choice of the measurement apparatus, to
unveil itself as linearly or circularly polarized; i.e. we prescribe to the photon the type of the measured polarization state. The only thing that is left to the photon is the
“decision” between the two orthogonal polarization directions or between left and right handed circular polarization. The above paradoxical statement “A single photon should be simultaneously
linearly and circularly polarized” can be understood simply as that any photon can arbitrarily be detected as linearly or circularly polarized. We have thus arrived at the root of the paradox. It
does not make sense to ascribe definite properties (such as polarization) in the sense of the classical reality concept to a single microscopic system (for example, a photon). In other words, it is
the non-objectifiability of the quantum mechanical description of nature which causes headaches; we are reminded once again that quantum theory is intrinsically a statistical theory which, in
principle, does not allow a detailed description of single systems. It is difficult to avoid the impression that the quantum mechanical description is incomplete. A natural question arises as to
whether the theory could be extended to remedy this flaw. Various scientists have hoped to reach this goal by introducing “hidden” (not accessible by measurement) variables. The impossibility of such
a dream was demonstrated rigorously in the Einstein–Podolsky–Rosen experiment. Owing to the importance of this point for the foundations of quantum mechanics, we will explain it in detail in Section 11.3.

11.3 Hidden variables theories

It is tempting to believe that uncertainty, which plays a central role in the quantum mechanical description of natural processes, is the consequence of our
ignorance of very “fine” parameters. Knowledge of such parameters could allow us to predict, in individual cases, the results of arbitrary measurements, for example, at which moment a single
radioactive atomic nucleus will decay. This knowledge lies at the heart of a (deterministic) hidden variables theory. Even though we cannot hope to ever obtain precise knowledge about the hidden
variables, it is an interesting question for the theoreticians whether it is possible to believe in the existence of such a theory without penalties, i.e. without contradicting the predictions of
standard quantum theory. In the positive case the theory would provide a deterministic “foundation” for quantum mechanics. Surprisingly, Bell (1964) could prove, using the example of the
Einstein–Podolsky–Rosen experiment, that this is not so. Certain consequences of any hidden variables theory are not compatible with quantum mechanical predictions. Let us explain the problem in some
detail using the example of a realistic experiment, namely the measurement of the polarization properties of photon pairs generated in a cascade process – Bell himself studied Bohm’s version of the
Einstein–Podolsky–Rosen experiment involving two spin 1/2 particles prepared in the appropriate state. (Details can be found in the original paper by
Fig. 11.3. Experimental setup for the measurement of coincidences on cascade two-photon transitions. A = atom; D = detector; P = polarizer; the arrows indicate the directions of transmittance of the
polarizers. After Clauser et al. (1969).
Clauser et al. (1969) and also in the reviews by Clauser and Shimony (1978) and Paul (1980).) Bell had the lucky idea to exploit the fact that we have a free parameter at our disposal in the
experiment. We choose the experimental arrangement as shown in Fig. 11.3. The two photons propagating in opposite directions are detected by two detectors with polarizers placed in front of each of
them. We can set the orientations of the polarizers as we like, and, since the problem is rotationally symmetric, only the relative orientation of the transmission directions of the polarizers
matters; hence, the angle Θ between the two orientations is, in this case, the above mentioned free parameter. The measured quantity is the coincidence counting rate C(Θ) of the two detectors observed for different values of Θ. We describe the experiment using a deterministic hidden variables theory. In other words, we assume there exist hidden parameters which describe “in detail” the state of the
radiation field after the cascade transition has taken place. The variables should be such that their particular values in any single case, together with the macroscopic parameters characterizing the
setting of the measurement apparatus (in our case the orientations of the polarizers), allow the measurement results to be unambiguously predicted. It is of utmost importance for Bell’s argument to
adopt the reasoning of Einstein, Podolsky and Rosen explained in Section 11.2, which states that the measurement on one of the photons cannot influence the measurement on the other photon (due to the
large spatial separation). We make, together with Bell, a locality assumption of the following form. Everything happening on the first detector is exclusively determined by the particular values of
the hidden variables and the setting of the polarizer, and is completely independent of the orientation of the polarizer in front of the other detector. With these assumptions, it is possible to show
mathematically (with the limitation that the detectors are ideal, i.e. each incoming photon is detected) that the coincidence counting rate C(Θ) has to fulfil the following inequality (Freedman and Clauser, 1972):

−1 ≤ 3 C(Θ)/C₀ − C(3Θ)/C₀ − (C_I + C_II)/C₀ ≤ 0.   (11.6)
The value C₀ is the coincidence counting rate without polarizers inserted into the beam paths, and C_I and C_II are the coincidence counting rates with only the first or only the second polarizer in place, respectively. The result in Equation (11.6) is extremely impressive when we recall the general assumptions under which it was derived. Because of their generality, we are not able to give a mathematical expression for C(Θ) itself. Nevertheless, we can say that C(Θ) must satisfy the limitations in Equation (11.6) whenever a deterministic theory which is, in addition, local in Bell’s sense is used for the description. The restriction in Equation (11.6)
is drastic and, more importantly, it contradicts quantum mechanical predictions. This is easy to show. The quantum mechanical prediction is that, provided the first photon has been detected, the
second photon is polarized in the transmission direction of the polarizer for the first photon (see Section 11.1). When the second photon impinges on a polarizer whose transmission direction is
rotated with respect to the aforementioned direction by an angle Θ, the following situation occurs. When we assume that the polarizers have 100% transmittivity, the projection (in the classical description) of the electric field strength of the incoming wave onto the transmission direction of the polarizer is completely transmitted, i.e. only the fraction cos²Θ of the intensity passes through. Because the photon is (as far as energy is concerned) indivisible, the detector behind the polarizer (which need not be ideal) will either detect the photon or not. The frequency of response (assuming the classical predictions to be still valid in the statistical mean) compared with the Θ = 0 case decreases by a factor cos²Θ. The quantum mechanical coincidence counting rate as a function of the angle Θ thus takes the simple form

C^qu(Θ) = C^qu(0) cos²Θ,   (11.7)
where the coincidence counting rate for equally oriented polarizers is denoted C^qu(0). Because, in this case, only half of the photons are detected, Equation (11.7) takes the form

C^qu(Θ) = (1/2) C₀^qu cos²Θ,   (11.8)
where C₀^qu is the coincidence counting rate in the absence of both polarizers. The coincidence counting rate C^qu(0) remains unchanged when one of the polarizers is removed. (The second photon is, as we know, definitely polarized in the transmission direction of the remaining polarizer, and hence is transmitted without problems.) Using the same notation as above, we have

C_I^qu = C_II^qu = C^qu(0) = (1/2) C₀^qu.   (11.9)
After inserting the quantum mechanical expressions from Equations (11.8) and (11.9) into Equation (11.6), we find that it is not satisfied for certain values of Θ.
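This is easily made explicit numerically. The sketch below inserts the quantum rates of Equations (11.8) and (11.9) into the middle expression of Equation (11.6), everything divided by C₀ (the function name S is our own shorthand, not notation from the text):

```python
import numpy as np

def S(theta):
    """Middle expression of inequality (11.6) divided by C0, evaluated
    with the quantum rates C(t) = 0.5*C0*cos(t)**2 and CI = CII = 0.5*C0."""
    C = lambda t: 0.5 * np.cos(t) ** 2
    return 3.0 * C(theta) - C(3.0 * theta) - (0.5 + 0.5)

# A local deterministic theory requires -1 <= S(theta) <= 0 for all angles.
print(S(np.pi / 8))       # 0.207...:  upper bound violated
print(S(3 * np.pi / 8))   # -1.207...: lower bound violated

# The difference of the two rates, C(pi/8)/C0 - C(3pi/8)/C0:
diff = 0.5 * np.cos(np.pi / 8) ** 2 - 0.5 * np.cos(3 * np.pi / 8) ** 2
print(diff)               # sqrt(2)/4 ≈ 0.354
```

Both special angles thus break the classical bounds, one from above and one from below.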
The biggest discrepancies arise for Θ = π/8 and Θ = 3π/8. It is then advantageous to write down Equation (11.6) for these two angles and then to subtract the two relations. We thus obtain the following simple inequality:

|C(π/8)/C₀ − C(3π/8)/C₀| ≤ 1/4,   (11.10)
which has the advantage that the coincidence counting rates C_I and C_II no longer appear. It is easily checked that the quantum mechanical formula in Equation (11.8) is in contradiction with Equation (11.10); it yields the value √2/4 ≈ 0.35 on the left hand side. Hence, we must bury our hopes that it is possible to alter quantum mechanics, without touching its quantitative predictions, through the introduction of hidden variables, and so put it onto a “sound”, i.e. classical-deterministic, basis. There are – under the conditions existing in an Einstein–Podolsky–Rosen experiment –
correlations between two sub-systems which defy classical understanding and are therefore of a purely quantum mechanical nature. Let us illustrate this once again using the example of the two-photon
cascade. Let us consider the setup depicted in Fig. 11.3, but now the polarization is measured on each beam in two orthogonal directions e and f . This can be achieved with the help of a polarizing
prism and two detectors positioned behind it. Assuming both prisms to be equally oriented, we find striking correlations between the measured data; the two measurement apparatuses indicate the same
polarization state: the two photons (forming one pair) prove to be oriented either both in the e direction or both in the f direction, and these two cases appear, without statistical irregularities,
with equal probability. This fact, were one to describe it on the basis of the classical concept of reality, could be understood only under the assumption that the photons had the given (in the
measurement, found) polarization right from the beginning, i.e. after being emitted. The source could not emit photon pairs polarized in a different direction g (g ≠ e, f) because then there would
unavoidably occur events in which both detectors register different polarizations. (We assume from experience that a photon polarized in the g direction will be detected, with a non-zero probability,
as if it were polarized in the e direction, and as if it were polarized in the f direction, with a different, but also non-zero probability.) This picture is incompatible with the fact that the
discussed correlations – due to the rotational symmetry of the problem with respect to the line connecting both detectors – fully persist when both crystals are rotated by the same angle. The failure
of all attempts to objectify the polarization properties of the emitted photons comes as no surprise when it is viewed from the quantum mechanical
formalism. As discussed at length in Section 11.1, we know that both photons are in an entangled quantum mechanical state, and this means that the polarization of the two photons must be uncertain in
the quantum mechanical (not interpretable as simple ignorance) sense. A measurement of the polarization on one of the photons leads, according to the quantum mechanical rules for the description of
the measurement process, to a reduction of the wave packet, and the unobserved second photon (depending on the measurement result) is brought into one or the other polarization state. According to
the two possible representations, Equations (11.1) and (11.2), of the photon wave function, we can polarize the second photon linearly or circularly at will. Because the only constraint for the x
direction is that it must be orthogonal to the propagation direction of the photons, we can choose in addition the orientation of the “crossed axes” of the linear polarization by rotating the
measurement apparatus around the propagation direction. The reduction is instantaneous, as assured by quantum mechanics, and hence it seems we face a faster than light transfer of action onto the
second photon such that the uncertainty present before the measurement is removed. Whatever physical process is hidden behind the reduction, one thing can be taken for granted: it is basically
impossible to detect a physical effect (on the individual sub-system). What is observable is the change of a physical variable, and this requires well defined initial values. This is, however, not
the case in the present situation, and hence the reduction of the wave function cannot be used for signal transmission. We return to this point for a more detailed discussion in Section 11.5. First,
let us analyze an analogous experiment in classical optics. An unpolarized light beam is split, with the help of a polarization-insensitive beamsplitter, into two (also unpolarized) beams, which
propagate in different directions. We have to picture unpolarized light classically as follows. When we measure, within any coherence time interval, a certain (random) polarization (in general,
elliptical), we detect a different polarization within the next time interval. Observing the light over a long time interval, we find all possible polarization states with equal probability. When an
observer measures the instantaneous polarization state in one of the partial beams (classically this is not a problem), the polarization state of the other light beam at the same distance from the
beamsplitter is revealed. It is exactly the same because the beamsplitter does not change the (instantaneous) polarization. The information gain is, in this case, also instantaneous. Naturally, there
is no physical influence on the unobserved sub-system from the measurement. We only find out a fact that already existed objectively though it was unknown because of the statistical changes of the
polarization state. This is the essential difference between classical and quantum theory.
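The difference can even be made quantitative. Suppose, in the spirit of the classical description above, that each photon pair carries a definite common polarization angle fixed at emission, and that each polarizer transmits with the Malus-law probability. Averaging over the random angle then yields coincidence correlations, but weaker ones than the quantum prediction of Equation (11.8). A Monte Carlo sketch (the detector model and all parameters here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# Classical picture: each pair carries a definite, random polarization
# angle phi, identical for both beams (a "pre-existing" property).
phi = rng.uniform(0.0, np.pi, n)

def classical_coincidence(delta):
    """Fraction of pairs passing both polarizers, set apart by angle delta.
    Each polarizer transmits independently with Malus-law probability."""
    p1 = np.cos(phi) ** 2                   # polarizer 1 at angle 0
    p2 = np.cos(phi - delta) ** 2           # polarizer 2 at angle delta
    pass1 = rng.random(n) < p1
    pass2 = rng.random(n) < p2
    return np.mean(pass1 & pass2)

for delta in (0.0, np.pi / 8, np.pi / 4):
    classical = classical_coincidence(delta)
    analytic = (2.0 + np.cos(2.0 * delta)) / 8.0   # average of cos^2 * cos^2
    quantum = 0.5 * np.cos(delta) ** 2             # quantum rate C(delta)/C0
    print(f"{delta:.3f}: classical {classical:.3f} "
          f"(analytic {analytic:.3f}), quantum {quantum:.3f}")
```

For parallel polarizers the classical average gives 3/8 instead of the quantum value 1/2: the picture of a pre-existing shared polarization reproduces the qualitative correlation, but not its full observed strength.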
We have mentioned several times that quantum mechanical predictions can be verified only on ensembles. So, let us look at the ensemble of photon pairs! As above, let us assume that both observers
oriented their linear polarization measurement apparatuses in the same way. Neither observer notices the actions taken by his colleague. The measurement apparatus of each observer indicates that the
photons are randomly polarized – with equal probability – in the e and f directions. Actually, the observers are dealing with unpolarized light! Only after they compare their measurement results do
they realize (as explained) that they agree completely. The same also happens when circular polarization is measured (we, as usual, relate the sense of rotation to the propagation direction). We can
also think of a realization of the experiment where the first observer makes a measurement earlier than the second one (Paul, 1985). Then the first observer is able to play the role of a
“clairvoyant.” Let us imagine that the second beam, after traversing a certain distance, is diverted by a mirror such that it comes close to the first beam. The two observers can then position
themselves next to one another. The first observer can now predict the measurement results as there is a time delay before the second photon arrives (we assume that the source emits photon pairs one
after the other). The second measurement is thus unnecessary. The second observer can use the obtained information to divide the ensemble into two parts with well defined polarization properties; in
the present case, into photons linearly polarized in the f and e directions. The statement that we have subensembles with sharp polarization can be experimentally verified: a corresponding
measurement on each member of the respective ensemble will give the same result. The first observer could have rotated the apparatus to measure the polarization with respect to two other directions,
say e′ and f′. A randomly selected photon from the second beam, which belongs in the case of the first measurement to the ensemble of, say, e polarized photons, would neatly fit into a sub-ensemble with another polarization direction (e′ or f′). The first observer could have used, in addition, a measurement apparatus for circular instead of linear polarization. A photon from the second beam
would then have become a member of a group of either only left or right handed circularly polarized photons. A single photon is thus unmasked as a pure “opportunist”, adjusting without any problems
to different situations, and there are no “independent values” such as real polarization properties. The described experiment can also be interpreted in such a way that the second beam, which is by
itself unpolarized, can be split in different ways into two subensembles of orthogonal polarizations. That such decomposition is thinkable is easily seen from the form of the density matrix. The
experiment proves that this is indeed also feasible. The decomposition of a quantum mechanical mixture into
different “components” might seem surprising at first glance. This is, however, a general peculiarity of the quantum mechanical description. For instance, there is experimentally no detectable
difference between two ensembles formed in the following way: the sub-ensembles are in Glauber states with equal amplitude and phases uniformly distributed between 0 and 2π, or they are in states
with sharp photon numbers (Fock states) with a Poisson weight function. The density matrix is exactly the same in both cases. Finally, let us comment on the quantum mechanical description of a single
system using a wave function. This means moving on to shaky ground! Following the basic principles of quantum theory, the wave function is always related to a whole ensemble of systems. In
particular, the quantum mechanical probability predictions can be verified only on an ensemble. We pointed out in Section 11.2 that it does not make sense to ascribe objective properties to a single
system. Nevertheless, the wave function provides us with valuable experimental hints. Knowing, for example, that a photon is in the polarization state |y⟩, we can predict with certainty that the
photon will pass a y oriented polarization filter (under ideal conditions) but will not pass an orthogonally oriented filter. An important question is how do we obtain the information required about
the single system. We need to know the preparation process in detail. For example, we have “fabricated” the polarized photon from a weak unpolarized beam that was made to impinge on a polarization
filter. However, it is sufficient that we watch the preparation, or gain the information from the person conducting the experiment (who is assumed to be trustworthy). Another possible preparation
method was opened up by the Einstein–Podolsky–Rosen experiments. The situation is completely different when we are asked to determine the polarization state of any given single photon. In such a
case, our position is hopeless; we saw in Section 10.3 that state reconstruction can be realized only on an ensemble. We are in a strange position: if we are lucky we know the state of a single
system, but we cannot determine it if we are unaware of the process of preparation. Let us now return to the Einstein–Podolsky–Rosen experiment. The most astonishing property of the observable
correlations is that they are not limited to microscopic dimensions, but actually extend over macroscopic distances. Are the quantum mechanical predictions trustworthy under such extreme conditions?
This question seems completely justified, and Schrödinger (1935) was the first sceptic. He considered it possible that the correlations between the parts of a system break down spontaneously when
their separation exceeds a critical value. For photons, such a distance can only be the coherence length. Indeed, Bell’s locality assumption is physically justified only when the photons belonging to
one pair – conceived as wave trains whose length in the propagation direction is given just by
the coherence length – are “liberated” from the atom. The experimental examination of equation (11.10) recently performed by several authors is thus of interest, not only from the viewpoint of making
an experimental decision for or against hidden variables theories, but represents at the same time a test of quantum mechanics under extraordinary conditions. The following section analyzes briefly
the experimental situation, but let us state the result beforehand – quantum mechanics was perfectly verified.

11.4 Experimental results

Before we say a few words about real Einstein–Podolsky–Rosen
experiments in the form of coincidence measurements discussed in Section 11.3, we will comment on a fundamental difficulty in Bell’s argument caused by realistic, i.e. low efficiency detectors. The
derivation of Equation (11.6) relies to a large extent on the assumption that all incoming photons are detected. When we drop this assumption, the inequalities obtained have no physical relevance.
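The size of the quantum mechanical violation probed by these experiments can be illustrated numerically. Since the specific forms of the inequalities (11.6) and (11.10) are given earlier in the chapter, the sketch below uses the standard CHSH combination as a stand-in, with the polarization correlation E(a, b) = cos 2(a − b) predicted by quantum mechanics for polarization-entangled pairs; the toy local hidden variables model included for comparison (its details are an illustrative assumption, not taken from the text) never exceeds the bound of 2.

```python
import numpy as np

# Quantum prediction for polarization-entangled photon pairs:
# correlation coefficient E(a, b) = cos 2(a - b) for analyzer angles a, b.
def E_qm(a, b):
    return np.cos(2 * (a - b))

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
def chsh(E, a, ap, b, bp):
    return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

# Optimal angles (radians): 0, 45, 22.5 and 67.5 degrees.
a, ap, b, bp = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8
S_qm = chsh(E_qm, a, ap, b, bp)
print(S_qm)  # 2*sqrt(2) ~ 2.828, above the local bound of 2

# A toy deterministic local hidden variables model: each pair carries a
# random polarization angle lam; a photon "passes" its analyzer exactly
# when lam lies within 45 degrees (mod 180) of the analyzer axis.
rng = np.random.default_rng(1)
lam = rng.uniform(0, np.pi, 200_000)

def outcome(angle, lam):
    # +1 if the photon passes, -1 otherwise (deterministic given lam)
    d = np.abs((lam - angle + np.pi / 2) % np.pi - np.pi / 2)
    return np.where(d < np.pi / 4, 1.0, -1.0)

def E_lhv(x, y):
    return np.mean(outcome(x, lam) * outcome(y, lam))

S_lhv = chsh(E_lhv, a, ap, b, bp)
print(S_lhv)  # stays at or below the local bound of 2
```

For this particular model the correlation is the classic sawtooth E(a, b) = 1 − 4|a − b|/π, which just saturates, but never exceeds, the bound.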
This motivated Clauser et al. (1969) to modify the assumptions about the action of the hidden variables: these should (together with the macroscopic parameters) determine the passing or not passing
of the photon through the polarizer rather than the action of the detector. With this concept, the inequalities in Equations (11.6) and (11.10) still hold. The first experiment was performed by
Freedman and Clauser (1972). To guarantee an undisturbed two-step emission process (assumed in the theoretical treatment), they used as a light source a beam consisting of Ca atoms excited to the
starting level of the cascade transition by resonant absorption of radiation from a deuterium arc lamp. The analysis was rather difficult because the coincidence counting rate contained a large
contribution of random coincidences. We have to keep in mind that the atoms rarely emit the two photons in opposite directions. When one of the photons of the pair is detected, its partner will, in
most cases, be flying somewhere else in a different direction. It often happens that the detectors register photons belonging to different pairs because many atoms emit simultaneously. Such
coincidences are obviously purely random. Freedman and Clauser determined the random coincidence counting rate separately by measuring coincidences with large time delays. They determined the
systematic coincidence counting rate to which the theoretical predictions refer by subtracting the random coincidence counting rate from the undelayed counting rate. Because, even in the absence of
polarizers, only about 0.2 coincidences per second were counted, it took at least 200 hours to obtain statistically reliable results. Freedman and Clauser found a significant violation of the
inequality in Equation (11.10); on the other hand, the quantum mechanical predictions were in excellent
agreement with the measured data. Later experiments, in which the use of electron beams or laser light helped to excite atoms in much greater numbers, drastically shortening the measurement time, led
to similar results (for details, see Clauser and Shimony, 1978, and Paul, 1980). A common feature of all these experiments was the short distance between the detectors (to avoid intensity losses). A
separation of the detectors by a distance much larger than the coherence length of the investigated radiation was realized for the first time in the experiment by Aspect, Grangier and Roger (1981).
They demonstrated not only that the measured data verified the quantum mechanical prediction for all angles between the transmission directions of the two polarizers with unprecedented statistical
precision, but also that this data did not change when the distance of the polarizers from the light source was made as large as 6.5 m. Aspect et al. hence satisfied an important condition underlying
Bell’s locality assumption. There is one further requirement. The locality postulate is necessary only when it is demanded by the causality principle. This implies that the orientation of at least
one of the two apparatuses must be changed quickly enough during the measurement to ensure that the information about the actual orientation of one apparatus cannot reach the other while the
measurement takes place there. Causality enters the situation at this point: a signal cannot be transmitted faster than light (orienting the polarizers before the beginning of a measurement sequence
in a certain way would give the measurement apparatuses enough time to exchange information). A later experiment by Aspect, Dalibard and Roger (1982) also incorporated fast switching of both
measurement apparatuses. These authors used a technique similar to that applied in the “delayed choice” experiment described in Section 7.3. Both photons had the choice of two possible paths to reach
the detection apparatuses which were equipped with differently oriented polarizers. An acousto-optical switch “decided” which way was open. The switch consists of a liquid cell with two opposite
in-phase driven electro-acoustic transducers, which generate a standing ultrasound wave in the liquid; i.e. a density variation acts as a diffraction grating (called the Debye–Sears effect). The
incoming wave is completely deflected to the side in a fixed direction for a high enough amplitude of the ultrasound wave; however, it leaves the liquid unaffected when the amplitude is zero. In this
way the incoming beam is periodically deflected. To come closer to the ideal random switching, Aspect et al. (1982) used two high frequency generators of different frequencies to drive the two
acousto-optical modulators (one for each photon). This sophisticated experimental setup led straight to the statement that quantum mechanics is, as always, right. The experimental results are in
distinct contradiction with Equation (11.10); i.e. they are incompatible with any, however refined, (deterministic) hidden variables theory satisfying Bell’s locality postulate. This is a result
Einstein probably least
expected when, in 1935, he (with Podolsky and Rosen), with his profound criticism, started the development leading to the described impressive theoretical and experimental accomplishments. Recently,
the construction of photon pair sources using parametric interaction of type II underwent noteworthy progress. In an experiment performed by the Zeilinger group (Weihs et al., 1998), the photons of a
pair were each coupled into a single-mode optical fiber. The receiving stations were at a distance of 400 m. In addition, the “last minute” procedure of polarization orientation of Aspect et al.
(1982) was perfected. Scrupulous critics could still argue that the acousto-optical modulators switch not randomly but deterministically. This would give the photons, in principle, the opportunity to
foresee which polarizer setting they will find. Weihs et al. closed this loophole by letting quantum mechanical randomness rule absolutely. They used a beamsplitter illuminated by a light emitting
diode, with a detector at each exit, as a random number generator. The light intensity was kept very low to avoid coincidences (when they happened, they were ignored). The response of the one or the
other detector was the desired random process delivering digital zeros and ones. In the latter case the signal was used after amplification to drive an electro-optical modulator. It caused a rotation
of the polarization direction of the photon by an angle proportional to the applied voltage. The photon impinged again on a polarizer, or, more precisely, a polarizing prism with a detector at each
of the exits. Instead of rotating the polarizing prism, the polarization direction of the photon was rotated in a definite way, the response probabilities of the detectors being the same in both
cases. Also in this experiment a significant violation of Bell’s inequality (in a slightly more general form than in Equation (11.6)) was observed, and agreement with the quantum mechanical
prediction was excellent. A particular advantage of the novel source of photon pairs is the fixed propagation directions of the polarization correlated photons, so we are spared the abundance of
random coincidences known from the two-photon cascade. This experiment is especially impressive because the Einstein–Podolsky–Rosen correlations extend over a macroscopic distance (400 m). Further,
using the glass fiber network of the Swiss communications company Swisscom (Tittel et al., 1998), quantum correlations at a distance of 10 km were observed for photons entangled in energy and time
(see Section 11.6). There are no signs of a spontaneous collapse of such correlations for large spatial separations of the subsystems. If there were such a collapse, for instance in the case of the
two-photon cascade, the foundations of our description of nature would break down. The disappearance of quantum mechanical correlations would contradict the angular momentum conservation law: the
initial sharp value (zero) of the angular momentum of the two-photon system would become necessarily non-sharp by such a process.
From this point of view, a theoretician will accept the results of the experiments with satisfaction. Finally, we would like to point out that the great range of the quantum mechanical correlations
characteristic of the Einstein–Podolsky–Rosen experiment can be deduced from simple optical beam-splitting (for details, see Paul (1981)). For initial sharp photon number states split by the
beamsplitter, we are actually dealing with an experiment of the abovementioned type. On the one hand it is possible, by measuring the photon number at one of the outputs (the reflected or the
transmitted), to predict with certainty how many photons will be detected at the other output. On the other hand, it is known that the two partial beams can be made to interfere, which means that the
relative phase must be fixed. By measuring the phase of one of the beams, we obtain precise information about the phase of the other beam. We can now repeat the argument of Einstein, Podolsky and
Rosen given in Section 11.2, and we come to the following conclusion. Imagine that we separate sufficiently the partial beams: the photon number, as well as the phase, should be well defined in each
of the partial beams from the beginning; this, however, is not possible according to quantum mechanics. The disappearance of the quantum mechanical correlations between the two beams would destroy
their ability to interfere, but such loss of interference has not been observed even for considerable spatial separation (before their reunion) – let us recall the experiment of Jánossy and Náray (1958) with an interferometer arm length of 14.5 m – nor would any theoretician expect such an occurrence, however large the separation might be.

11.5 Faster-than-light information transmission?

There has been daring speculation about the possibility of using the Einstein–Podolsky–Rosen correlations for information transmission, leading to the possible exploitation of the
instantaneous character of the reduction of the wave function – caused by a measurement performed on one of the sub-systems – for faster-thanlight information transmission. This would imply a
dramatic violation of causality, and hence we could dismiss the whole idea as bizarre. However, a detailed analysis why this does not work helps us to learn a great deal about the physics involved,
and so we include it here. What should such an experiment look like? Let us think of two observers (far away from each other) performing polarization measurements on the photons belonging to a photon
pair emitted in a cascade transition (Fig. 11.3). One of them, say observer A, wishes to send a message to observer B. The first consideration is that of information coding. The use of the
polarization states of the photons is certainly a good choice. An important feature of this experiment is the randomness of the individual polarization state – when, for instance, observer A
chooses the setup for measuring linear polarization along the x and the y direction he has no influence on whether a randomly selected photon will “decide” in favor of the x or y direction. This
rules out a simple coding such as x polarization means zero, y polarization means one. However, it seems that the difference between circular and linear polarization could be exploited; for example,
observer A could perform the coding by switching at will between the two measurement setups. As mentioned in Section 11.1, this could be accomplished by inserting a quarter-wave plate into the
optical path, or removing it, respectively. The arrangement would be, for instance, that circular polarization means zero and linear polarization means one. Because the measurement changes the
polarization state of the other photon instantaneously into the state of the measured one (linear or circular polarization) – at least quantum theory asserts this (Section 11.2) – the task left for
observer B is (after a short time delay necessary to ensure that the reduction has taken place) to identify the polarization (circular or linear) of the photon, and thus to obtain the information.
However, this is not possible! It was shown in detail in Section 11.2 that the polarization state (circular or linear) of a single photon is prescribed by the measurement apparatus. It is absolutely
impossible to deduce from a single measurement the polarization state of the photon before the measurement. It makes no physical sense to ascribe to a single photon a polarization property in the
sense of an objective characteristic. We come to the interesting conclusion that the non-objectifiability of the quantum mechanical description (which is usually not taken too seriously) is the price
paid for upholding causality, and is thus of fundamental importance. The discussion initiated by Einstein, Podolsky and Rosen has an additional merit: it disclosed an unexpected and close
relationship between the non-objectifiability in the micro-cosmos and causality – the fundamental principle of (mainly) macroscopic physics. We can draw some additional conclusions. A sceptic might
ask the following questions. Do we have to perform a direct measurement? Couldn’t we somehow make a lot of copies of the photon first and then measure the polarization (this would not cause any
problems as we have a whole ensemble of identical particles at our disposal)? Taking the reasonable standpoint that a faster-than-light signal transmission is impossible, since otherwise the whole
foundations of physics would crumble – we could influence the past, and, to give an example, I could kill my father before he procreated me – we can say beforehand that copying cannot be successful.
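That no scheme of this kind can work is visible already at the level of density matrices: whichever measurement observer A chooses, the sub-ensembles arriving at observer B compose to one and the same unpolarized mixture, as discussed in Section 11.3. A small numerical sketch, using 2 × 2 polarization density matrices in the x, y (Jones vector) basis:

```python
import numpy as np

def projector(v):
    """Density matrix |v><v| of a normalized polarization state."""
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

# Pure polarization states in the (x, y) basis
x = np.array([1, 0], dtype=complex)
y = np.array([0, 1], dtype=complex)
d_plus = (x + y) / np.sqrt(2)        # linear at +45 degrees
d_minus = (x - y) / np.sqrt(2)       # linear at -45 degrees
circ_l = (x + 1j * y) / np.sqrt(2)   # left circular
circ_r = (x - 1j * y) / np.sqrt(2)   # right circular

# Ensembles B receives when A measures linearly (x/y), diagonally, or
# circularly: a 50/50 mixture of the two orthogonal outcomes in each case.
rho_lin = 0.5 * projector(x) + 0.5 * projector(y)
rho_diag = 0.5 * projector(d_plus) + 0.5 * projector(d_minus)
rho_circ = 0.5 * projector(circ_l) + 0.5 * projector(circ_r)

# All three equal the unpolarized state I/2: B cannot tell which
# measurement A performed, so no bit of information gets across.
print(np.allclose(rho_lin, rho_diag), np.allclose(rho_lin, rho_circ))
```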
In particular, it is not possible to "clone" an individual photon, i.e. to produce one copy and hence an arbitrary number of identical copies. This no-go theorem can also be proven directly: it can be shown that a possible "cloning" is in direct contradiction to the linearity of the Schrödinger equation (Wootters and Zurek, 1982). This result is known as the "no cloning theorem."
Another consequence is that there is no amplifier which would allow us to “read out” from the amplified field the polarization state (linear or circular) of the original photon. It is not obvious
that this should be so. Certainly it is not possible to avoid amplifier noise which distorts the amplified field, but should it not make a difference, even when amplifying a single photon, whether
the initial signal was linearly or circularly polarized? Indeed, performing a quantum mechanical calculation for the initial condition “we start with a photon with a well defined polarization,” we
naturally find that the light leaving the amplifier depends on the polarization of the initial photon. However, we realize again the serious limitation imposed on the quantum mechanical description
by the ensemble interpretation: we find a “trace” of the original polarization in the amplified field (and from the measured data it is possible to infer the polarization), but this holds only when
we investigate the whole ensemble, i.e. when we repeat the experiment under exactly the same conditions many times. It should be stressed yet again that quantum mechanics does not make any detailed
predictions about a single experiment. A detailed theoretical analysis of an (idealized) polarization insensitive amplification process leads to the following results (Glauber, 1986): in each single
case, the amplified field is in a well defined polarization state. Independently of the initial photon polarization, any polarization state (i.e. elliptic polarization) is possible. Differences are
found when we ask for the probability of finding a given polarization of the amplified field in dependence on the polarization of the original photon. As is to be expected, the probability is largest
when both polarizations coincide. However, the probability is not zero (due to amplification noise), even when the two polarization states are orthogonal. Only by measuring the probability
distribution – which can be determined only on an ensemble of amplified fields – can the polarization of the identical original photons be determined. (The measurement of the polarization state
itself is, for sufficiently large amplification, not a problem. In such a case we are dealing with classical fields, so standard methods of classical polarization optics are applicable.) In contrast,
a single measurement does not give any information about the polarization of the initial photon. However, as mentioned previously, our goal is only to distinguish between linear and circular
polarization of the initial photon. We can give a simple argument when we work with larger redundancy. By this we mean the following: observer A, having the desire to communicate at a speed faster
than light, encodes the information not into single photons but each bit into an ensemble of many photons forming a whole sequence. Accordingly observer A does not change the setup during time
intervals of duration t. Let us imagine the sender to be far away from us; i.e. the time t is still much shorter than the time light needs to reach us (there is no fundamental limitation for the
distance). The aforementioned ensembles are formed from differently polarized photons depending on whether they are to communicate
a zero or a one. In one case the photons are linearly polarized in the x or the y direction, and in the other case they are left or right handed circularly polarized. We saw in Section 11.3 that such
ensembles are in fact physically identical: they describe unpolarized light. We have not the slightest chance of distinguishing between them experimentally, whatever clever technique we might employ;
and an amplifier will be of no help either, nor can it save the faster-than-light information transmission (even though researchers in this field had believed, or at least hoped, it would).

11.6 The Franson experiment

In Section 6.7 we mentioned two important features of photon pairs generated by parametric fluorescence, namely the simultaneity of the two-photon emission and the frequency
relation ωp = ωs + ωi (see Equation (6.8)). Because both photons have broad bandwidths Δωs and Δωi, the frequencies ωs and ωi have to be understood as frequencies within the corresponding bandwidths,
as they appear in the measurement on a single photon pair. It is the energy conservation law, which is also valid for individual systems, that is behind this frequency relation, as explained in
Section 6.7. The “instance of emission” is, however, not determined. The non-linear crystal is pumped continuously and the emission is spontaneous. As in the case of spontaneous emission from an
excited atom, the emission moment is not predictable. The American scientist Jim Franson was inspired by these unusual properties of the photon pairs, often referred to as energy and time entangled,
and he proposed an original experiment,2 which will be discussed in some detail below (Franson, 1989). The idea was to make coincidence measurements of photon pairs (signal and idler photons fall
onto separate detectors positioned at the same distance from the source, and only events with simultaneous detector response are registered) more exciting by placing in front of each detector one of
two identical Mach–Zehnder interferometers (see Fig. 11.4). The transit time difference in the interferometer, T , should be large compared with the inverse of the bandwidths of the signal and the
idler wave. Representing the photons in a classical picture by coherent light pulses, the condition requires the duration of the pulses to be short compared with T . This implies that the two partial
pulses originating from a photon impinging on the interferometer (one of the pulses carries straight on, the other takes a detour prescribed by the interferometer) never meet again, in particular not
on the detector. There is no place for the "interference of the photon with itself." However, interference effects show up in the coincidence counting rate (noticed by Franson, to his credit), whereby the mentioned frequency condition plays a decisive role.

2 Actually, Franson considered photon pairs from a cascade transition, which are also energy and time entangled.

Fig. 11.4. The Franson experiment. Coincidences between detectors D1 and D2 are registered. S = source of entangled photon pairs; M1, M2 = identical Mach–Zehnder interferometers.

We will try to understand the effect with the help of classical
optics. Because the response probability of a detector is proportional to the instantaneous intensity I (r, t) residing on its sensitive surface, the coincidence frequency reads, in this case (up to
a constant factor),

\[
W_c = I(r_1,t)\,I(r_2,t) = E^{(-)}(r_1,t)E^{(+)}(r_1,t)\,E^{(-)}(r_2,t)E^{(+)}(r_2,t) = \bigl|E^{(+)}(r_1,t)\,E^{(+)}(r_2,t)\bigr|^2. \tag{11.11}
\]
The positions of the two detectors are r1 and r2 , and we have expressed the intensities according to Equation (3.10) through the positive and negative frequency parts of the electric field strength.
The presence of the interferometers leads to the following expressions for the variables E (+) on the detector surfaces: (+) (+) E (+) (r1 , t) = 12 E s (r1 , t) + E s (r1 , t − T ) , (11.12) E (+)
(r2 , t) =
(r2 , t) + E i
(r2 , t − T ) .
In writing these equations we have assumed that the signal wave hits the detector at position r1 and that the idler wave strikes at position r2 . We expressed the superposed electric fields through
the field strengths found in the absence of the interferometers. Equations (11.12) and (11.13) express the fact that the part of the pulse taking the direct route meets the part of the pulse making
the detour (in the interferometer), and hence delayed by T, at the detector surface. The factor of 1/2 originates from the splitting of the electric field strength on the beamsplitter; the transmitted and reflected parts are each attenuated by a factor of 1/√2.
The term relevant for the coincidences according to Equation (11.11), E^{(+)}(r_1,t) E^{(+)}(r_2,t), takes the following form:

\[
E^{(+)}(r_1,t)\,E^{(+)}(r_2,t) = \tfrac{1}{4}\bigl[E_s^{(+)}(r_1,t)E_i^{(+)}(r_2,t) + E_s^{(+)}(r_1,t-T)E_i^{(+)}(r_2,t-T) + E_s^{(+)}(r_1,t)E_i^{(+)}(r_2,t-T) + E_s^{(+)}(r_1,t-T)E_i^{(+)}(r_2,t)\bigr]. \tag{11.14}
\]
First, let us consider the third and fourth terms on the right hand side of Equation (11.14). The squared absolute value of the third term, for instance, determines the probability that the detector
at position r1 registers a (signal) photon at time t, and the other registers an (idler) photon at time t − T . Because the emission of the photons is simultaneous and their extension in time is much
smaller than T , the probability must be zero. The third term is therefore negligible, and the same holds for the fourth term. The first term on the right hand side of Equation (11.14) describes a
coincidence at time t in the absence of the interferometers. Assuming the pulses to be classical wave packets, we can rewrite the first term as

\[
E_s^{(+)}(r_1,t)\,E_i^{(+)}(r_2,t) = \sum_{k_s,k_i} c^{(s)}_{k_s}\,e^{i(k_s r_1 - \omega_s t)}\,c^{(i)}_{k_i}\,e^{i(k_i r_2 - \omega_i t)}. \tag{11.15}
\]

Here the complex amplitudes determining the pulse form are denoted by c^{(s)}_{k_s} and c^{(i)}_{k_i}. The summation runs over all wave number vectors k_s and k_i present in the pulse (more precisely, integrations should be taken instead of the summations, but for our argument this does not
matter). The frequencies ω_s (ω_i) are those related to the wave number vectors k_s (k_i). The really interesting term in Equation (11.14) is the second one. According to Equation (11.15) it can be expressed as

\[
E_s^{(+)}(r_1,t-T)\,E_i^{(+)}(r_2,t-T) = \sum_{k_s,k_i} e^{i\omega_s T}\,c^{(s)}_{k_s}\,e^{i(k_s r_1 - \omega_s t)}\,e^{i\omega_i T}\,c^{(i)}_{k_i}\,e^{i(k_i r_2 - \omega_i t)}. \tag{11.16}
\]

It is easy to see that the two sums virtually vanish, whereas they would take their maximum value, or a value close to it, for T = 0. The reason for this is the inclusion of the factors e^{iω_s T} and e^{iω_i T} (the products ω_s T and ω_i T are large compared with unity by assumption), which undergo several oscillations in the summation. The only nonvanishing term in Equation (11.14) is therefore the first one, and this implies, according to Equation (11.11), that the probability of counting a coincidence is independent of the time delay T caused by the interferometer; i.e. there are no interference effects.
The obtained result is not too surprising as we tacitly assumed the signal and idler pulses not to be correlated, as can be seen from the factorized form in Equation (11.15). It is a straightforward
matter to amend this relation to account for correlations. It is necessary to replace the product c^{(s)}_{k_s} c^{(i)}_{k_i} by a new coefficient c_{k_s,k_i}. Because both waves are generated by parametric
fluorescence, we will require this coefficient to be non-zero only when the phase matching condition Equation (6.7), as well as the frequency condition, Equation (6.8), are satisfied. This rather
pragmatic attitude leads to a surprising result. The double sum replacing Equation (11.16) is no longer affected by exponential factors depending on T , since they can be extracted as a common
prefactor e^{iω_p T}. The results can be summarized as follows. Using the notation

\[
\Phi(r_1,r_2;t) = \sum_{k_s,k_i} c_{k_s,k_i}\,e^{i(k_s r_1 - \omega_s t)}\,e^{i(k_i r_2 - \omega_i t)}, \tag{11.17}
\]

the coincidence probability can be written, according to Equations (11.11) and (11.14), in the following form:

\[
\begin{aligned}
W_c &= \tfrac{1}{16}\,\bigl|\Phi(r_1,r_2;t) + \Phi(r_1,r_2;t-T)\bigr|^2 \\
    &= \tfrac{1}{16}\,\bigl|\Phi(r_1,r_2;t) + e^{i\omega_p T}\,\Phi(r_1,r_2;t)\bigr|^2 \\
    &= \tfrac{1}{16}\,\bigl|\Phi(r_1,r_2;t)\bigr|^2\,\bigl|1 + e^{i\omega_p T}\bigr|^2.
\end{aligned} \tag{11.18}
\]
This relation is easily generalized to the case when an additional optical element, making the phase of the light larger by φ_s or φ_i, respectively, is inserted into the longer arms of the interferometers. As a consequence, the positive frequency part of the electric field strength making the detour must be multiplied by the phase factor exp(−iφ_s) or exp(−iφ_i), respectively. Equation (11.18) thus becomes

\[
\begin{aligned}
W_c &= \tfrac{1}{16}\,\bigl|\Phi(r_1,r_2;t)\bigr|^2\,\bigl|1 + e^{i(\omega_p T - \varphi_s - \varphi_i)}\bigr|^2 \\
    &= \tfrac{1}{8}\,\bigl|\Phi(r_1,r_2;t)\bigr|^2\,\bigl[1 + \cos(\omega_p T - \varphi_s - \varphi_i)\bigr].
\end{aligned} \tag{11.19}
\]
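Both outcomes, the washed-out fringes of the factorized calculation and the full-visibility fringes of Equation (11.19), can be reproduced numerically. The sketch below evaluates the mode sums with smooth Gaussian spectral amplitudes (all parameter values are arbitrary illustrations); the entangled case enforces ω_s + ω_i = ω_p.

```python
import numpy as np

w_p = 200.0        # pump frequency (arbitrary units)
sigma = 5.0        # signal/idler bandwidth
T = 3.0            # interferometer delay: sigma * T = 15 >> 1
w = np.linspace(-20, 20, 2001)       # frequency offsets from w_p / 2
a = np.exp(-w**2 / (2 * sigma**2))   # smooth (Gaussian) spectral amplitude
phases = np.linspace(0, 2 * np.pi, 60)   # combined phase phi_s + phi_i

def visibility(rates):
    return (rates.max() - rates.min()) / (rates.max() + rates.min())

# --- Uncorrelated (factorized) pulses, cf. Eq. (11.14) with c = a_s a_i ---
S0 = a.sum()                                      # sum_k a_k
ST = (a * np.exp(1j * (w_p / 2 + w) * T)).sum()   # sum_k a_k e^{i w_k T}: dephased
# short-short, long-long (phase-shifted), and the two (negligible) cross terms
amp_unc = S0 * S0 + np.exp(-1j * phases) * ST * ST + S0 * ST + ST * S0
vis_unc = visibility(np.abs(amp_unc) ** 2)

# --- Entangled pairs: coefficients nonzero only for w_s + w_i = w_p ---
w_s = w_p / 2 + w
w_i = w_p - w_s                                   # energy conservation
T1 = a.sum()                                      # both photons take the short path
T2 = np.exp(-1j * phases) * (a * np.exp(1j * (w_s + w_i) * T)).sum()  # both long
T3 = (a * np.exp(1j * w_i * T)).sum()             # short/long cross terms: dephased
T4 = (a * np.exp(1j * w_s * T)).sum()
vis_ent = visibility(np.abs(T1 + T2 + T3 + T4) ** 2)

print(vis_unc, vis_ent)  # ~0 (no fringes) and ~1 (full visibility)
```

In the entangled case the long-long sum carries the common factor e^{iω_p T}, exactly as in Equation (11.18), so the fringes survive no matter how broad the individual bandwidths are.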
The results obtained are, in fact, in full agreement with quantum mechanical prediction (Franson, 1989). Equation (11.19) allows the following simple interpretation: there is a probability amplitude
Φ(r_1, r_2; t) for measuring a coincidence at positions r_1 and r_2 at time t in the absence of interferometers. The absolute value squared of this amplitude gives us, up to a prefactor, the probability
of this event. The corresponding probability for the time t − T is the same, in accordance with the fact that the emission time is quantum mechanically uncertain. In the presence of an
interferometer, the two amplitudes add up. The first relates to the situation when both photons took the short path through the respective interferometer; the second to
the situation when they took the long path (and started correspondingly earlier). The second differs from the first simply by a phase factor determined by the sum of the path differences which the
signal and the idler waves accumulated on the long path. The two path differences have a dispersion due to the large bandwidths of the waves. Their sum, however, has a sharp value; it is exactly the
path difference the pump wave would accumulate when fictitiously going through both interferometers. According to Equation (11.19), we find an interesting interference phenomenon: changing one of the
phases φ_s, φ_i over several periods, the coincidence rate will oscillate sinusoidally. We can observe "interference fringes" analogous to the interference patterns known from conventional interference
experiments. Of particular interest is that the fringe visibility attains unity. Of importance also is that the interference pattern in Equation (11.19) is a specifically quantum mechanical effect
because it results from the signal and idler photon entanglement. In fact, it was shown by several authors that in a classical description the fringe visibility has the maximum value 50%. Bell type
arguments, as sketched in Section 11.3, also apply to the Franson experiment. Assuming a local hidden variables theory, we find, as a Bell inequality, the statement that the fringe visibility cannot exceed the value 1/√2 ≈ 70.7%. The analyzed experiment thus offers the possibility to test this Bell inequality. Indeed, Kwiat, Steinberg and Chiao (1993) observed a fringe visibility of 80.4%, and
thus demonstrated a clear violation of the Bell inequality. Finally, let us stress once more the interesting point that the linewidths of the individual photons, and therefore also their coherence
lengths, do not appear in Equations (11.18) and (11.19) and so do not play a role in the experiment. Instead, the linewidth of the pump wave is physically relevant. It did not appear in our
derivation because we assumed a monochromatic pump wave. This is an idealization, which is justified only for times shorter than the coherence time of the pump wave. When T becomes larger than this
time, the fringe visibility will decrease and finally vanish. The characteristic coherence length in the Franson experiment is the coherence length of the pump wave, which sounds a bit mysterious
because the pump wave survives only in the “memories” of the photons which actually take part in the experiment.
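The sinusoidal coincidence fringes just described depend only on the sum of the two interferometer phases, with quantum mechanics predicting unit visibility, a classical wave description at most 50%, and local hidden variables at most \(1/\sqrt{2}\). A small sketch of these bounds (the normalized functional form is an illustrative choice, not taken verbatim from Equation (11.19)):

```python
import math

def coincidence_rate(phi_s, phi_i, visibility=1.0):
    """Normalized two-photon coincidence rate in a Franson-type setup:
    the fringes depend only on the phase sum phi_s + phi_i."""
    return 0.5 * (1.0 + visibility * math.cos(phi_s + phi_i))

# Quantum mechanics predicts visibility 1; a classical description is
# bounded by 0.5, a local hidden variables theory by 1/sqrt(2).
for v, label in [(1.0, "quantum"), (0.5, "classical bound"),
                 (1 / math.sqrt(2), "Bell bound")]:
    rates = [coincidence_rate(k * math.pi / 8, 0.0, v) for k in range(17)]
    vis = (max(rates) - min(rates)) / (max(rates) + min(rates))
    print(f"{label:15s} fringe visibility = {vis:.3f}")
```

Scanning either phase alone over a full period recovers the stated visibility in each case, since only the phase sum enters.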
12 Quantum cryptography
12.1 Fundamentals of cryptography

The essence of the Einstein–Podolsky–Rosen experiment analyzed in the preceding chapter is our ability to provide two observers with unpolarized light beams,
consisting of sequences of photons, which are coupled in a miraculous way. When both observers choose the same measurement apparatus – a polarizing prism with two detectors in the two output ports,
whereby the orientation of the prism is set arbitrarily but identically for both observers – their measurement results are identical. The measurement result, characterized, say, by “0” and “1,” is a
genuine random sequence – the quantum mechanical randomness rules unrestricted – from which we can form a sequence of random numbers using the binary number system. The experimental setup thus allows
us to deliver simultaneously to the two observers an identical series of random numbers. This would be, by itself, not very exciting. Mathematical algorithms can be used to generate random numbers,
for example the digit sequence of the number π, which can be calculated up to an arbitrary length. Even though we cannot be completely sure that such a sequence is absolutely random, such procedures
are sufficient for all practical purposes. The essential point of the Einstein–Podolsky–Rosen experiment is that “eavesdroppers” cannot listen to the communication without being noticed by the
observers. When eavesdroppers perform an observation on the photons sent, they inevitably destroy the subtle quantum mechanical correlations, and this damage is irreparable. The
Einstein–Podolsky–Rosen correlations can become the basis of an eavesdropper safe transmission of an identical sequence of random numbers to two remote observers, traditionally called Alice and Bob.
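The way identical random sequences arise — identical prism settings force identical bits, different settings give uncorrelated ones — can be mimicked in a toy simulation. This is only a sketch of the measurement statistics described above; the sampling model and all names are illustrative, not a model of any real apparatus:

```python
import random

def epr_round(basis_alice, basis_bob, rng):
    """One photon pair: if both observers chose the same prism
    orientation, the quantum correlations force identical bits;
    otherwise the outcomes are independent 50/50 coin flips."""
    bit_alice = rng.randint(0, 1)
    if basis_alice == basis_bob:
        bit_bob = bit_alice          # perfect EPR correlation
    else:
        bit_bob = rng.randint(0, 1)  # uncorrelated outcome
    return bit_alice, bit_bob

rng = random.Random(1)
key_a, key_b = [], []
for _ in range(1000):
    ba, bb = rng.randint(0, 1), rng.randint(0, 1)
    a, b = epr_round(ba, bb, rng)
    if ba == bb:                      # keep only matching settings
        key_a.append(a)
        key_b.append(b)
print(key_a == key_b)                 # True: identical random keys
```

Roughly half of the rounds survive the comparison of settings, and the surviving bits form the same genuinely random sequence on both sides.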
This is exactly what is needed for cryptography, namely encoded, absolutely secret, information transmission. Let us go into details. The basic principle of cryptography is the coding of plain text
by replacing letters by other symbols (or other letters). Achieving this by constantly using the
same rule, replacing, for example, the letter “a” always by the same symbol, would make it simple for an unauthorized person to “break the code.” (This concept was described in a magnificent and
literary way in the short story "The Gold Bug" by Edgar Allan Poe.) After guessing the language in which the text might be written, it is easy to succeed with a frequency analysis: first find the
most frequent symbol in the coded text and identify it with the most frequent letter used in the language, then look for the second most frequent symbol, and so on. Much also can be achieved by
guesswork. An absolutely secure protection against decoding is possible using a sequence of random numbers as the key (the sequence should be as long as the text). The key itself must be kept secret;
in contrast to the ciphered text and the encoding and decoding procedures, the key must be available only to the authorized users. The coding itself is achieved with the help of the Vernam algorithm:
first the letters of the plain text are converted into numbers (usually in binary representation), using strict rules given in the form of a table, for example. Then the random numbers of the key are
added (modulo 2) to the converted text one after the other so that each random number is used only once. The result is the ciphered text, the cryptogram. The deciphering is performed simply by
reversing the performed transformation: the random numbers of the key are subtracted (modulo 2) from the encoded text, and the result is translated into plain language using the known table. Because
a random key was used, the cryptogram is free of any system and the code is therefore completely safe. What is absolutely vital is the secrecy of the key. The key can be given, for example, to a spy
who reports the encrypted message to his authority via short wave communication, with the strict instruction to destroy the key immediately after its use. The danger is, however, that the spy will be
caught together with the key. A better variant is the use of an eavesdropper safe communication channel for the key transmission. A method for implementation is offered by the subtle peculiarities of
quantum mechanics, namely correlations of the Einstein–Podolsky–Rosen type, as discussed in the introduction to this chapter. From a practical point of view, these things cannot be taken seriously.
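Returning to the Vernam algorithm for a moment: the encoding and the decoding it prescribes are in fact one and the same operation, bitwise addition modulo 2 (XOR), since adding the key twice restores the plain text. A minimal sketch, with byte-wise operation and helper names as illustrative choices:

```python
import secrets

def vernam(data: bytes, key: bytes) -> bytes:
    """Bitwise addition modulo 2 (XOR). The same call both enciphers
    and deciphers, because x XOR k XOR k == x."""
    if len(key) < len(data):
        raise ValueError("the one-time key must be as long as the text")
    return bytes(d ^ k for d, k in zip(data, key))

plain = b"attack at dawn"
key = secrets.token_bytes(len(plain))   # random key, used only once
cipher = vernam(plain, key)
assert vernam(cipher, key) == plain     # subtracting the key recovers the text
```

Because the key is random and used only once, the cryptogram carries no statistical structure at all; everything hinges, as stressed above, on keeping the key secret.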
The theoretician, however, cannot resist the attraction of such considerations which underline the unexpected possibilities hidden in quantum mechanics. It is almost a rule in physics that such
“purely academic” considerations lead eventually to practicable solutions. We will return to this later. Let us first discuss in more detail how the use of an Einstein–Podolsky–Rosen channel can
protect us against unwanted eavesdropping.

12.2 Eavesdropping and quantum theory

We emphasize that the generation and communication of a secret key in the form of a random number sequence has nothing
to do with information transmission.
The key itself does not have any meaning. What strategy can be developed by Alice and Bob to protect their communication against eavesdropping (perhaps we ought to think of the eavesdropper as a lady
named Eve)? A possible variant is the following (Bennett, Brassard and Ekert, 1992): Alice and Bob use the same measurement device to measure linear polarization, namely a polarizing prism with a
detector in each output. They note a one or a zero, dependent on whether the first or the second detector registered the incident photon. We assume the photon pairs to be emitted at discrete and
equidistant times, which can be accomplished by pumping a non-linear crystal with a train of pulses. Alice and Bob therefore know when to expect the arrival of a photon. In addition, they agree to
change constantly the settings of their polarization prisms – mutually independently – in a statistically arbitrary way. They will choose between the following two orientations: (a)
horizontal–vertical in which the outgoing extraordinary beam is vertically and, accordingly, the ordinary beam horizontally polarized, and (b) rotated by 45◦ in such a way that the polarization
directions of the extraordinary (and therefore also that of the ordinary) beams are the same for both observers. Thanks to the existence of quantum mechanical correlations between the polarization
directions of both photons, this guarantees that the observers, in case they have accidentally chosen the same orientation of the polarizing prism, find the same measurement result (interpreted as
“0” or “1”). This does not apply for different settings. According to Equation (11.4), when Alice measures a one, for example, Bob’s apparatus will indicate in half of the cases a one and in the
other half a zero. This is easy to see: imagine, for example, that one of the observers, say Alice, makes a measurement slightly earlier (because she is nearer to the source). The ensemble of photons
received by Bob is vertically polarized when it was selected, according to the criterion that Alice registered a vertically polarized photon in each case. When Bob now sends such selected photons
onto a 45◦ rotated polarizing prism, this light will be split (from the classical point of view), in equal shares, into two partial beams with polarization directions rotated by 45◦ or 135◦
from the vertical. Quantum mechanically, this means that both detectors respond with 50% probability. What happens when Eve, our eavesdropper, is tapping the line? A strategy that would be a
reasonable one for Eve is the following: she uses the same apparatus as Alice and Bob and switches at random between the two orientations (a) and (b). Then either of the two following situations can happen.
First, Eve chose by chance the same orientation as Alice, and so finds the same result as Alice. To avoid being detected, Eve sends afterwards an identically polarized photon, as a replacement for
the photon consumed by the measurement, to Bob. A cloning of the first photon is not possible! Secondly, Eve chose a polarization different from Alice’s, so when she follows the above strategy she
distorts the original photon. When the photon was vertically polarized, for example, the replacement will be polarized either 45◦
or 135◦ from the vertical. In both cases, Bob, when he chooses the same orientation as Alice, will find in 50% of the cases a horizontally polarized photon, which is impossible when everything is
correct. Such errors allow Alice and Bob to find out that an eavesdropping attack has taken place. Alice and Bob proceed in detail as follows. After they have completed their measurements, they
inform one another about the measurement settings chosen at the respective times. They eliminate as useless all the data obtained for different orientations. To offset the losses in transmission (for
example in glass fibers) and possible inefficiencies of the detectors, they communicate publicly the times at which photons should have arrived (and the settings of their apparatuses were identical)
but did not, and they discard these data. What remains is a sequence of measurement signals which are, according to quantum mechanics, identical for both observers, provided no eavesdropping
occurred. To be sure that this was the case, Alice and Bob select, with the help of a random but common key, several of their data (for example the 23rd, 47th, 51st, etc. value in their cleaned
sequence) and inform one another about them using public communication channels. Finding the expected exact coincidence, they are satisfied, and they eliminate the now publicly known data from their
lists which form the desired absolutely secret key. The security of the key, as discussed previously, is guaranteed from the beginning by the impossibility of objectifying the polarization properties
of individual photons. If Eve could determine through a measurement on a single photon its “objective” polarization, she would be able to send to Bob an exact copy and she would be undetectable. Such
things are basically possible in classical theory! Although we made use of the specific quantum mechanical correlations à la Einstein, Podolsky and Rosen, we actually do not need them!¹ Instead of
starting from correlated photon pairs and leaving “her” photon to choose between two orthogonal polarization directions, Alice can generate the photons she sends to Bob and set their polarization
state at will. The secret transmission of the key can thus be realized much more simply (Bennett et al., 1992) in the following way. Alice has a light source at her disposal which generates at given
times, say, a vertically polarized photon. Alice changes the polarization state of the photon at will with the help of two Pockels cells, thereby choosing the polarization directions to be either 0◦, 45◦, 90◦ or 135◦ (with respect to the vertical). Bob performs measurements in the same way as before. He communicates to Alice, through a public channel, the times at which
¹ These correlations could be used for an additional test. Alice and Bob rotate their polarization prisms not only by 45◦ but also by 22.5◦ (also purely at random). From the measured data they can, through communication via public channels, calculate coincidence counting rates for a difference between the polarization directions of 22.5◦ and 67.5◦ and thus test the Bell inequality. When it is violated, as is predicted by quantum theory (see Section 11.3), they can be sure that no eavesdropper was at work.
he detected a photon and with which setting of his apparatus. Alice compares it with her record of polarization settings, and informs Bob which data to keep. A prototype of such a transmission has
already been achieved (Bennett et al., 1992). The vertically polarized primary photons were replaced, for practical reasons, by weak flashes of light emitted from a diode, and these were linearly
polarized by a polarizing filter. When the flashes contain more than one photon, eavesdropping is possible: Eve splits the signal with the help of a beamsplitter, makes a measurement on one of the
beams and sends the other unchanged to Bob. This danger can be avoided by severely weakening the primary signal to an average photon number of less than one (for instance one-tenth). This reduces
dramatically the transmission rate – in about 90% of the cases no photons will be registered at all – but the probability of detecting two photons in one flash (one by Eve and one by Bob) will be
negligibly small. Working with such extremely weak light pulses where, in most cases, the pulses are empty (in the sense that the detectors do not respond), the experimental setup can be simplified
further. Instead of converting the polarization into binary digits (0 and 1), we can work with other photon properties, such as the frequency. For example, Alice sends to Bob red and green light
pulses in a random sequence, which contain, on average, much less than one photon. Under such circumstances, Eve will detect very few signals when she chooses to use a beamsplitter for eavesdropping
– she will not detect all those which Bob detects. Eve will obtain only a very fragmentary knowledge of the key. Bob informs Alice, in public, about those times when he detected photons but not about
their color sequence, which forms the secret key. We have arrived again at photons whose behavior is governed by chance, thus illustrating the break with classical concepts. Glass fibers are good
candidates for signal transmission using polarized photons over long distances; distances of more than 20 km have already been achieved. A weak point of quantum cryptography is its low transmission
rate. Therefore, the encoding of secret messages is, in practice, performed in a different way, namely using so-called “one-way functions” – these are easily calculated (quickly) in one direction
used for the encoding, whereas the decoding (the calculation of the reverse function) is extremely time consuming. A representative example is the decomposition of a large number into prime factors.
The prime factors represent the secret key and are known only to the authorized user. From the knowledge of the public key obtained by multiplication of the factors, it is not possible to recover the
factors in a realistic time, even with the most advanced computers. The procedure is as follows: Bob chooses randomly two prime numbers and multiplies them. He sends the product to Alice, and she
uses the result as the key for coding. The decoding is only possible if the factors are known, and they are known only to Bob. Bob is therefore the only receiver able to read the message transmitted
from Alice.
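The asymmetry this scheme rests on — multiplying two primes is immediate, while recovering them from the product is laborious — can be made concrete with a toy trial-division factorizer. The primes chosen here are illustrative; real keys use moduli of hundreds of digits, far beyond the reach of any such search:

```python
def trial_factor(n: int) -> tuple[int, int]:
    """Recover p, q from n = p * q by exhaustive search -- the 'hard'
    direction of the one-way function. The cost grows like sqrt(n),
    which is hopeless for realistically sized moduli."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    raise ValueError("n is prime")

p, q = 104729, 1299709        # Bob's secret key: two primes
n = p * q                     # public key: a single multiplication
assert trial_factor(n) == (p, q)   # feasible here, not at real key sizes
```

The forward step (one multiplication) is instantaneous even for enormous numbers; only the holder of the factors can invert it in practice.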
13 Quantum teleportation
13.1 Transmission of a polarization state

The word "teleportation" comes from parapsychology and means transportation of persons or things from one place to another using mental power. It was taken
over into science fiction literature, where the transport is imagined to take place instantaneously. However this is still to be invented, and is surely nonsense – relativity theory teaches us that
the velocity of light is the upper bound for the motion of an object. Nevertheless, teleportation has occupied a firm place in our fantasies, and when renowned quantum physicists (as has happened)
use this word, they can be sure to attract attention. So, what is it all about? The basic idea is that it is not necessary to transport material constituents (ultimately the elementary particles).
The same particles already exist at other places; we “simply” need to put them together in the right way. To do this, we need a complete set of building instructions, and this is, according to
quantum theory, the quantum mechanical wave function representing the maximum information known about an object. We could imagine the wave function measured on the original system, then transmitted
via a conventional (classical) information channel to another place and there used for system reconstruction. Unfortunately, the first step, the determination of the wave function on a single system,
is impossible (see Section 10.3). However, quantum mechanics offers us another “magic trick.” It is first of all important to realize that Alice, who wants to teleport, does not need to know the wave
function: it is the transmission of the wave function that needs to be successful. That this is feasible will be shown in an example. There is, however, a serious drawback to this. The receiver, Bob,
cannot simply imprint the received wave function on the matter (or radiation) present at the site; rather, his system must somehow be “intimately related” to the system at Alice’s site. In other
words, the two systems must be entangled. Careful preparations are necessary. The teleportation relies on the fact
that Alice’s measurement on the original system also influences the partner system. What is really sent to Bob is the result of such a measurement. Let us analyze the promised example. Bennett et al.
(1993) contrived a procedure to teleport the polarization state of a photon. Let us assume the photon (labeled 1) to be in a polarization state described by the superposition

\[ |\varphi\rangle_1 = \alpha|x\rangle_1 + \beta|y\rangle_1 \qquad (13.1) \]

(which Alice need not know). The states \(|x\rangle\) and \(|y\rangle\) describe, as before, the polarization state of a photon linearly polarized in the x and y direction, respectively, and α, β are arbitrary complex numbers satisfying the normalization condition \(|\alpha|^2 + |\beta|^2 = 1\). The state described by Equation (13.1) can describe any type of polarization, in general elliptical. The "vehicle" used for teleportation is a pair of polarization entangled photons in the state

\[ |\Psi^-\rangle_{23} = \frac{1}{\sqrt{2}}\bigl(|x\rangle_2|y\rangle_3 - |y\rangle_2|x\rangle_3\bigr) \qquad (13.2) \]

(see Equation (11.4)). The pair is generated by a source which sends photon 2 to Alice
and photon 3 to Bob. (The indices 1, 2 and 3 refer to three different modes of the radiation field.) The goal is to put photon 3 into the state of the original photon 1 as in Equation (13.1). To
accomplish this task, Alice and Bob must become active. Alice must perform a measurement on photon 1 including photon 2 (resulting in a destruction of both photons) and she must communicate the
result to Bob via a classical communication channel. Bob learns in this way which manipulation he has to carry out on photon 3 to obtain a duplicate of photon 1. The theoretical background to this
procedure consists in a decomposition of the output state of the whole system of three photons in terms of Bell states (introduced in Section 11.1) of photons 1 and 2 (which form a complete basis of
the Hilbert space of both photons). A short calculation leads to

\[ |\varphi\rangle_1 |\Psi^-\rangle_{23} = \tfrac{1}{2}\Bigl[\,(\alpha|y\rangle_3 - \beta|x\rangle_3)\,|\Phi^+\rangle_{12} + (\alpha|y\rangle_3 + \beta|x\rangle_3)\,|\Phi^-\rangle_{12} + (-\alpha|x\rangle_3 + \beta|y\rangle_3)\,|\Psi^+\rangle_{12} + (-\alpha|x\rangle_3 - \beta|y\rangle_3)\,|\Psi^-\rangle_{12}\Bigr]. \qquad (13.3) \]

We see how to proceed.
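The decomposition in Equation (13.3) can be checked numerically. The sketch below builds the three-photon state and projects photons 1 and 2 onto the Ψ⁻ Bell state; up to normalization and an irrelevant global sign, photon 3 is left in the original state (13.1). The basis ordering and the use of NumPy are conventions chosen here, not the book's:

```python
import numpy as np

x = np.array([1.0, 0.0])              # |x> polarization basis vector
y = np.array([0.0, 1.0])              # |y>

alpha, beta = 0.6, 0.8j               # |alpha|^2 + |beta|^2 = 1
phi1 = alpha * x + beta * y           # state (13.1) of photon 1
psi_minus = (np.kron(x, y) - np.kron(y, x)) / np.sqrt(2)   # state (13.2)

# Full three-photon state |phi>_1 |Psi->_23, a vector in C^8;
# reshaping to (4, 2) separates photons (1, 2) from photon 3.
state = np.kron(phi1, psi_minus).reshape(4, 2)

# Project photons 1 and 2 onto the Bell state Psi^-; what remains is
# the (unnormalized) conditional state of photon 3.
photon3 = psi_minus.conj() @ state
photon3 = photon3 / np.linalg.norm(photon3)

# Up to a global phase, photon 3 now carries the original state.
print(round(abs(np.vdot(photon3, phi1)), 6))   # 1.0
```

Projecting onto the other three Bell states leaves photon 3 in the corresponding bracketed states of Equation (13.3), each occurring with probability 1/4.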
Alice must ensure, through proper measurement, that the system consisting of photons 1 and 2 is transformed into one of the Bell states. The complete wave function given in Equation (13.3) is thus
reduced to that part which is in agreement with the measurement result. Photon 3, arriving at Bob’s site, is described by the state in the corresponding round brackets of Equation (13.3). Looking
more closely at these terms, we find that the fourth is identical to the state of the original photon 1, while the others can be converted into this state by a sign change of α or β, respectively,
and/or through the permutation of the states
Fig. 13.1. Teleportation of a polarization state. Alice communicates to Bob her measurement results. S = source of entangled photon pairs; BS = beamsplitter; P = polarizing prism; D = detector, I =
input signal, T = teleported signal.
x and y. The sign change can be experimentally achieved using a half-wave plate oriented in the y direction, and the interchange of x and y is achieved through a 45◦ rotated half-wave plate. When
Alice communicates which Bell state she measured, Bob knows exactly what to do to prepare an exact copy of the original photon. The difficult part is on Alice’s side: she mixes the original photon,
with the help of a semitransparent mirror, with photon 2 to distinguish experimentally between the Bell states (Fig. 13.1). Polarizing prisms are positioned into the reflected and transmitted beams
with detectors in each of the output ports (the orientations of the prisms coincide with the x and y directions). A detailed analysis (Braunstein and Mann, 1995) shows that there are no problems distinguishing between the states \(|\Psi^-\rangle_{12}\) and \(|\Psi^+\rangle_{12}\): in the first case we detect at each prism one photon and the two photons have different polarizations (in the x and y directions, respectively). In the second case, two differently polarized photons are detected at the first or the second polarizing prism. Unfortunately, it is not possible to distinguish between the two states \(|\Phi^+\rangle_{12}\) and \(|\Phi^-\rangle_{12}\) with the present apparatus; two photons leave jointly any of the four output ports of the polarizing prisms, with the same probability. The teleportation therefore succeeds only in 50% of all cases. Just
to demonstrate the possibility
of teleportation, the experimentalist can decide to give away an additional 25% and be content with the detection of the Bell state \(|\Psi^-\rangle_{12}\); according to Equation (13.3) this spares Bob the effort of performing any manipulations on his photon. As stated before, the Bell state \(|\Psi^-\rangle_{12}\) is the only one which is detected by registering a photon at each of the prisms. This property is thus sufficient
for its identification. This has also reduced Alice’s experimental effort: the polarizing prisms are not required and the photons can be sent directly onto the detectors. Experimentally, the quantum
teleportation process proceeds as follows (with Alice using the apparatus described in Fig. 13.1). Finding a coincidence, Alice knows that (since the projection is an instantaneous process) from that very instant photon 3 is in the same state as the original photon 1. Bob is ignorant of this, so Alice has to communicate the happy event to him. This can happen at most at the speed of light,
which saves causality. When Alice does not find a coincidence, the teleportation failed. Taking into consideration the usually low detector efficiency, we see that the number of actually measured
coincidences drops further, and that the whole procedure, realized for the first time by the Zeilinger group at Innsbruck (Bouwmeester et al., 1997), is inefficient. Notwithstanding this, the
experiment confirmed the prediction of the theory that a teleportation actually took place, and thus the concept of teleportation at least was proven. What is the physical basis behind the
teleportation concept? The basic mechanism is projection. Therefore, only one polarization state, from all those possible for photon 3, will become real (factual). An ensemble of photons of type 3
is in a completely unpolarized state (as long as measurements are performed only on photon 3 alone). Using corresponding polarization optics, photons with arbitrary polarization state can be
prepared from this ensemble. The measurement on photon 3 can be replaced by a measurement on photon 2 because the two photons are strongly entangled. This was explained in Section 11.3. Because we
are interested only in principles, let us for simplicity limit ourselves to linear polarization (of different orientations). The teleportation process simplifies considerably when Alice knows the
polarization of photon 1 (this is the case in the experiment anyway); i.e. the apparatus used for preparation is a polarizer rotated by an angle θ. Alice can then proceed in the following way: she
lets photon 2 fall on a polarization prism also rotated by the angle θ, with a detector in each output. Depending on which of the detectors responds, she communicates to Bob whether the photon is
already in the desired state or its polarization is rotated by 90◦ , and Bob can easily undo this. The important point to be considered in teleportation is that the state of photon 1 is not known,
which requires the projection on the Bell states, and this is experimentally more demanding. In any case, the teleportation of a given state is, in principle, the purposeful “crystallizing” of a
prescribed state from a kind of potential
state, in which the former – as one among many – is latent as a possibility. When quantum teleportation confuses us it means only that we did not really understand the reduction of the wave packet.
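The full discrete protocol — Alice's Bell measurement, the classical communication, and Bob's corrective half-wave-plate operations (sign change of β, interchange of x and y) — can be assembled into one numerical sketch. The matrix conventions are assumptions of this sketch, not prescriptions of the text:

```python
import numpy as np

x = np.array([1.0, 0.0]); y = np.array([0.0, 1.0])
bell = {  # Bell basis of photons 1 and 2
    "Psi-": (np.kron(x, y) - np.kron(y, x)) / np.sqrt(2),
    "Psi+": (np.kron(x, y) + np.kron(y, x)) / np.sqrt(2),
    "Phi-": (np.kron(x, x) - np.kron(y, y)) / np.sqrt(2),
    "Phi+": (np.kron(x, x) + np.kron(y, y)) / np.sqrt(2),
}

# Bob's toolbox: a half-wave plate along y flips the sign of beta,
# a 45-degree rotated half-wave plate interchanges x and y.
SIGN = np.array([[1.0, 0.0], [0.0, -1.0]])
SWAP = np.array([[0.0, 1.0], [1.0, 0.0]])
correction = {"Psi-": np.eye(2), "Psi+": SIGN,
              "Phi-": SWAP, "Phi+": SWAP @ SIGN}

alpha, beta = 1 / np.sqrt(3), np.sqrt(2 / 3) * 1j
phi1 = alpha * x + beta * y
state = np.kron(phi1, bell["Psi-"]).reshape(4, 2)

for name, b in bell.items():
    branch = b.conj() @ state                 # photon 3, unnormalized
    prob = np.vdot(branch, branch).real       # each outcome has weight 1/4
    photon3 = correction[name] @ (branch / np.linalg.norm(branch))
    overlap = abs(np.vdot(photon3, phi1))     # 1 up to a global phase
    print(f"{name}: prob={prob:.2f} fidelity={overlap:.6f}")
```

For every one of the four equally likely Bell outcomes, the classically conditioned correction leaves photon 3 in an exact copy of the unknown input state.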
Our astonishment increases when we analyze the case of a photon (photon 1) which is not isolated (with a wave function of its own) but is entangled with another photon (photon 0), which is somewhere
in the world. From the linearity of the teleportation process, we can immediately conclude that the entanglement will be transferred onto photon 3. The destiny of a particle (photon 0) will be, as
soon as it loses its partner (photon 1), tied in a mysterious way to a completely strange particle (photon 3), “until death do they not part.” However surprising the accomplishments of quantum
mechanics seem in describing the teleportation process – we see the fantastic potential of quantum mechanical entanglement – a practical person will oppose our description, saying that we are making
things unnecessarily complicated. Alice could, for example, send the original photon via a glass fiber directly to Bob. Such a criticism will be void, however, when we replace photons by atoms. When
Alice and Bob are separated by a large distance, it would take a long time (compared with the propagation at the velocity of light) before the original atom arrived at Bob’s site. Using
teleportation, however, we could transfer the quantum mechanical state of the atom much faster (in principle, with the speed of light). We must just take care that the vehicle of teleportation, the
entangled atomic pair, started early enough such that one atom arrives at the same time as the original atom at Alice’s site and the other atom simultaneously at Bob’s site. Indeed, the discussed
teleportation scheme can be transferred without any problems to atoms, which, under certain experimental conditions, can be idealized as two-level systems. The two orthogonal polarization states are
then replaced by the two energy eigenstates. Experiments with atoms excited to Rydberg states look promising in this respect. Recently, two atoms have been entangled (in analogy to Equation (11.3))
through coherent energy exchange within a non-resonant empty cavity (Osnaghi et al., 2001).

13.2 Transmission of a single-mode wave function

A natural question arises as to whether quantum
teleportation is also feasible in the case of continuous variables. The two-dimensional Hilbert space appropriate for polarization is replaced by an infinite-dimensional one. This problem has been
solved theoretically (Vaidman, 1994; Braunstein and Kimble, 1998). In addition, the proposed procedure was experimentally realized (Furusawa et al., 1998). The system considered is one mode of the
radiation field (in the following denoted again by the index 1). It is its, in principle unknown, quantum state which should be transferred by Alice onto the wave arriving at Bob’s site. A suitable
vehicle
was found to be an entangled two-mode state of the type used already by Einstein, Podolsky and Rosen (1935) in their sophisticated criticism of quantum mechanics. It is a state of two particles with
highly correlated position and momentum variables. The total momentum as well as the position difference have sharp values. Identifying the position and momentum with the quadrature components of the
field (see Section 9.1), we can easily take the step from a mechanical system to a two-mode system. In fact, there exist known methods of non-linear optics that allow us to prepare approximately
quantum states of two modes 2 and 3 with quadrature components satisfying the relations

\[ x_2 = -x_3, \qquad p_2 = p_3. \qquad (13.4) \]
The quadrature components themselves fluctuate, but the fluctuations of one wave find their counterpart, according to Equation (13.4), in the other wave. This radiation is generated by mixing two
beams (with the help of a 50:50 beamsplitter), one of which is strongly squeezed in the x component and the other in the p component (see Section 9.2). (The delta function-like correlations (Equation
(13.4)) are obtained only in the limit of infinitely large squeezing.) The teleportation of wave 1 takes place in the following manner. Alice mixes wave 1 with wave 2 of the vehicle using a
beamsplitter and measures the x component on one and the p component on the other output (Fig. 13.2) using the homodyne technique (see Section 9.3). Let x 0 and p 0 be the measured values. The
quantum mechanical description (Braunstein and Kimble, 1998) shows that the state of Bob's wave is readily described in the Wigner formalism: its Wigner function \(W_{\mathrm{out}}(x_3, p_3)\) is related to the Wigner function \(W_{\mathrm{in}}(x_1, p_1)\) of the original wave 1 through a simple displacement in phase space (x, p plane) by \(-\sqrt{2}\,x^0\) in the x direction and by \(\sqrt{2}\,p^0\) in the p direction. Thus the following relation holds:

\[ W_{\mathrm{out}}(x_3, p_3) = W_{\mathrm{in}}\bigl(x_3 + \sqrt{2}\,x^0,\; p_3 - \sqrt{2}\,p^0\bigr). \qquad (13.5) \]
To make the result plausible, let us forget for a moment quantum theory and look at the problem from a classical point of view. The Wigner function is then considered as a classical distribution
function. We can think of the quadrature components of the three waves at each time – more precisely speaking in each time interval whose length is determined by the coherence time – as having a
defined (not predictable) value \(x_1, p_1\), etc. The beamsplitter used by Alice causes the quadrature components of the outgoing waves \(x', p'\) and \(x'', p''\), and those of the
Fig. 13.2. Teleportation of a single-mode wave function. Alice measures the quadrature components x and p and communicates the results to Bob. Using the received values, Bob changes the amplitude and
the phase of a laser beam. S = source of an entangled two-mode state; BS = beamsplitter; M = strongly reflecting mirror; L = laser; A = amplitude modulator; P = phase modulator; I = input signal; T =
teleported signal.
incident waves \(x_1, p_1\) and \(x_2, p_2\) to be related as follows:

\[ x' = \frac{1}{\sqrt{2}}(x_1 + x_2), \qquad p' = \frac{1}{\sqrt{2}}(p_1 + p_2); \]
\[ x'' = \frac{1}{\sqrt{2}}(-x_1 + x_2), \qquad p'' = \frac{1}{\sqrt{2}}(-p_1 + p_2). \qquad (13.6) \]

(These relations are simply Equation (15.48) for the special case \(r = t = 1/2\) rewritten into real and imaginary parts.) When Alice measures \(x'\) with the result \(x^0\) and \(p''\) with the result \(p^0\), we have

\[ x^0 = \frac{1}{\sqrt{2}}(x_1 + x_2), \qquad p^0 = \frac{1}{\sqrt{2}}(-p_1 + p_2). \qquad (13.7) \]
Taking into account also the correlations in Equations (13.4) for waves 2 and 3, we can proceed as follows:

\[ x_1 = x_3 + \sqrt{2}\,x^0, \qquad p_1 = p_3 - \sqrt{2}\,p^0. \qquad (13.8) \]
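The classical bookkeeping of Equations (13.4)–(13.8) can be checked numerically with a Gaussian toy model. The ideal correlations \(x_2 = -x_3\), \(p_2 = p_3\) are imposed exactly (the limit of infinite squeezing), and all distribution parameters are illustrative choices:

```python
import random

rng = random.Random(7)

def teleport_once():
    # Input wave 1: fluctuating quadratures to be transferred.
    x1, p1 = rng.gauss(2.0, 1.0), rng.gauss(-1.0, 1.0)
    # EPR vehicle, waves 2 and 3, with ideal correlations (13.4).
    x2, p2 = rng.gauss(0.0, 5.0), rng.gauss(0.0, 5.0)
    x3, p3 = -x2, p2
    # Alice mixes waves 1 and 2 on a 50:50 beamsplitter and measures
    # one quadrature in each output port, Equation (13.7).
    x0 = (x1 + x2) / 2**0.5
    p0 = (-p1 + p2) / 2**0.5
    # Bob displaces his wave by the communicated results, Equation (13.8).
    x_out = x3 + 2**0.5 * x0
    p_out = p3 - 2**0.5 * p0
    return (x1, p1), (x_out, p_out)

inp, out = teleport_once()
print(all(abs(a - b) < 1e-9 for a, b in zip(inp, out)))   # True
```

In every round the large random fluctuations of the vehicle cancel exactly, and Bob's displaced quadratures reproduce those of the input wave.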
This implies strong correlations between the original wave and Bob’s wave: when the first wave (randomly) takes the values x1 and p1 , then in the latter wave the values x3 and p3 satisfying
Equations (13.8) will be realized. The distribution of the quadrature components of the original wave is thus transferred to Bob’s wave in the form of Equation (13.5). Let us return to the quantum
mechanical prediction in Equation (13.5). In order to reproduce the state of wave 1, the displacement of the Wigner function must be undone. Fortunately, there exists a procedure for this: we have to
mix wave 3 with a wave in a coherent state (Glauber state). The mirror used for this task has to be highly reflective (large coefficient of reflectivity r ) and correspondingly weakly transparent
(small transmittivity coefficient t) so that it will reflect almost all of wave 3 (and hardly change it). The coherent wave becomes superposed with the reflected wave after having passed through the mirror, whereby its Wigner function becomes displaced. In passage, the amplitude of the coherent wave is weakened by a factor of √t. We have to compensate for this by increasing the input amplitude by the factor 1/√t. Taking into consideration the relation α = (1/√2)(x + ip) between the quadrature components and the coherent state complex amplitude α (Section 15.2), Bob has to use for the mixing a Glauber state |α⟩ with α = (x⁰ − ip⁰)/√t. (Because of the large amplitude, it is de facto a classical wave.) To prepare the wave, Bob needs to know the measurement data found by Alice, which
must be communicated to Bob with the speed of light at best. Using optical mixing, Bob undoes the displacement of the Wigner function, and the teleportation was successful! The two different squeezed
light beams in the performed experiment (Furusawa et al., 1998) were generated using a sub-threshold operated optical parametric oscillator with ring type resonator geometry. The oscillator was
pumped with two counter-propagating waves coming from the same laser. The input signal (wave 1) originated from the same primary laser; i.e. it was in a Glauber state. The experiment was performed in
the continuous regime. The homodyne measurement was thus continuous, and the measured data in the form of difference photocurrents (Section 9.3) were sent directly to Bob’s observation station via
electric conductors. There they drove, after amplification, amplitude and phase modulators. The modulators prepared the required auxiliary wave from a wave split from the primary laser beam. To
validate the teleportation, Furusawa et al. introduced a third (fictitious) observer named Victor (verifier) who generated the input signal for Alice and compared it with the teleported signal. The
fidelity of the teleportation was found to be 58 ± 2%. The relatively low value is caused, on the one hand, by the insufficient squeezing of waves 2 and 3 and, on the other hand, by propagation
losses of the two waves as well as detector inefficiencies. The advantage of this experiment is – contrary to the teleportation of the polarization state of a photon – that the quantum state of each
input signal was teleported.
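Bob's correction step can also be checked with a short numerical sketch (my own illustration; the measurement values and mirror transmittivity are arbitrary): mixing wave 3 with a strong coherent wave of amplitude α = (x⁰ − ip⁰)/√t at a weakly transmitting mirror shifts the quadratures by exactly the amounts needed to undo the displacement of the Wigner function.

```python
import numpy as np

# The transmitted coherent amplitude is sqrt(t)*alpha_L; with the convention
# alpha = (x + 1j*p)/sqrt(2), it shifts the quadratures by
# dx = sqrt(2)*Re(sqrt(t)*alpha_L) and dp = sqrt(2)*Im(sqrt(t)*alpha_L).

x0, p0 = 0.8, -0.5                       # Alice's (hypothetical) results
t = 1e-4                                 # weakly transmitting mirror
alpha_L = (x0 - 1j * p0) / np.sqrt(t)    # Glauber-state amplitude Bob prepares

dx = np.sqrt(2) * (np.sqrt(t) * alpha_L).real   # shift of the x quadrature
dp = np.sqrt(2) * (np.sqrt(t) * alpha_L).imag   # shift of the p quadrature

# Wave 3 carries W_in displaced by -sqrt(2)*x0 in x and +sqrt(2)*p0 in p;
# Bob's shifts cancel that displacement exactly:
assert np.isclose(dx, np.sqrt(2) * x0)
assert np.isclose(dp, -np.sqrt(2) * p0)
```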
14 Summarizing what we know about the photon
How can we construct a picture of the photon from the wealth of observation material available to us? The photon appears to have a split personality: it is neither a wave nor a particle but something
else which, depending on the experimental situation, exhibits a wave- or a particle-like behavior. In other words, in the photon (as in material particles such as the electron) the particle–wave
dualism becomes manifest. Whereas classically the wave and the particle pictures are separate, quantum mechanics accomplishes a formal synthesis through a unified mathematical treatment. Let us look
first at the wave aspect familiar from classical electrodynamics, which seems to be the most natural description. It makes all the different interference phenomena understandable, such as the
“interference of the photon with itself” on the one hand and the appearance of spatial and temporal intensity correlations in a thermal radiation field on the other (which are obviously brought about
by superposition of elementary waves emitted independently from different atoms). It might come as a surprise (at least for those having quantum mechanical preconceptions) that the classical theory
is valid down to arbitrarily small intensities: the visibility of the interference pattern does not deteriorate even for very small intensities – the zero point fluctuations of the electromagnetic
field advocated by quantum mechanics do not have a disturbing effect – and is valid not only for conventional interference experiments but also for interference between independently generated light
beams (in the form of laser light). The fact that the natural linewidth of the radiation and the mean lifetime of an excited atomic level are related in accordance with the classical theory supports
the assumption that the photon is emitted in a continuous process as a wave. A direct consequence of the unlimited validity of the classical description of the interference processes is the perfect
functioning of conventional spectrometers, even at very low intensities (we disregard disturbances whose effects are more pronounced at low intensities than at high ones). The spectral decomposition
of a single photon
is possible (at least in principle). A wave corresponding to a single photon is split into a reflected and a transmitted partial wave in a high resolution Fabry–Perot interferometer. The frequency
domains of the partial waves are mutually complementary; i.e. they sum to the spectrum of the original wave. The wave train thus becomes longer. Similarly, a photon in the form of a spherical wave
passes partly through a hole in a mirror, while the other part is reflected. The frequency spectrum of a photon can not only be narrowed but also widened. The latter can be achieved using a fast
shutter which can cut out or cut away a part of the wave. According to the Fourier theorem, the spectrum of the resulting wave must be broader than that of the original wave (which is assumed to be
coherent). A fundamental problem arises essentially in the process of measurement. It is related to the fact that the detection is based almost entirely on the photoeffect. As a consequence, the
photon is lost in the measurement process. The fact that in the case of beamsplitting (of exactly one incident photon) we can detect something at all is already in clear contradiction to classical
wave concepts. The decomposition into partial waves should be accompanied by a corresponding splitting of the energy, and hence the detectors should not respond because they require the full energy
hν of the photon and such an energy is seemingly no longer available. Fortunately, such an energy “dissipation” does not take place. Rather, the photon (as an energetic whole) can be found in one
partial beam – for instance in the partial wave running through a Fabry–Perot interferometer. Thus, the single measurement delivers only very limited information. For example, we find the photon in
one of the many output channels of a spectrometer which indicates a certain frequency value to be measured. We cannot measure a frequency spectrum on a single photon; rather, frequent repetition of
the experiment enables us to record a spectrum. The particle character of light manifests itself in the same spectacular way as the wave character. It shows up in the process of spontaneous emission:
positioning a detector close enough to the excited atom, the detector clicks in some cases at times that are shorter than the lifetime of the upper level of the atom. According to this the whole
energy of the photon hν is sometimes already available at the moment when the emission, in the sense of wave generation, has just started! Conversely, absorption acts take place in such a short time
that it is not possible for the atom to accumulate the necessary amount of energy from the field if it were distributed there continuously. The energy must be present in an agglomerated form. The
photons conceived as localized energy packets, however, do not possess individuality: it is not possible to trace back the “course of life” of a photon (for example, in a thermal light field) to its
moment of birth, even in a Gedanken experiment. If this were the case, there would be no spatial or temporal correlations
between events detected by two different detectors. As mentioned previously, propagation processes can be described only with the aid of a wave theory. The wave character of light manifests itself in
experiments through a relation between amplitude and probability. The momentary intensity of the light determines the probability of a detector response at the respective position. Another
fundamental property of the photon is its energetic indivisibility, illustrated and discussed in detail using the example of a beamsplitter in Section 7.1. The consequence is that electromagnetic
energy cannot be arbitrarily “diluted” – either we find a photon or we do not – what becomes smaller is the probability of the first event when the cross section of the light beam becomes larger
through propagation. It is a characteristic of the particle picture that when it is applied chance also comes into play. This is equally valid for the “instant” of emission as for that of absorption.
In addition, in the case of absorption by an ensemble of atoms illuminated by not too strong a wave, only a fraction of the atoms will be excited. The question concerning which atom will “receive” a
photon will be decided by “playing dice.” Similarly, in beamsplitting – assuming a single incident photon – it is left to chance which detector will receive the photon. From the randomness of the
events, we can see regularities acting which are completely alien to classical mechanics and electrodynamics. In fact, specific quantum mechanical features of natural phenomena are thus revealed. The
reason why we have difficulty understanding them, especially in the case of light, is that the “crazy things” that we became used to in the microworld appear now on a macroscopic scale. We do not
find it exciting that a bound electron in a hydrogen atom should be “smeared out” over a typical scale of 10−10 m, but we are reluctant to believe that in the case of the “interference of a photon
with itself” the photon is present in both partial beams whereby the spatial separation of the two “halves” approaches meters or kilometers (in principle, there is no limit!). Obviously, we have to
accept these concepts, and the recently realized Gedanken experiments of Einstein, Podolsky and Rosen described in Section 11.4 clearly demonstrate that specific quantum mechanical correlations can
extend over macroscopic distances. The world is indeed more complicated than one is led to believe, and photons contribute in great part to this.
15 Appendix. Mathematical description
15.1 Quantization of a single-mode field

A single-mode field is characterized by a sharp (circular) frequency ω and a spatial distribution of the electric field strength (given at the initial time t = 0; see also Section 4.2). When we assume that the field suffers no interaction and hence is propagating undisturbed, the positive and negative frequency parts of the field amplitude can be written, according to classical electrodynamics, in the form

E^(+)(r, t) = F(r) e^(−iωt) A,   E^(−)(r, t) = F*(r) e^(iωt) A*.   (15.1)
F(r) characterizes the given spatial field distribution (it is a normalized solution of the time independent wave equation) and A is a complex amplitude. The quantization of the field is formally realized by replacing the classical amplitude A by a (non-Hermitian) operator â and accordingly the complex conjugate amplitude A* by the Hermitian conjugate operator â†. Equations (15.1) are thus replaced by

Ê^(+)(r, t) = E(r) e^(−iωt) â,   Ê^(−)(r, t) = E*(r) e^(iωt) â†,   (15.2)

where the functions F(r) and E(r) differ only in their normalization. The operators â, â† satisfy the fundamental commutation relation

[â, â†] = 1,   (15.3)

and the energy of the radiation field, or more precisely the Hamiltonian, reads

Ĥ = ħω â†â.   (15.4)

From the commutation relation it follows that the operator N̂ ≡ â†â has the eigenvalues n = 0, 1, 2, . . . , and due to this and Equation (15.4) it deserves the name "photon number operator." Indeed (for a single-mode field) the response
probability of a photodetector is proportional to the expectation value of N̂. However, for the coincidence measurement it is not simply the expectation value of N̂²; we have to cast the latter operator into a normally ordered form. This means that all the operators â have to be placed on the right of the operators â†. The probability that two detectors simultaneously respond is proportional to ⟨â†²â²⟩, which, due to Equation (15.3), can be rewritten as ⟨N̂(N̂ − 1)⟩. The eigenvectors |n⟩ of the photon number operator form a complete orthogonal system of functions. They describe field states with sharp photon numbers n and are called Fock states. The action of the operators â and â† reads as

â|n⟩ = √n |n − 1⟩,   â†|n⟩ = √(n + 1) |n + 1⟩   (n = 0, 1, 2, . . .).   (15.5)

These relations explain why the operators â and â† are called photon annihilation and creation operators. Applying the creation operator n times to the vacuum state, we obtain the state with n photons

|n⟩ = (n!)^(−1/2) (â†)^n |0⟩.   (15.6)
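Relations (15.3) and (15.5) are easy to verify in a truncated Fock basis (a standard matrix representation; the truncation dimension below is an arbitrary choice of mine, and the commutator necessarily fails at the truncation edge):

```python
import numpy as np

# Build a and a^dagger on the first d number states and check Eqs. (15.3)
# and (15.5) numerically.

d = 30                                   # truncation dimension
a = np.diag(np.sqrt(np.arange(1, d)), k=1)   # a|n> = sqrt(n)|n-1>
ad = a.conj().T                              # creation operator a^dagger

# [a, a^dagger] = 1 holds away from the truncation edge:
comm = a @ ad - ad @ a
assert np.allclose(comm[:-1, :-1], np.eye(d)[:-1, :-1])

# a|n> = sqrt(n)|n-1>, e.g. for n = 5:
ket5 = np.zeros(d); ket5[5] = 1.0
assert np.allclose(a @ ket5, np.sqrt(5) * np.eye(d)[4])
```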
Starting from Equations (15.2) and (15.6), we conclude that the expectation value of the electric field strength equals zero for states with well defined photon numbers. In addition, from the
definition of the photon number operator and the representation in Equation (15.2) of the field strength operator, it follows that the two operators do not commute. As a consequence, there are no
states for which the electric field strength and the energy (photon number) are simultaneously sharp. The description of the electric field with the aid of the creation and annihilation operators is
tailored to the particle aspect of the field. The description of processes involving energy exchange such as emission or absorption takes an intuitive and transparent form when the language of
creation and annihilation operators is used, as in these processes photons are really created or annihilated. When the particle aspect of the field is suppressed in favor of the wave aspect, as in
the case of light field squeezing or the homodyne detection technique, description through the so-called quadrature components becomes favorable. Let us consider the simple case of a linearly polarized plane wave. According to Equation (15.2), the complex representation of the electric field strength Ê(r, t) = Ê^(−)(r, t) + Ê^(+)(r, t) takes the form

Ê(r, t) = E₀ [e^(i(kr−ωt)) â + e^(−i(kr−ωt)) â†],   (15.7)

where E₀ is a constant and k is the wave vector. Using the decomposition of the exponential into real and imaginary parts, we obtain the real valued representation

Ê(r, t) = √2 E₀ [x̂ cos(ωt − kr) + p̂ sin(ωt − kr)].   (15.8)
The two Hermitian operators

x̂ = (1/√2)(â† + â),   p̂ = (i/√2)(â† − â),   (15.9)

are the counterparts of the real and imaginary parts of the classical complex amplitude A, and they represent the two quadratures of the field. They are the exact analogs of position and momentum when we compare the single-mode field with a harmonic oscillator. The chosen notation should remind us of this analogy. Using Equation (15.3) it is easy to check the validity of the commutation relation

[x̂, p̂] = i·1.   (15.10)
Equation (15.8) has a simple interpretation. The electric field is split into two parts: one component (x̂) is in phase with a classical reference wave oscillating as cos(ωt − kr) and the other component (p̂) is out of phase. The chosen reference wave was a very special one. Any wave that differs from the considered one by a phase shift of Θ, i.e. whose electric field strength is given by E₁ cos(ωt − kr − Θ), is equally well suited for our purpose. This is not only of theoretical but also of practical interest for field detection using the homodyne technique (see Sections 9.3 and 15.6). Equations (15.8) and (15.9) can be replaced by

Ê(r, t) = √2 E₀ [x̂_Θ cos(ωt − kr − Θ) + p̂_Θ sin(ωt − kr − Θ)],   (15.11)

where

x̂_Θ = (1/√2)(e^(iΘ) â† + e^(−iΘ) â),   p̂_Θ = (i/√2)(e^(iΘ) â† − e^(−iΘ) â).   (15.12)

The new quadrature components are related to the old ones through a simple rotation by the angle Θ:

x̂_Θ = x̂ cos Θ + p̂ sin Θ,   p̂_Θ = −x̂ sin Θ + p̂ cos Θ.   (15.13)

Because this is a unitary transformation, the operators x̂_Θ and p̂_Θ also satisfy the canonical commutation relation, Equation (15.10). From Equation (15.13) it follows that p̂_Θ is identical to x̂_(Θ+π/2). By changing the angle Θ from 0 to π, we obtain all possible quadrature components. (Including the interval π, . . . , 2π will change only the sign of the quadratures.) The quadratures correspond to different measurements, hence we are dealing with a continuous manifold of observables. The amazing fact is that these observables are measurable in a rather simple way (see Sections 9.3 and 15.6).
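Both properties of the rotated quadratures — the canonical commutator and the identity between p̂ at angle Θ and x̂ at Θ + π/2 — can be checked in a truncated Fock basis (my own numerical sketch; dimension and angle are arbitrary):

```python
import numpy as np

# Rotated quadratures x_theta, p_theta built from truncated a, a^dagger.

d = 40
a = np.diag(np.sqrt(np.arange(1, d)), k=1)
ad = a.conj().T

def x_theta(theta):
    return (np.exp(1j * theta) * ad + np.exp(-1j * theta) * a) / np.sqrt(2)

def p_theta(theta):
    return 1j * (np.exp(1j * theta) * ad - np.exp(-1j * theta) * a) / np.sqrt(2)

theta = 0.37
x, p = x_theta(theta), p_theta(theta)

# canonical commutator i*1, away from the truncation edge:
comm = x @ p - p @ x
assert np.allclose(comm[:-1, :-1], 1j * np.eye(d)[:-1, :-1])

# p_theta coincides with x_(theta + pi/2):
assert np.allclose(p, x_theta(theta + np.pi / 2))
```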
15.2 Definition and properties of coherent states

Let us find quantum mechanical states of a single-mode field which come as close as possible to the classical states with well defined phases and amplitudes. To this end we require that the electric field strength has minimum fluctuations. In addition we require that the mean energy expressed through the mean photon number has a given value. Without the additional condition we would obtain eigenfunctions of the field strength operators which do not have a finite mean energy. Let us straight away relax the requirement for the electric field strength. We are content when the electric field strength fluctuations become as small as possible in the time average (over a light oscillation period). We thus wish to minimize

ΔE²(r, t) ≡ ⟨Ê²(r, t)⟩ − ⟨Ê(r, t)⟩² = minimum (averaged over an oscillation period),   (15.14)

with the constraint

⟨N̂⟩ ≡ ⟨â†â⟩ = N (fixed).   (15.15)
Using Equations (15.2) and (15.3), we rewrite Equation (15.14) as follows:

ΔE²(r, t) = 2|E(r)|² (⟨â†â⟩ − ⟨â†⟩⟨â⟩ + 1/2) = minimum.   (15.16)

Combining Equations (15.16) and (15.15), we see that minimizing the mean field strength fluctuations is equivalent to maximizing the product ⟨â†⟩⟨â⟩ = ⟨â⟩*⟨â⟩ (the squared absolute value of ⟨â⟩) for a fixed mean photon number ⟨â†â⟩. For the unknown state we make the ansatz

|ψ⟩ = Σ_n c_n |n⟩,   (15.17)

and express the expectation value of â through the unknown expansion coefficients c_n. With the help of Equation (15.5) we obtain

⟨ψ|â|ψ⟩ = Σ_n c_n* c_(n+1) √(n + 1).   (15.18)

The right hand side of Equation (15.18) can be understood as the scalar product of two vectors with components a_n = c_n and b_n = c_(n+1) √(n + 1). The scalar product satisfies Schwarz's inequality

|Σ_n a_n* b_n|² ≤ Σ_n |a_n|² Σ_n |b_n|²,   (15.19)
stating simply that the modulus of the scalar product cannot exceed the product of the lengths of the two vectors, which is geometrically evident. Applied to our case
we obtain the inequality

|⟨ψ|â|ψ⟩|² ≤ Σ_n |c_n|² Σ_n |c_(n+1)|² (n + 1) = N.   (15.20)

Fortunately, on the right hand side of this equation the mean photon number appears, which is assumed to be given. The question of when the left hand side of the inequality becomes maximum is easily answered: it occurs only when the two vectors are parallel to one another; i.e. when the equation b_n = αa_n holds, or, in our case,

c_(n+1) √(n + 1) = αc_n,   (15.21)

where α is an arbitrary complex constant. Equation (15.21) is a simple recursion formula, and can be easily solved to obtain the explicit representation of the coefficients c_n, namely

c_n = (c₀/√(n!)) α^n.   (15.22)

Satisfying the normalization condition Σ_n |c_n|² = 1, we find the desired wave function to be

|α⟩ = e^(−|α|²/2) Σ_(n=0)^∞ (α^n/√(n!)) |n⟩.   (15.23)

This relation completely defines the coherent states. They are
very often named after R. Glauber, who was the first to realize their usefulness for the quantum mechanical description of the radiation field. The definition in Equation (15.23) can be used to
derive several interesting formal properties of Glauber states. Applying the photon annihilation operator â on the state |α⟩, we find the simple relation

â|α⟩ = α|α⟩,   (15.24)

which is a kind of eigenvalue equation. However, we have to note that â is not a Hermitian operator. For the Hermitian conjugate operator â†, we obtain another relation, namely

⟨α|â† = α*⟨α|.   (15.25)

To be precise, we should say that the Glauber states are right hand side eigenvectors of â and left hand side eigenvectors of â†. However, they are not eigenvectors of any Hermitian operator and therefore we cannot ascribe to them any precise values of any observable, and, as a consequence, such states cannot be produced by a measurement process, even in a Gedanken experiment.
Equations (15.24) and (15.25) allow a very elegant calculation of quantum mechanical expectation values. Above all, this is true for the normally ordered operators that we are dealing with when we describe measurements performed with photodetectors (see Sections 5.2 and 15.1). From Equations (15.24) and (15.25), we derive first the simple relations

⟨α|N̂|α⟩ = ⟨α|â†â|α⟩ = |α|²,   (15.26)

⟨α|â|α⟩ = α,   ⟨α|â†|α⟩ = α*,   (15.27)

⟨α|â²|α⟩ = α²,   ⟨α|â†²|α⟩ = α*².   (15.28)

Let us recall that the operator â corresponds to the classical complex amplitude A. Because of this, the complex parameter α has the meaning of such an amplitude, and, according to Equation (15.26), the normalization is chosen in such a way that |α|² indicates the mean photon number. In particular, using Equations (15.2) and (15.26) to (15.28) we obtain

⟨α|Ê(r, t)|α⟩ = E(r) e^(−iωt) α + E*(r) e^(iωt) α*,   (15.29)

⟨α|Ê²(r, t)|α⟩ = E²(r) e^(−2iωt) α² + E*²(r) e^(2iωt) α*² + 2|E(r)|² (|α|² + 1/2),   (15.30)
where the commutation relation in Equation (15.3) was used in Equation (15.30). Apart from the additional term |E(r)|² in Equation (15.30), which represents the contribution from the vacuum fluctuations of the field and hence is not understandable from a classical point of view, the right hand side expressions of Equations (15.29) and (15.30) are exactly the classical expressions. This implies in particular that the phase of the complex number α corresponds to the phase of the field. However, from the quantum mechanical point of view, it is not a well defined value, i.e. an
eigenvalue (of a Hermitian phase operator), but rather it is the center of a phase distribution of finite width. Let us calculate from Equations (15.30) and (15.29) the dispersion of the electric field strength. We find the same result as that obtained by time averaging, namely

ΔE²(r, t) ≡ ⟨α|Ê²(r, t)|α⟩ − ⟨α|Ê(r, t)|α⟩² = |E(r)|².   (15.31)
As the previous discussion clearly demonstrated, the right hand side of Equation (15.31) is really the absolute minimum which the variance of the electric field strength can reach under the constraint of a finite field energy. As already mentioned, |E(r)|² represents the contribution from the vacuum fluctuations. It is easy to check that Equation (15.31) holds true also for the vacuum state |0⟩ (this comes as no surprise because the vacuum state can be understood as a limiting case of the Glauber state |α⟩ for α → 0). This leads us to the conclusion that the Glauber
states come as close as possible to classical states with well defined amplitudes and phases. The vacuum fluctuations impose an unbeatable limit. They are in the end responsible for the fact that the phase and the amplitude or the energy, respectively, of a light field fluctuate. From Equation (15.23) we learn that the probability of detecting (in the case of a perfect measurement) exactly n photons is given by the Poissonian distribution

w_n ≡ |c_n|² = e^(−|α|²) |α|^(2n) / n! = e^(−N) N^n / n!,   (15.32)

with N = |α|² being the mean photon number. The photon number variance then follows as

ΔN² ≡ ⟨N̂²⟩ − ⟨N̂⟩² = N.   (15.33)
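The Poissonian distribution and its mean-equals-variance property can be confirmed directly (my own numerical sketch; the amplitude α is an arbitrary choice):

```python
import numpy as np
from math import factorial

# Photon-number distribution of a coherent state: Poissonian with
# mean = variance = |alpha|^2.

alpha = 1.7
N = abs(alpha) ** 2
n = np.arange(0, 60)
w = np.exp(-N) * N ** n / np.array([factorial(k) for k in n], dtype=float)

assert np.isclose(w.sum(), 1.0)            # normalization
assert np.isclose((n * w).sum(), N)        # <N> = |alpha|^2
mean_sq = (n ** 2 * w).sum()
assert np.isclose(mean_sq - N ** 2, N)     # variance = N
```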
In the limit N → ∞, the relative variance of the photon number goes to zero as 1/N , and we come arbitrarily close to a classical state with sharp energy, as expected from the correspondence
principle. The coherent states have a drawback, however: they are not mutually orthogonal. Their scalar product is

⟨α|β⟩ = e^(α*β − (|α|² + |β|²)/2),   (15.34)

which gives for the squared absolute value

|⟨α|β⟩|² = e^(−|α−β|²).   (15.35)
Due to this property they cannot be eigenvectors of any Hermitian operator. However, they form a complete (more precisely, overcomplete) system; i.e. the unit operator can be decomposed in the form

1 = (1/π) ∫ d²α |α⟩⟨α|,   (15.36)

where the integration is over the whole complex α plane. There exists a large class of quantum mechanical states that are distinguished by the fact that the corresponding density operator can be represented in the special form

ρ̂ = ∫ d²α P(α) |α⟩⟨α|,   (15.37)

where P(α) is a real function, not necessarily positive. When we are able to satisfy Equation (15.37) with a non-pathological function P(α), we say there exists a Glauber P representation of the density matrix ρ̂.
Equation (15.37) allows us to work out a correspondence between the classical and the quantum mechanical description. We calculate the mean values of normally ordered products of creation and annihilation operators using Equation (15.37). Using Equations (15.24) and (15.25), we can replace â by α and â† by α*, and the following simple relations hold:

⟨â†^k â^l⟩ ≡ Tr(â†^k â^l ρ̂) = ∫ d²α P(α) ⟨α|â†^k â^l|α⟩ = ∫ d²α P(α) α*^k α^l   (k, l = 0, 1, 2, . . .).   (15.38)
The last expression is just the classical ensemble average when we interpret P(α) as a classical distribution function. However, in this case we assume beforehand that P(α) is positive definite
because a negative probability makes no sense. We arrive at the following general conclusion. The quantum mechanical and the classical description lead to identical predictions when, first, we
restrict ourselves to normally ordered operators and, secondly, the density operator has a positive definite P representation. For states that do not satisfy these conditions – the Fock states are a
simple example – we can expect non-classical behavior.
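As an illustration of the P representation, Equation (15.37), consider a thermal state, whose P function is the Gaussian P(α) = exp(−|α|²/n̄)/(πn̄) (a standard result, not derived in the text). Normally ordered moments then reduce to classical averages over P, which can be checked by Monte-Carlo sampling (my own sketch; sample size and n̄ are arbitrary):

```python
import numpy as np

# Sample alpha from the thermal P function: real and imaginary parts are
# independent Gaussians with variance nbar/2. Then
# <a^dagger a> = mean(|alpha|^2) = nbar and
# <a^dagger^2 a^2> = mean(|alpha|^4) = 2*nbar^2.

rng = np.random.default_rng(0)
nbar = 1.5
m = 400_000
alpha = rng.normal(scale=np.sqrt(nbar / 2), size=m) \
        + 1j * rng.normal(scale=np.sqrt(nbar / 2), size=m)

mean_n = np.mean(np.abs(alpha) ** 2)     # <a^dagger a>
mean_n2 = np.mean(np.abs(alpha) ** 4)    # <a^dagger^2 a^2>

assert abs(mean_n - nbar) < 0.02
assert abs(mean_n2 - 2 * nbar ** 2) < 0.1
```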
15.3 The Weisskopf–Wigner solution for spontaneous emission

In spontaneous emission, an excited atom emits light at different frequencies and into all possible directions. From a quantum mechanical
point of view this means that the atom is coupled virtually to all possible modes of the radiation field. We choose the light modes as linearly polarized running plane waves, characterized by an
index µ that stands for the polarization direction and the wave vector. In addition, we idealize the atom as a two-level system with an upper level b and a lower level a. We assume the atom to be
pinned to the origin of the coordinate system. This means that we assume the mass of the atom to be sufficiently large so that a good spatial localization of the atom associated only with negligibly
small fluctuations of its velocity around zero is guaranteed – due to Heisenberg’s uncertainty relation a small spatial uncertainty is always connected with a considerable momentum uncertainty. The
recoil of the atom from the emission can also be neglected. The wave function of the system formed by the atom and the radiation field can then be written in the form

|ψ(t)⟩ = f(t)|b⟩|0, 0, . . .⟩ + Σ_μ g_μ(t)|a⟩|0, 0, . . . , 0, 1_μ, 0, . . .⟩,   (15.39)

with |0, 0, . . .⟩ being the vacuum state of the field and |0, 0, . . . , 0, 1_μ, 0, . . .⟩ denoting a field state with just one mode μ excited with exactly one photon.
The motivation for Equation (15.39) is the following. We start from the idealized situation that the atom is excited with certainty and the field is completely “empty”. This is a pure state,
characterized by the initial values f (0) = 1,
gµ (0) = 0,
and it is known that the whole system will remain for all time in a pure state, which justifies the ansatz of the wave function. In Equation (15.39) the energy conservation law was taken into
account. The atom emits a photon when it goes from b to a and absorbs a photon in the reverse process. Even though this sounds plausible, it is still an approximation, the so-called rotating wave
approximation. Actually, the exact interaction operator (in the dipole approximation) also contains terms that contradict the energy conservation law (the transition of the atom from b to a is
associated with the absorption of a photon, etc.). The additional terms do not play a significant role in spontaneous emission; however, they have a serious physical meaning. Writing down the
Schrödinger equation for the whole system (in the dipole and rotating wave approximations), we obtain a coupled linear set of equations for the unknown functions f(t) and g_μ(t). This set can be
solved exactly using the Laplace transformation method; however, the back transformation causes difficulties. It is not possible to find a closed form solution, but the following formula, known as
the Weisskopf–Wigner solution (Weisskopf and Wigner, 1930a, b), represents a good approximation:

f(t) = e^(−Γt/2),   (15.41)

g_μ(t) = (1/ħ) ⟨a; 0, 0, . . . , 1_μ, 0, . . . | Ĥ_I |b; 0, . . .⟩ × [e^(−i(ω_μ − ω_ba)t) − e^(−Γt/2)] / [ω_μ − ω_ba + iΓ/2],   (15.42)

with ω_ba = ω_b − ω_a being the atomic level distance (in units of ħ) or, in other words, the atomic resonance frequency. In the considered approximation, the interaction Hamiltonian Ĥ_I reads

Ĥ_I = −[D̂^(−) Ê_tot^(+)(0) + D̂^(+) Ê_tot^(−)(0)].   (15.43)

The operator D̂ is the atomic dipole operator, Ê_tot is the operator of the total electric field strength, and the ± sign indicates the positive and negative frequency parts. The electric field strength has to be taken at the origin of the coordinate system r = 0 where the atom is located (the operators are time independent as we are working in the Schrödinger picture). According to Equation (15.2), we have

Ê^(+)(0) = Σ_μ e_μ â_μ,   Ê^(−)(0) = Σ_μ e*_μ â_μ†,   (15.44)
where for simplicity we have written e_μ instead of E_μ(0). Introducing the abbreviated notation D_ab for the matrix element ⟨a|D̂^(+)|b⟩ and using Equation (15.5), the matrix element of the interaction Hamiltonian reads

⟨a; 0, 0, . . . , 1_μ, 0, . . . | Ĥ_I |b; 0, 0, . . .⟩ = −D_ab e*_μ.   (15.45)

The calculation shows that the (positive) constant Γ in Equations (15.41) and (15.42) is determined by the coupling parameters of Equation (15.45) in the form

Γ = 2i ħ^(−2) Σ_μ |D_ab e*_μ|² / (ω_ba − ω_μ + iη)   (η → +0)
  = 2π ħ^(−2) Σ_p ∫ dΩ |D_ab e*_μ|² ρ(ω)|_(ω=ω_ba).   (15.46)

Here, the density of radiation field states is denoted by ρ(ω), p is the polarization direction of the emitted plane wave and the solid angle Ω characterizes the propagation direction. The
Weisskopf–Wigner solution has the great advantage that it is not a perturbation theoretical approximation and hence is valid also for long times. It fulfils all expectations (see Section 6.5): it
exhibits an exponential decay of the upper level population (Equation (15.41)); the emitted radiation has a Lorentz-like line shape (Equation (15.42)); and it satisfies the relation between the
emission duration and the linewidth that is known from classical optics. Finally, for the damping constant we obtain using Equation (15.46) the connection with the transition dipole matrix elements
Dab known from perturbation theory, which shows that the decay is faster the bigger the atomic dipole moment, i.e. the stronger the coupling.
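The qualitative content of Equations (15.41) and (15.42) — exponential decay of the upper-level population and a Lorentzian emission line of full width Γ — can be sketched numerically (my own illustration; the values of Γ and ω_ba are arbitrary, and the constant coupling prefactor of |g_μ|² is dropped):

```python
import numpy as np

gamma = 2.0        # decay constant Gamma (arbitrary units)
omega_ba = 100.0   # atomic resonance frequency (arbitrary units)

# Upper-level population |f(t)|^2 = e^(-Gamma*t):
t = np.linspace(0, 5 / gamma, 200)
population = np.abs(np.exp(-gamma * t / 2)) ** 2
assert np.allclose(population, np.exp(-gamma * t))

# Long-time limit of |g_mu(t)|^2: a Lorentzian centred at omega_ba.
omega = np.linspace(omega_ba - 10, omega_ba + 10, 2001)
spectrum = 1.0 / ((omega - omega_ba) ** 2 + gamma ** 2 / 4)

peak = omega[np.argmax(spectrum)]
assert np.isclose(peak, omega_ba)

# Half maximum is reached at omega_ba +/- gamma/2, i.e. FWHM = Gamma:
half = spectrum.max() / 2
at_half = spectrum[np.argmin(np.abs(omega - (omega_ba + gamma / 2)))]
assert np.isclose(at_half, half)
```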
15.4 Theory of beamsplitting and optical mixing

Let us assume that two optical waves 1 and 2 impinge on a lossless beamsplitter – a partially transmitting mirror (see Fig. 7.8). The energy conservation law requires that the incident energy is completely transferred into the output beams 3 and 4. When the two incoming waves, assumed to be plane waves for simplicity, have the same frequency, the classical description requires the intensities A_i* A_i (A_i being the amplitude of the respective wave) to fulfil the relation

A1* A1 + A2* A2 = A3* A3 + A4* A4.   (15.47)
This conservation law is generally valid when the amplitudes of the incident waves are connected with those of the outgoing waves through a unitary transformation. Choosing the latter in the form

$$\begin{pmatrix} A_3 \\ A_4 \end{pmatrix} = \begin{pmatrix} \sqrt{t} & \sqrt{r} \\ -\sqrt{r} & \sqrt{t} \end{pmatrix} \begin{pmatrix} A_1 \\ A_2 \end{pmatrix}, \tag{15.48}$$
we describe a normal type of mirror with reflectivity r and transmittivity t (= 1 − r ). The transmitted beam passes without a phase shift; the beam reflected on the one side of the mirror, however,
acquires a phase jump of π. The quantum mechanical description of the beamsplitter is obtained simply by replacing the classical (complex) amplitudes in Equation (15.48) by the corresponding photon annihilation operators:

$$\begin{pmatrix} \hat{a}_3 \\ \hat{a}_4 \end{pmatrix} = \begin{pmatrix} \sqrt{t} & \sqrt{r} \\ -\sqrt{r} & \sqrt{t} \end{pmatrix} \begin{pmatrix} \hat{a}_1 \\ \hat{a}_2 \end{pmatrix}. \tag{15.49}$$
Apart from energy conservation, the unitarity of the matrix has another important function here: it guarantees the validity of the commutation relations

$$[\hat{a}_3, \hat{a}_4] = [\hat{a}_3^\dagger, \hat{a}_4^\dagger] = [\hat{a}_3, \hat{a}_4^\dagger] = [\hat{a}_4, \hat{a}_3^\dagger] = 0, \tag{15.50}$$
$$[\hat{a}_3, \hat{a}_3^\dagger] = [\hat{a}_4, \hat{a}_4^\dagger] = 1 \tag{15.51}$$
(see Equation (15.3)) for the outgoing waves, when the corresponding relations are satisfied by the incident waves. The unitarity ensures the consistency of the quantum mechanical description. With
the help of Equation (15.49), the process of beamsplitting can be described rather easily. Because the transformation is chosen to be real, it holds also for the photon creation operators. Inverting
the relations we find

$$\begin{pmatrix} \hat{a}_1^\dagger \\ \hat{a}_2^\dagger \end{pmatrix} = \begin{pmatrix} \sqrt{t} & -\sqrt{r} \\ \sqrt{r} & \sqrt{t} \end{pmatrix} \begin{pmatrix} \hat{a}_3^\dagger \\ \hat{a}_4^\dagger \end{pmatrix}. \tag{15.52}$$
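A quick numerical aside (our own illustrative sketch, not part of the original text): the matrix of Equations (15.48)/(15.49) is real orthogonal for every reflectivity r with t = 1 − r, which is exactly the property that enforces the classical energy balance and preserves the commutation relations (15.50)/(15.51).

```python
import numpy as np

# Beamsplitter matrix of Equations (15.48)/(15.49); the function name is ours.
def beamsplitter(r):
    t = 1.0 - r
    return np.array([[np.sqrt(t),  np.sqrt(r)],
                     [-np.sqrt(r), np.sqrt(t)]])

# Orthogonality (unitarity for a real matrix) holds for any r in [0, 1].
for r in (0.1, 0.5, 0.9):
    U = beamsplitter(r)
    assert np.allclose(U @ U.T, np.eye(2))

# Classical energy conservation: |A3|^2 + |A4|^2 = |A1|^2 + |A2|^2.
A_in = np.array([1.0 + 2.0j, 0.5 - 1.0j])
A_out = beamsplitter(0.3) @ A_in
assert np.isclose(np.sum(np.abs(A_out)**2), np.sum(np.abs(A_in)**2))
```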
Let us assume that mode 1 is prepared in a state with exactly n photons and that mode 2 is empty. The initial state can be written, using the binomial theorem and Equations (15.6) and (15.52), in the
form

$$|n\rangle_1 |0\rangle_2 |0\rangle_3 |0\rangle_4 = \frac{\hat{a}_1^{\dagger n}}{\sqrt{n!}}\, |0\rangle_1 |0\rangle_2 |0\rangle_3 |0\rangle_4$$
$$= \frac{1}{\sqrt{n!}} \sum_{k=0}^{n} \binom{n}{k} \sqrt{t}^{\,k} (-\sqrt{r})^{\,n-k}\, \hat{a}_3^{\dagger k} \hat{a}_4^{\dagger (n-k)}\, |0\rangle_1 |0\rangle_2 |0\rangle_3 |0\rangle_4$$
$$= \sum_{k=0}^{n} \sqrt{\binom{n}{k}}\, \sqrt{t}^{\,k} (-\sqrt{r})^{\,n-k}\, |0\rangle_1 |0\rangle_2 |k\rangle_3 |n-k\rangle_4. \tag{15.53}$$

The last expression gives us the wave function of the outgoing light. It is obviously an entangled state. Let us analyze the simplest case, namely that of a single photon
(n = 1) impinging on a half transparent mirror; we learn from Equation (15.53) that the outgoing light is in the superposition state

$$|\psi^{(1)}\rangle = \frac{1}{\sqrt{2}} \big( |1\rangle_3 |0\rangle_4 - |0\rangle_3 |1\rangle_4 \big), \tag{15.54}$$
which, after reuniting the two beams in an interferometer, gives rise to an interference effect that can be understood as a consequence of the indistinguishability of the paths the photon might have
taken in the interferometer (see Section 7.2). Performing measurements on only one of the beams, for example the transmitted beam 3, we obtain the corresponding density matrix by tracing out the
unobserved sub-system in Equation (15.53):

$$\hat{\rho}_3 = \sum_{k=0}^{n} w_k^{(n)}\, |k\rangle_3\, {}_3\langle k| \tag{15.55}$$

with

$$w_k^{(n)} = \binom{n}{k}\, t^k r^{n-k}. \tag{15.56}$$
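Equations (15.55)/(15.56) say that the transmitted photon number follows a binomial distribution (a "Bernoulli transformation"). A short sketch (added here for illustration, not from the original):

```python
import math

# w_k^(n) = C(n, k) t^k r^(n-k): probability of finding k of the n incident
# photons in the transmitted beam (Equation (15.56)).
def transmitted_distribution(n, t):
    r = 1.0 - t
    return [math.comb(n, k) * t**k * r**(n - k) for k in range(n + 1)]

w = transmitted_distribution(5, 0.7)
assert abs(sum(w) - 1.0) < 1e-12                 # normalized mixture
mean_k = sum(k * wk for k, wk in enumerate(w))
assert abs(mean_k - 5 * 0.7) < 1e-12             # mean photon number is t * n
```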
Obviously, Equation (15.55) represents a mixture, while the total system is in a pure state. Due to the linearity of the beamsplitting process with the result in Equation (15.53), it is easy to
predict how the beamsplitter transforms a general initial state. It induces the transformation

$$\sum_{n=0}^{\infty} c_n\, |n\rangle_1 |0\rangle_2 \;\to\; \sum_{n=0}^{\infty} c_n\, |\Phi_n\rangle, \tag{15.57}$$

where we abbreviated the last line of Equation (15.53) by $|\Phi_n\rangle$ (the common product vector $|0\rangle_1 |0\rangle_2$ was omitted). For the physically important case that the incident beam is in a Glauber state $|\alpha\rangle$, the transformation simplifies to

$$|\alpha\rangle_1 |0\rangle_2 \;\to\; |\sqrt{t}\,\alpha\rangle_3\, |{-}\sqrt{r}\,\alpha\rangle_4. \tag{15.58}$$

The outgoing light beams are – in complete agreement with the classical description – also in Glauber states (with correspondingly
attenuated amplitudes). It is remarkable that the state of the outgoing light is not entangled. The result in Equation (15.58) has great practical importance. Utilizing attenuation, a conventional
absorber might also be used instead of a beamsplitter; we can prepare arbitrarily weak Glauber light from intense Glauber light (laser radiation). We should point out that we can easily derive a
useful relation for the change of the photon number factorial moments,

$$M^{(j)} \equiv \langle n(n-1)\cdots(n-j+1)\rangle \qquad (j = 1, 2, \ldots),$$
in transmission or reflection using the quantum mechanical formalism. These quantities are quantum mechanically simply the expectation values of the (normally ordered) operator products $\hat{a}^{\dagger j} \hat{a}^{j}$. Calculating them, for example for the transmitted wave, with the help of Equation (15.49) and the Hermitian conjugate equation under the assumption that only the first mode is excited, all the terms to which the second mode contributes vanish. What remains is the simple relation

$$M_3^{(j)} \equiv \langle \hat{a}_3^{\dagger j} \hat{a}_3^{j} \rangle = t^j \langle \hat{a}_1^{\dagger j} \hat{a}_1^{j} \rangle \equiv t^j M_1^{(j)},$$
and an analogous equation holds true for the reflected wave (we have to replace the transmittivity t by the reflectivity r ). As we have seen, in the beamsplitter transformation of Equation (15.49)
we must also take into account, for reasons of consistency, the second mode, even when it is “empty.” By this we mean that the vacuum mode is coupled through the unused input port. The energy balance
is not influenced, but formally the vacuum field gives rise to additional fluctuations of the radiation field; vacuum fluctuations are, so to speak, entering into the apparatus. This suggestive
picture is especially useful when we do not count photons but measure the outgoing field with the help of the homodyne technique (see Section 10.2). The beamsplitter can also be used as an optical
mixer. To this end, light has to be sent also into the usually unused input port. The formal mathematical apparatus used to describe this mixing process is at hand in the form of Equation (15.49).
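As a numerical cross-check of the factorial-moment relation $M_3^{(j)} = t^j M_1^{(j)}$ stated above (our own illustrative sketch, not from the original text): for an n-photon input the transmitted photon number is binomially distributed, and the j-th factorial moment of a binomial distribution is exactly $t^j\, n(n-1)\cdots(n-j+1)$.

```python
import math

def falling(x, j):
    """Falling factorial x(x-1)...(x-j+1)."""
    out = 1
    for i in range(j):
        out *= (x - i)
    return out

# For an n-photon input, the transmitted photon number k is binomial with
# parameter t, so its j-th factorial moment must equal t^j * falling(n, j).
def factorial_moment(n, t, j):
    r = 1.0 - t
    return sum(math.comb(n, k) * t**k * r**(n - k) * falling(k, j)
               for k in range(n + 1))

n, t = 7, 0.6
for j in (1, 2, 3):
    assert abs(factorial_moment(n, t, j) - t**j * falling(n, j)) < 1e-9
```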
The experimentally simply realizable case of exactly one photon entering each of the input ports is of particular interest (see Section 7.6). With the help of Equation (15.52) we find

$$|1\rangle_1 |1\rangle_2 |0\rangle_3 |0\rangle_4 = \hat{a}_1^\dagger \hat{a}_2^\dagger\, |0\rangle_1 |0\rangle_2 |0\rangle_3 |0\rangle_4$$
$$= \Big[ \sqrt{rt}\, \big( \hat{a}_3^{\dagger 2} - \hat{a}_4^{\dagger 2} \big) + (t - r)\, \hat{a}_3^\dagger \hat{a}_4^\dagger \Big]\, |0\rangle_1 |0\rangle_2 |0\rangle_3 |0\rangle_4. \tag{15.61}$$

Specializing to the case of a balanced mirror (t = r = 1/2), we arrive at the following surprising result:

$$|1\rangle_1 |1\rangle_2 |0\rangle_3 |0\rangle_4 = \frac{1}{\sqrt{2}} \big[ |2\rangle_3 |0\rangle_4 - |0\rangle_3 |2\rangle_4 \big]\, |0\rangle_1 |0\rangle_2, \tag{15.62}$$
which is obviously telling us that the photons are “inseparable” once mixed: when we “look” at them we find them always both in one of the output ports but never one of them in each output. This
means that the two detectors never indicate coincidences.
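The amplitudes appearing in Equations (15.61)/(15.62) can be tabulated explicitly; the following sketch (an added illustration, with names of our choosing) shows that the coincidence amplitude t − r vanishes precisely for the balanced mirror, while the output state stays normalized for every t:

```python
import math

# Output amplitudes for one photon in each input port, read off from
# Equation (15.61); note a3†^2 |0> = sqrt(2) |2>, hence the sqrt(2*r*t).
def two_photon_amplitudes(t):
    r = 1.0 - t
    amp_20 = math.sqrt(2.0 * r * t)     # amplitude of |2>_3 |0>_4
    amp_02 = -math.sqrt(2.0 * r * t)    # amplitude of |0>_3 |2>_4
    amp_11 = t - r                      # coincidence amplitude of |1>_3 |1>_4
    return amp_20, amp_02, amp_11

a20, a02, a11 = two_photon_amplitudes(0.5)
assert abs(a11) < 1e-12                               # no coincidences
assert abs(a20**2 + a02**2 + a11**2 - 1.0) < 1e-12    # normalization
```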
15.5 Quantum theory of interference

The basic principle of interference is that two (or more) optical fields are superposed. A detector placed at a position $\mathbf{r}$ reacts naturally to the total electric field strength

$$E_{\mathrm{tot}}(\mathbf{r}, t) = E^{(1)}(\mathbf{r}, t) + E^{(2)}(\mathbf{r}, t)$$

residing on its sensitive surface. The response probability (per second) of the detector for quasimonochromatic light, according to quantum mechanics, is

$$W = \beta\, \langle \hat{E}_{\mathrm{tot}}^{(-)}(\mathbf{r})\, \hat{E}_{\mathrm{tot}}^{(+)}(\mathbf{r}) \rangle, \tag{15.63}$$
(see Equation (5.4)), where the constant β is proportional to the detection efficiency. Let us denote (for reasons that will become clear later) the two beams that are made to interfere by 3 and 4; then Equation (15.63) takes the form

$$W = \beta \big\{ |E_3(\mathbf{r})|^2 \langle \hat{a}_3^\dagger \hat{a}_3 \rangle + |E_4(\mathbf{r})|^2 \langle \hat{a}_4^\dagger \hat{a}_4 \rangle + E_3^*(\mathbf{r}) E_4(\mathbf{r}) \langle \hat{a}_3^\dagger \hat{a}_4 \rangle + E_3(\mathbf{r}) E_4^*(\mathbf{r}) \langle \hat{a}_4^\dagger \hat{a}_3 \rangle \big\}. \tag{15.64}$$

Here we have used Equation (15.2) and we have assumed both waves to be linearly (and identically) polarized. To keep the analysis simple, let us assume the waves to be plane waves (which, due to the presence of mirrors, change their directions before they become reunited). This means that the absolute values of $E_3(\mathbf{r})$ and $E_4(\mathbf{r})$ are the same and independent of $\mathbf{r}$. The product $E_3^*(\mathbf{r}) E_4(\mathbf{r})$ contains an additional phase factor $\exp(i\varphi)$. The classical phase $\varphi$ is (apart from possible phase jumps) determined by the path difference $L$, namely it is given by $\varphi = 2\pi L/\lambda$, with $\lambda$ being the wavelength. Thus, Equation (15.64) simplifies to

$$W = \mathrm{const} \times \big\{ \langle \hat{a}_3^\dagger \hat{a}_3 \rangle + \langle \hat{a}_4^\dagger \hat{a}_4 \rangle + e^{i\varphi} \langle \hat{a}_3^\dagger \hat{a}_4 \rangle + e^{-i\varphi} \langle \hat{a}_4^\dagger \hat{a}_3 \rangle \big\}. \tag{15.65}$$

As is well known, the condition sine qua non for the appearance of interference is the existence of a phase relation
between the partial waves. We recognize this in Equation (15.65) from the fact that the interference terms are determined by the correlation terms $\langle \hat{a}_3^\dagger \hat{a}_4 \rangle = \langle \hat{a}_4^\dagger \hat{a}_3 \rangle^*$. In conventional
interference experiments, the required phase relation is produced by splitting the primary beam either by beam or wavefront division. The result is two coherent beams, i.e. beams that can be made to
interfere. The beamsplitter discussed theoretically in Section 15.4 is an appropriate model for the description of the division process. The expectation values appearing in Equation (15.65) can be easily expressed through the mean photon number $\bar{N}_1 = \langle \hat{a}_1^\dagger \hat{a}_1 \rangle$ of wave 1, assumed to be incident alone, using Equation (15.49) and the Hermitian conjugate relation. From energy conservation follows

$$\langle \hat{a}_3^\dagger \hat{a}_3 \rangle + \langle \hat{a}_4^\dagger \hat{a}_4 \rangle = \langle \hat{a}_1^\dagger \hat{a}_1 \rangle + \langle \hat{a}_2^\dagger \hat{a}_2 \rangle = \bar{N}_1. \tag{15.66}$$
The mixed terms are readily calculated with the help of the relation

$$\hat{a}_3^\dagger \hat{a}_4 = \sqrt{rt}\, \big( \hat{a}_2^\dagger \hat{a}_2 - \hat{a}_1^\dagger \hat{a}_1 \big) - r\, \hat{a}_2^\dagger \hat{a}_1 + t\, \hat{a}_1^\dagger \hat{a}_2, \tag{15.67}$$

and because mode 2 is in the vacuum state we obtain

$$\langle \hat{a}_3^\dagger \hat{a}_4 \rangle = \langle \hat{a}_4^\dagger \hat{a}_3 \rangle^* = -\sqrt{rt}\; \bar{N}_1. \tag{15.68}$$

Thus, Equation (15.65) takes the simple form

$$W = \mathrm{const} \times \bar{N}_1 \big( 1 - 2\sqrt{rt}\, \cos\varphi \big). \tag{15.69}$$
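Numerically, the fringe pattern just obtained has visibility $(W_{\max} - W_{\min})/(W_{\max} + W_{\min}) = 2\sqrt{rt}$; a small sketch (an illustrative addition, not part of the original text) confirms that it reaches unity only in the balanced case:

```python
import numpy as np

# Fringe visibility of W ∝ 1 - 2*sqrt(r*t)*cos(phi), Equation (15.69).
def visibility(t):
    r = 1.0 - t
    phi = np.linspace(0.0, 2.0 * np.pi, 1001)
    W = 1.0 - 2.0 * np.sqrt(r * t) * np.cos(phi)
    return (W.max() - W.min()) / (W.max() + W.min())

assert abs(visibility(0.5) - 1.0) < 1e-9    # balanced beamsplitter: unity
assert abs(visibility(0.8) - 0.8) < 1e-9    # 2*sqrt(0.8*0.2) = 0.8
assert visibility(0.95) < 0.5               # strongly unbalanced: weak fringes
```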
This result coincides exactly with the classical formula. For the case of a balanced beamsplitter, the visibility of the interference pattern attains unity. The point is that we have reproduced this result quantum mechanically for an arbitrary input state. The quantum mechanical description of a conventional interference experiment does not reveal anything new. This holds true independently of the intensity of the incident wave, and in particular for arbitrarily weak intensities (see Equation (15.69)), which helps us to understand Dirac’s statement that we are always dealing with the “interference of a photon with itself.” (We have to keep in mind that in the case $\bar{N}_1 \ll 1$ only those members of the ensemble described by the wave function or the density matrix contribute to the interference pattern on which the detector indeed registers a photon.) The situation changes when we consider the interference of independent light waves (coming from two independent lasers, for example). In this case, the expectation values in Equation (15.64) can be factorized as $\langle \hat{a}_3^\dagger \hat{a}_4 \rangle = \langle \hat{a}_3^\dagger \rangle \langle \hat{a}_4 \rangle$ and $\langle \hat{a}_4^\dagger \hat{a}_3 \rangle = \langle \hat{a}_4^\dagger \rangle \langle \hat{a}_3 \rangle$, respectively. Interference can now be observed only when the expectation values of $\hat{a}$ are non-zero for both waves, i.e. when the waves have a more or less well defined absolute phase. This requirement is definitely met by Glauber light. Choosing the
radiation field to be in the state $|\psi\rangle = |\alpha_3\rangle_3 |\alpha_4\rangle_4$, we can easily calculate the detector response probability with the help of Equations (15.24) and (15.25), the result being

$$W = \beta \big\{ |E_3(\mathbf{r})|^2 |\alpha_3|^2 + |E_4(\mathbf{r})|^2 |\alpha_4|^2 + E_3^*(\mathbf{r}) E_4(\mathbf{r})\, \alpha_3^* \alpha_4 + E_3(\mathbf{r}) E_4^*(\mathbf{r})\, \alpha_3 \alpha_4^* \big\}. \tag{15.70}$$

Specializing as before to the case of a linearly polarized plane wave, we can simplify the previous equation to

$$W = \mathrm{const} \times \big\{ |\alpha_3|^2 + |\alpha_4|^2 + e^{i\varphi} \alpha_3^* \alpha_4 + e^{-i\varphi} \alpha_3 \alpha_4^* \big\}, \tag{15.71}$$

and we arrive at the same expression as in classical theory. This is not surprising when we recall the general correspondence between
the classical and the quantum
mechanical description stated in Section 15.2. However, it is amazing that the visibility of the interference pattern does not change even for arbitrarily weak intensities and equals unity whenever
the (mean!) photon numbers $|\alpha_3|^2$ and $|\alpha_4|^2$ are equal.

15.6 Theory of balanced homodyne detection

The homodyne technique described in Section 9.3 (see also Fig. 9.1) can be easily treated
theoretically when we also describe the photocurrent quantum mechanically. In the detection process (we consider 100% efficiency detectors), each photon is converted into an electron, and so it appears natural to identify the photocurrent, apart from a factor e (elementary charge), with the photon number, also in the sense of an operator relation. In the case of a quasimonochromatic wave, we thus find the simple expression for the photocurrent to be

$$\hat{I} = s\, \hat{a}^\dagger \hat{a}, \tag{15.72}$$
where $\hat{a}^\dagger$ and $\hat{a}$ are the creation and annihilation operators (see Section 15.1) and s is a constant (ensuring in particular the dimensional correctness of the relation). Using a balanced
beamsplitter (see Fig. 9.1), signal 1 is mixed with a local oscillator. This process is described by Equation (15.49). Assuming the local oscillator to be in a Glauber state $|\alpha_L\rangle$ (in practice a laser beam) with a large amplitude, we can approximate the operators $\hat{a}_L$ and $\hat{a}_L^\dagger$ by the complex numbers $\alpha_L$ and $\alpha_L^*$. The photocurrent operators $\hat{I}_3$ and $\hat{I}_4$ take the form

$$\hat{I}_3 = \tfrac{1}{2}\, s\, \big( \hat{a}_1^\dagger + \alpha_L^* \big)\big( \hat{a}_1 + \alpha_L \big), \tag{15.73}$$
$$\hat{I}_4 = \tfrac{1}{2}\, s\, \big( -\hat{a}_1^\dagger + \alpha_L^* \big)\big( -\hat{a}_1 + \alpha_L \big). \tag{15.74}$$
Subtracting these two equations, we obtain the simple relation

$$\hat{I} \equiv \hat{I}_3 - \hat{I}_4 = s\, \big( \hat{a}_1^\dagger \alpha_L + \hat{a}_1 \alpha_L^* \big) \tag{15.75}$$

for the actually measured quantity. Introducing the phase $\Phi$ of the local oscillator through the relation $\alpha_L = |\alpha_L| \exp(i\Phi)$, we realize that we measure the observable

$$\hat{x}_\Phi \equiv \frac{1}{\sqrt{2}} \big( e^{i\Phi} \hat{a}_1^\dagger + e^{-i\Phi} \hat{a}_1 \big), \tag{15.76}$$

and this is, according to Equation (15.12), simply a special quadrature component. Through the variation of $\Phi$ for an unchanged signal, it is possible to measure a whole set of quadrature
components $\hat{x}_\Phi$. This is what we need for the tomographic reconstruction of the Wigner function (see Section 10.3).
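The chain (15.72)–(15.76) can be illustrated in a truncated Fock space (the cutoff and all names below are our own assumptions, not from the text): for a signal in a Glauber state $|\alpha\rangle$, the mean of the measured quadrature $\hat{x}_\Phi$ is $\sqrt{2}\,\mathrm{Re}(\alpha\, e^{-i\Phi})$.

```python
import numpy as np

N = 40                                      # Fock-space cutoff (an assumption)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator: a|n> = sqrt(n)|n-1>

def coherent(alpha):
    """Fock-basis coefficients of a Glauber state, built recursively."""
    c = np.zeros(N, dtype=complex)
    c[0] = np.exp(-abs(alpha)**2 / 2)
    for n in range(1, N):
        c[n] = c[n - 1] * alpha / np.sqrt(n)
    return c

def quadrature(phi):
    """x_phi = (e^{i phi} a† + e^{-i phi} a) / sqrt(2), Equation (15.76)."""
    return (np.exp(1j * phi) * a.conj().T + np.exp(-1j * phi) * a) / np.sqrt(2)

alpha = 1.5 + 0.5j
psi = coherent(alpha)
for phi in (0.0, 0.7, np.pi / 2):
    mean_x = np.real(psi.conj() @ quadrature(phi) @ psi)
    expected = np.sqrt(2) * np.real(alpha * np.exp(-1j * phi))
    assert abs(mean_x - expected) < 1e-8    # matches sqrt(2) Re(alpha e^{-i phi})
```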
References

Alguard, M. J. and C. W. Drake. 1973. Phys. Rev. A 8, 27. Arecchi, F. T., A. Berne and P. Burlamacchi. 1966. Phys. Rev. Lett. 16, 32. Arecchi, F. T., V. Degiorgio and B. Querzola. 1967a. Phys. Rev.
Lett. 19, 1168. Arecchi, F. T., M. Giglio and U. Tartari. 1967b. Phys. Rev. 163, 186. Arnold, W., S. Hunklinger and K. Dransfeld. 1979. Phys. Rev. B 19, 6049. Aspect, A., P. Grangier and G. Roger.
1981. Phys. Rev. Lett. 47, 460. Aspect, A., J. Dalibard and G. Roger. 1982. Phys. Rev. Lett. 49, 1804. Bandilla, A. and H. Paul. 1969. Ann. Physik 23, 323. Basché, T., W. E. Moerner, M. Orrit and H. Talon. 1992. Phys. Rev. Lett. 69, 1516. Bell, J. S. 1964. Physics 1, 195. Bennett, C. H., G. Brassard and A. K. Ekert. 1992. Sci. Am. 26, 33. Bennett, C. H., G. Brassard, C. Crépeau, R. Jozsa, A.
Peres and W. K. Wootters. 1993. Phys. Rev. Lett. 70, 1895. Beth, R. A. 1936. Phys. Rev. 50, 115. Born, M. and E. Wolf. 1964. Principles of Optics, 2nd edn. Oxford: Pergamon Press, p. 10. Bouwmeester,
D., J.-W. Pan, K. Mattle, M. Eibl, H. Weinfurter and A. Zeilinger. 1997. Nature 390, 575. Braunstein, S. L. and A. Mann. 1995. Phys. Rev. A 51, R1727. Braunstein, S. L. and H. J. Kimble. 1998. Phys.
Rev. Lett. 80, 869. Brendel, J., S. Schütrumpf, R. Lange, W. Martienssen and M. O. Scully. 1988. Europhys. Lett. 5, 223. Brunner, W., H. Paul and G. Richter. 1964. Ann. Physik 14, 384. 1965. Ann.
Physik 15, 17. Clauser, J. F. and A. Shimony. 1978. Rep. Prog. Phys. 41, 1881. Clauser, J. F., M. A. Horne, A. Shimony and R. A. Holt. 1969. Phys. Rev. Lett. 23, 880. Dagenais, M. and L. Mandel.
1978. Phys. Rev. A 18, 2217. Davis, C. C. 1979. IEEE J. Quantum Electron. QE-15, 26. De Martini, F., G. Innocenti, G. R. Jacobovitz and P. Mataloni. 1987. Phys. Rev. Lett. 26, 2955. Dempster, A. J.
and H. F. Batho. 1927. Phys. Rev. 30, 644. Diedrich, F. and H. Walther. 1987. Phys. Rev. Lett. 58, 203. Dirac, P. A. M. 1927. Proc. Roy. Soc. (London) A 114, 243. 1958. The Principles of Quantum
Mechanics, 4th edn. London: Oxford University Press, p. 9.
Drexhage, K. H. 1974. Interaction of light with monomolecular dye layers. In E. Wolf (ed.), Progress in Optics, vol. 12. Amsterdam: North-Holland Publishing Company, p. 163. Dürr, S., T. Nonn and G.
Rempe. 1998. Nature 395, 33. Einstein, A. 1905. Ann. Physik 17, 132. 1906. Ann. Physik 20, 199. 1917. Phys. Ztschr. 18, 121. Einstein, A., B. Podolsky and N. Rosen. 1935. Phys. Rev. 47, 777. Foord,
R., E. Jakeman, R. Jones, C. J. Oliver and E. R. Pike. 1969. IERE Conference Proceedings No. 14. Forrester, A. T., R. A. Gudmundsen and P. O. Johnson. 1955. Phys. Rev. 99, 1691. Franck, J. and G.
Hertz. 1913. Verh. d. Dt. Phys. Ges. 15, 34, 373, 613, 929. 1914. Verh. d. Dt. Phys. Ges. 16, 12, 457, 512. Franson, J. D. 1989. Phys. Rev. Lett. 62, 2205. Freedman, S. J. and J. F. Clauser. 1972.
Phys. Rev. Lett. 28, 938. Freyberger, M., K. Vogel and W. Schleich. 1993. Quantum Opt. 5, 65. Furusawa, A., J. L. Sørensen, S. L. Braunstein, C. A. Fuchs, H. J. Kimble and E. S. Polzik. 1998. Science
282, 706. Ghosh, R. and L. Mandel. 1987. Phys. Rev. Lett. 59, 1903. Glauber, R. J. 1965. Optical coherence and photon statistics. In C. De Witt, A. Blandin and C. Cohen-Tannoudji (eds.), Quantum
Optics and Electronics. New York: Gordon and Breach. 1986. Amplifiers, attenuators and the quantum theory of measurement. In E. R. Pike and S. Sarkar (eds.), Frontiers in Quantum Optics. Bristol:
Adam Hilger Ltd. Gordon, J. P., H. J. Zeiger and C. H. Townes. 1954. Phys. Rev. 95, 282. Grishaev, I. A., I. S. Guk, A. S. Mazmanishvili and A. S. Tarasenko. 1972. Zurn. eksper. teor. Fiz. 63, 1645.
Hanbury Brown, R. 1964. Sky and Telescope 28, 64. Hanbury Brown, R. and R. Q. Twiss. 1956a. Nature 177, 27. 1956b. Nature 178, 1046. Hauser, U., N. Neuwirth and N. Thesen. 1974. Phys. Lett. 49 A, 57.
Heitler, W. 1954. The Quantum Theory of Radiation, 3rd edn. London: Oxford University Press. Hellmuth, T., H. Walther, A. Zajonc and W. Schleich. 1987. Phys. Rev. A 35, 2532. Hong, C. K. and L.
Mandel. 1986. Phys. Rev. Lett. 56, 58. Hong, C. K., Z. Y. Ou and L. Mandel. 1987. Phys. Rev. Lett. 59, 2044. Hulet, R. G., E. S. Hilfer and D. Kleppner. 1985. Phys. Rev. Lett. 55, 2137. Huygens, C.
1690. Traité de la Lumière. Leiden: Pierre von der Aa. Itano, W. M., J. C. Bergquist, R. G. Hulet and D. J. Wineland. 1987. Phys. Rev. Lett. 59, 2732. Itano, W. M., D. J. Heinzen, J. J. Bollinger
and D. J. Wineland. 1990. Phys. Rev. A 41, 2295. Jakeman, E., E. R. Pike, P. N. Pusey and J. M. Vaughan. 1977. J. Phys. A 10, L 257. Jánossy, L. 1973. Experiments and theoretical considerations concerning the dual nature of light. In H. Haken and M. Wagner (eds.), Cooperative Phenomena. Berlin: Springer-Verlag, p. 308. Jánossy, L. and Z. Náray. 1957. Acta Phys. Acad. Sci. Hung. 7, 403.
1958. Nuovo Cimento, Suppl. 9, 588. Javan, A., E. A. Ballik and W. L. Bond. 1962. J. Opt. Soc. Am. 52, 96. Kimble, H. J., M. Dagenais and L. Mandel. 1977. Phys. Rev. Lett. 39, 691.
Kwiat, P. G., A. M. Steinberg and R. Y. Chiao. 1993. Phys. Rev. A 47, R 2472. 1994. Phys. Rev. A 49, 61. Kwiat, P. G., K. Mattle, H. Weinfurter, A. Zeilinger, A. V. Sergienko and Y. H. Shih. 1995.
Phys. Rev. Lett. 75, 4337. Landau, L. D. and E. M. Lifschitz. 1965. Lehrbuch der theoretischen Physik, vol. III, Quantenmechanik. Berlin: Akademie-Verlag, p. 158. Lawrence, E. O. and J. W. Beams.
1927. Phys. Rev. 29, 903. Lebedew, P. N. 1910. Fis. Obosrenie 11, 98. Lenard, P. 1902. Ann. Physik. 8, 149. Lenz, W. 1924. Z. Phys. 25, 299. Leonhardt, U. 1997. Measuring the Quantum State of Light.
Cambridge: Cambridge University Press. Leonhardt, U. and H. Paul. 1993a. Phys. Rev. A 47, R 2460. 1993b. Phys. Rev. A 48, 4598. 1995. Prog. Quantum Electron. 19, 89. London, F. 1926. Z. Phys. 40,
193. Lorentz, H. A. 1910. Phys. Ztschr. 11, 1234. Machida, S., Y. Yamamoto and Y. Itaya. 1987. Phys. Rev. Lett. 58, 1000. Magyar, G. and L. Mandel. 1963. Nature 198 (4877), 255. Mandel, L. 1963.
Fluctuations of light beams. In E. Wolf (ed.), Progress in Optics, vol.2. Amsterdam: North-Holland Publishing Company, p. 181. 1976. J. Opt. Soc. Am. 66, 968. 1983. Phys. Rev. A 28, 929. Mandel, L.
and E. Wolf. 1965. Rev. Mod. Phys. 37, 231. Martienssen, W. and E. Spiller. 1964. Am. J. Phys. 32, 919. Meschede, D., H. Walther and G. Müller. 1984. Phys. Rev. Lett. 54, 551. Michelson, A. A. and
F. G. Pease. 1921. Astrophys. J. 53, 249. Middleton, D. 1960. An Introduction to Statistical Communication Theory. New York: McGraw-Hill. Millikan, R. A. 1916. Phys. Rev. 7, 373. Neuhauser, W., M.
Hohenstatt, P. E. Toschek and H. Dehmelt. 1980. Phys. Rev. A 22, 1137. Newton, I. 1730. Opticks: Or a Treatise of the Reflections, Refractions, Inflections and Colours of Light, 4th edn. London.
Reprinted 1952, New York: Dover Publications. Noh, J. W., A. Fougères and L. Mandel. 1991. Phys. Rev. Lett. 67, 1426. 1992. Phys. Rev. A 45, 424. Osnaghi, S., P. Bertet, A. Auffeves, P. Maioli, M.
Brune, J. M. Raimond and S. Haroche. 2001. Phys. Rev. Lett. 87, 037902–1. Ou, Z. Y., L. J. Wang and L. Mandel. 1989. Phys. Rev. A 40, 1428. Paul, H. 1966. Fortschr. Phys. 14, 141. 1969. Lasertheorie
I. Berlin: Akademie-Verlag. 1973a. Nichtlineare Optik I. Berlin: Akademie-Verlag. 1973b. Nichtlineare Optik II. Berlin: Akademie-Verlag. 1974. Fortschr. Phys. 22, 657. 1980. Fortschr. Phys. 28, 633.
1981. Opt. Acta 28, 1. 1985. Am. J. Phys. 53, 318. 1986. Rev. Mod. Phys. 58, 209. 1996. Opt. Quantum Electron. 28, 1111.
Paul, H., W. Brunner and G. Richter. 1963. Ann. Physik. 12, 325. 1965. Ann. Physik. 16, 93. Paul, H. and R. Fischer. 1983. Usp. fiz. nauk. 141, 375. Pauli, W. 1933. Die allgemeinen Prinzipien der
Wellenmechanik. In H. Geiger and K. Scheel (eds.), Handbuch der Physik 24/1, 2nd edn. Berlin: Springer-Verlag. English translation: Pauli, W. 1980. General Principles of Quantum Mechanics. Berlin:
Springer-Verlag, p. 21. Pease, F. G. 1931. Ergeb. exakt. Naturwissensch. 10, 84. Pfleegor, R. L. and L. Mandel. 1967. Phys. Rev. 159, 1084. 1968. J. Opt. Soc. Am. 58, 946. Pike, E. R. 1970. Photon
statistics. In S. M. Kay and A. Maitland (eds.), Quantum Optics. London: Academic Press. Planck, M. 1943. Naturwissensch. 31, 153. 1966. Theorie der Wärmestrahlung, 6th edn. Leipzig: J. A. Barth,
pp. 190 ff. Power, E. A. 1964. Introductory Quantum Electrodynamics. London: Longmans. Prodan, J.V., W. D. Phillips and H. Metcalf. 1982. Phys. Rev. Lett. 49, 1149. Radloff, W. 1968. Phys. Lett. A
27, 366. 1971. Ann. Physik. 26, 178. Rebka, G. A. and R. V. Pound. 1957. Nature 180, 1035. Rempe, G., F. Schmidt-Kaler and H. Walther. 1990. Phys. Rev. Lett. 64, 2783. Renninger, M. 1960. Ztschr.
Phys. 158, 417. Reynolds, G. T., K. Spartalian and D. B. Scarl. 1969. Nuovo Cimento 61 B, 355. Roditschew, W. I. and U. I. Frankfurt (eds.) 1977. Die Schöpfer der physikalischen Optik. Berlin: Akademie-Verlag. Sauter, T., R. Blatt, W. Neuhauser and P. E. Toschek. 1986. Opt. Commun. 60, 287. Schrödinger, E. 1935. Proc. Camb. Phil. Soc. 31, 555. Shapiro, J. H. and S. S. Wagner. 1984. IEEE
J. Quantum Electron. QE-20, 803. Sigel, M., C. S. Adams and J. Mlynek. 1992. Atom optics. In T. W. Haensch and M. Inguscio (eds.), Frontiers in Laser Spectroscopy. Proceedings of the International
School of Physics ‘Enrico Fermi,’ Course CXX, Varenna. Sillitoe, R. M. 1972. Proc. Roy. Soc. Edinburgh, Sect. A, 70 A, 267. Sleator, T., O. Carnal, T. Pfau, A. Faulstich, H. Takuma and J. Mlynek.
1992. In M. Ducloy et al. (eds.), Proceedings of the Tenth International Conference on Laser Spectroscopy. Singapore: World Scientific, p. 264. Slusher, R. E., L. W. Hollberg, B. Yurke, J. C. Mertz
and J. F. Valley. 1985. Phys. Rev. Lett. 55, 2409. Slusher, R. E. and B. Yurke. 1986. Squeezed state generation experiments in an optical cavity. In E. R. Pike and S. Sarkar (eds.), Frontiers in
Quantum Optics, Bristol: Adam Hilger Ltd. Smithey, D. T., M. Beck, M. G. Raymer and A. Faridani. 1993. Phys. Rev. Lett. 70, 1244. Sommerfeld, A. 1949. Vorlesungen über Theoretische Physik, vol. III, Elektrodynamik. Leipzig: Akademische Verlagsgesellschaft. 1950. Vorlesungen über Theoretische Physik, vol. IV, Optik. Wiesbaden: Dieterich’sche Verlagsbuchhandlung. Taylor, G. I. 1909. Proc.
Camb. Phil. Soc. 15, 114. Teich, M. C. and B. E. A. Saleh. 1985. J. Opt. Soc. Am. B 2, 275. Tittel, W., J. Brendel, H. Zbinden and N. Gisin. 1998. Phys. Rev. Lett. 81, 3563. Vaidman, L. 1994. Phys.
Rev. A 49, 1473. Vogel, K. and H. Risken. 1989. Phys. Rev. A 40, 2847.
Walker, J. G. and E. Jakeman. 1985. Opt. Acta 32, 1303. Wawilow, S. I. 1954. Die Mikrostruktur des Lichtes. Berlin: Akademie-Verlag. Weisskopf, V. and E. Wigner. 1930a. Z. f. Phys. 63, 54. 1930b. Z.
f. Phys. 65, 18. Weihs, G., T. Jennewein, C. Simon, H. Weinfurter and A. Zeilinger. 1998. Phys. Rev. Lett. 81, 5039. Wootters, W. K. and W. H. Zurek. 1982. Nature 299, 802. Wu, L.-A., H. J. Kimble,
J. L. Hall and H. Wu. 1986. Phys. Rev. Lett. 57, 2520. Young, T. 1802. Phil. Trans. Roy. Soc. London 91, part 1, 12. 1807. Lectures on Natural Philosophy, vol. 1. London. Zou, X. Y., L. J. Wang and
L. Mandel. 1991. Phys. Rev. Lett. 67, 318.
Molecular collisions, effect on the HD infrared spectrum and development of a Moyal quantum mechanical description
Interference is possible between the allowed dipole moment of the molecule HD and the pair dipole moment induced by collision with a foreign gas atom. The resulting line shape can be described by the
sum of a Lorentzian and an asymmetric profile. The mixing of rotational levels by an anisotropic interaction potential can permit components of the induced dipole moment that do not have the same
symmetry of the allowed moment to interfere with it. For the rotational spectrum of HD-He and HD-Ar the effect of each component of the induced dipole moment on the line shape parameters is
determined for various temperatures and transitions. For line intensity, the component with the same symmetry as the allowed moment always dominates, but the effect of the other components is shown
to be significant. The line shape parameters for the vibrorotational spectrum of HD-He are calculated for $P_1(1)$, $R_1(0)$, $R_1(1)$ transitions at 77, 195, and 295 K. Moyal quantum mechanics
is an alternative to Heisenberg or Schrodinger quantum mechanics. The method yields a semiclassical expansion of phase space trajectories in terms of Planck's constant, h. The Moyal correction to the
classical part of the solution is found to $O(h^2)$. The first computational version of Moyal quantum mechanics to calculate average values for three dimensional systems with physically relevant
parameters is developed. The system treated is the scattering of a Gaussian wave packet by the helium, neon, and argon interaction potentials. The Gaussian is squeezed in momentum so that the
momentum average can be done analytically. This introduces a momentum correction and the Gaussian is taken to have a single initial velocity. We examine scattering at velocities of 300-1200 m/s.
Sensitive areas of the phase space average are identified. Integrals over coordinate phase space (impact parameter and displacement y) are examined in detail. The region of phase space which produces
rainbow scattering is determined to result in the largest quantum effects. The Moyal correction is found to be small for impact parameters greater than $2b_{\mathrm{rainbow}} - b_{\mathrm{glory}}$. The corrections
to average values are examined in detail for helium at a velocity of 300 m/s. It is shown that the Moyal corrections have an asymptotic time behaviour which is the same as that of the classical part
of the average, but that they may grow as $t \to \infty$ to dominate the total average. The corrections to the average value are examined as functions of mass and velocity. The Moyal correction is
seen to change sign relative to the classical part of the average in both cases. The momentum correction is shown to have a mass$^{-2}$ dependence. More complex asymptotic behaviour of the Moyal
correction is examined for both the mass and the velocity. Comparison between the size of the correction for the systems consisting of two helium, neon, and argon atoms is performed. (Abstract
shortened by UMI.)
High Schooler Develops Groundbreaking Proof for Puzzler That Long Stumped Math Experts
Thought Question: Reflect on a time when you were presented with a difficult question. How did you go about finding your answer?
What question can you not stop thinking about? For Daniel Larsen of Indiana, his was about math. He spent more than a year poring over an unproven theorem. In his senior year of high school, he
developed his own mathematical proof for it. He emailed it to some of the top people working in number theory.
Larsen’s question was about Carmichael numbers. These look like prime numbers. They aren't, though. A prime number is a number that can only be divided by 1 and itself. Carmichaels are made by
multiplying at least three prime numbers together. The results often appear prime themselves. The smallest Carmichael is 561 (3 x 11 x 17).
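Curious readers can hunt for Carmichael numbers themselves. Korselt's criterion (a standard characterization, not mentioned in the article) says a composite number n is Carmichael exactly when n is squarefree and p − 1 divides n − 1 for every prime p dividing n. A short sketch:

```python
# Korselt's criterion: n is Carmichael iff n is composite, squarefree,
# and p - 1 divides n - 1 for every prime p dividing n.
def is_carmichael(n):
    if n < 3 or n % 2 == 0:         # Carmichael numbers are odd
        return False
    m, p, factors = n, 2, []
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:          # repeated prime factor: not squarefree
                return False
            factors.append(p)
        else:
            p += 1
    if m > 1:
        factors.append(m)
    if len(factors) < 3:            # Carmichael numbers need >= 3 prime factors
        return False
    return all((n - 1) % (p - 1) == 0 for p in factors)

assert is_carmichael(561)           # 3 * 11 * 17, the smallest Carmichael number
assert not is_carmichael(7)         # prime, so not Carmichael
print([n for n in range(2, 3000) if is_carmichael(n)])  # [561, 1105, 1729, 2465, 2821]
```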
Carmichael numbers were found by mathematicians over 100 years ago. Carmichaels got in the way as arithmetic addicts sought to identify prime numbers quickly. The quest has become more relevant in
modern cryptography (the art of writing secret codes)! That’s because today’s most-used secret codes involve math with huge primes.
So, how are Carmichael numbers distributed along the number line? Larsen proved that they must always appear between X and 2X, as long as X is a number bigger than 561. Mathematicians hadn't figured
that out yet.
Larsen emailed his proof to experts. One of them was Andrew Granville, who in 1994 helped prove that there are infinitely many Carmichaels.
“It wasn’t the easiest read ever,” Granville told Quanta Magazine. “But … he wasn’t messing around.”
“(Larsen’s proof) changes a lot of things about how we might prove things about Carmichael numbers,” said one mathematician. He and others hope Larsen’s work will help reveal more about how these
strange numbers behave.
Moral of the story? Don’t stop scratching at those questions that tickle your brain.
39. The Pew Research Center finds that the demographic make-up of political parties is changing drastically through the election cycles. Consider the following summary of education levels along party lines.

             College Degree   No College Degree   Total
Democrat           37                 63           100
Republican         31                 69           100
Total              68                132           200

(a) What is the probability a randomly selected participant has a college degree?
(b) What is the probability that a randomly selected participant is a Democrat?
(c) What is the probability that a randomly selected participant is a Democrat and has a college degree?
(d) Of those who have college degrees, what is the probability of being a Democrat?
(e) What is the probability of being a Democrat or having a college degree?
Let's break down the problem step by step and answer each part of the question using the provided data.
Given Data:
• Total participants: 200
• Participants with a college degree: 68
• Participants without a college degree: 132
• Democrats with a college degree: 37
• Republicans with a college degree: 31
• Democrats without a college degree: 63
• Republicans without a college degree: 69
(a) What is the probability a randomly selected participant has a college degree?
To find the probability that a randomly selected participant has a college degree, we use the formula for probability:
\[ P(\text{College Degree}) = \frac{\text{Number of participants with a college degree}}{\text{Total number of participants}} \]
\[ P(\text{College Degree}) = \frac{68}{200} = 0.34 \]
So, the probability is 0.34 or 34%.
(b) What is the probability that a randomly selected participant is a Democrat?
To find the probability that a randomly selected participant is a Democrat, we sum the number of Democrats with and without a college degree and divide by the total number of participants:
\[ P(\text{Democrat}) = \frac{\text{Number of Democrats}}{\text{Total number of participants}} \]
\[ P(\text{Democrat}) = \frac{37 + 63}{200} = \frac{100}{200} = 0.5 \]
So, the probability is 0.5 or 50%.
(c) What is the probability that a randomly selected participant is a Democrat and has a college degree?
To find the probability that a randomly selected participant is a Democrat and has a college degree, we use the number of Democrats with a college degree and divide by the total number of participants:
\[ P(\text{Democrat and College Degree}) = \frac{\text{Number of Democrats with a college degree}}{\text{Total number of participants}} \]
\[ P(\text{Democrat and College Degree}) = \frac{37}{200} = 0.185 \]
So, the probability is 0.185 or 18.5%.
(d) Of those who have college degrees, what is the probability of being a Democrat?
To find the conditional probability that a participant with a college degree is a Democrat, we use the formula for conditional probability:
\[ P(\text{Democrat} | \text{College Degree}) = \frac{P(\text{Democrat and College Degree})}{P(\text{College Degree})} \]
We already know: \[ P(\text{Democrat and College Degree}) = 0.185 \] \[ P(\text{College Degree}) = 0.34 \]
\[ P(\text{Democrat} | \text{College Degree}) = \frac{0.185}{0.34} \approx 0.544 \]
So, the probability is approximately 0.544 or 54.4%.
(e) What is the probability of being a Democrat or having a college degree?
To find the probability of being a Democrat or having a college degree, we use the formula for the union of two events:
\[ P(\text{Democrat or College Degree}) = P(\text{Democrat}) + P(\text{College Degree}) - P(\text{Democrat and College Degree}) \]
We already know: \[ P(\text{Democrat}) = 0.5 \] \[ P(\text{College Degree}) = 0.34 \] \[ P(\text{Democrat and College Degree}) = 0.185 \]
\[ P(\text{Democrat or College Degree}) = 0.5 + 0.34 - 0.185 = 0.655 \]
So, the probability is 0.655 or 65.5%.
These calculations provide a clear understanding of the probabilities based on the given data.
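The same calculations can be verified with a few lines of Python, a sketch using the counts from the table:

```python
# Counts from the two-way table above.
dem_college, dem_no = 37, 63
rep_college, rep_no = 31, 69
total = dem_college + dem_no + rep_college + rep_no  # 200

p_college = (dem_college + rep_college) / total           # (a) 0.34
p_dem = (dem_college + dem_no) / total                    # (b) 0.5
p_dem_and_college = dem_college / total                   # (c) 0.185
p_dem_given_college = p_dem_and_college / p_college       # (d) ≈ 0.544
p_dem_or_college = p_dem + p_college - p_dem_and_college  # (e) ≈ 0.655

print(p_college, p_dem, p_dem_and_college,
      round(p_dem_given_college, 3))
```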
Theory Satisfiability
chapter ‹Satisfiability\label{s:Sat}›
theory Satisfiability
imports Wellformed NP
text ‹
This chapter introduces the language \SAT{} and shows that it is in $\NP$, which
constitutes the easier part of the Cook-Levin theorem. The other part, the
$\NP$-hardness of \SAT{}, is what all the following chapters are concerned with.
We first introduce Boolean formulas in conjunctive normal form and the concept
of satisfiability. Then we define a way to represent such formulas as bit
strings, leading to the definition of the language \SAT{} as a set of strings.
For the proof that \SAT{} is in $\NP$, we construct a Turing machine that, given
a CNF formula and a string representing a variable assignment, outputs
\textbf{1} iff.\ the assignment satisfies the formula. The TM will run in
polynomial time, and there are always assignments polynomial (in fact, linear)
in the length of the formula (Section~\ref{s:Sat-np}).
section ‹The language \SAT{}\label{s:sat-sat}›
text ‹
\SAT{} is the language of all strings representing satisfiable Boolean
formulas in conjunctive normal form (CNF). This section introduces a minimal
version of Boolean formulas in conjunctive normal form, including the
concepts of assignments and satisfiability.
subsection ‹CNF formulas and satisfiability\label{s:CNF}›
text ‹
Arora and Barak~\cite[p.~44]{ccama} define Boolean formulas in general as
expressions over $\land$, $\lor$, $\lnot$, parentheses, and variables $v_1, v_2,
\dots$ in the usual way. Boolean formulas in conjunctive normal form are defined as
$\bigwedge_i\left(\bigvee_j v_{i_j}\right)$, where the $v_{i_j}$ are literals.
This definition does not seem to allow for empty clauses. Also whether the
``empty CNF formula'' exists is somewhat doubtful. Nevertheless, our
formalization allows for both empty clauses and the empty CNF formula, because
this enables us to represent CNF formulas as lists of clauses and clauses as
lists of literals without having to somehow forbid the empty list. This seems to
be a popular approach for formalizing CNF formulas in the context of \SAT{} and
We identify a variable $v_i$ with its index $i$, which can be any natural
number. A \emph{literal} can either be positive or negative, representing a
variable or negated variable, respectively.
datatype literal = Neg nat | Pos nat
type_synonym clause = "literal list"
type_synonym formula = "clause list"
text ‹
An \emph{assignment} maps all variables, given by their index, to a Boolean:
type_synonym assignment = "nat ⇒ bool"
abbreviation satisfies_literal :: "assignment ⇒ literal ⇒ bool" where
"satisfies_literal α x ≡ case x of Neg n ⇒ ¬ α n | Pos n ⇒ α n"
definition satisfies_clause :: "assignment ⇒ clause ⇒ bool" where
"satisfies_clause α c ≡ ∃x∈set c. satisfies_literal α x"
definition satisfies :: "assignment ⇒ formula ⇒ bool" (infix ‹⊨› 60) where
"α ⊨ φ ≡ ∀c∈set φ. satisfies_clause α c"
text ‹
As is customary, the empty clause is satisfied by no assignment, and the empty
CNF formula is satisfied by every assignment.
proposition "¬ satisfies_clause α []"
by (simp add: satisfies_clause_def)
proposition "α ⊨ []"
by (simp add: satisfies_def)
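An executable Python analogue of the definitions above may make them easier to experiment with. The tuple encoding of literals is an assumption of this sketch, mirroring the datatype; it also exhibits the two propositions about empty clauses and the empty formula.

```python
# Literals mirror the datatype above: ('Neg', n) or ('Pos', n);
# a clause is a list of literals, a formula a list of clauses,
# an assignment a function from variable index to bool.

def satisfies_literal(alpha, lit):
    kind, n = lit
    return alpha(n) if kind == 'Pos' else not alpha(n)

def satisfies_clause(alpha, clause):
    # Empty clause: no literal to satisfy, hence never satisfied.
    return any(satisfies_literal(alpha, x) for x in clause)

def satisfies(alpha, phi):
    # Empty formula: vacuously satisfied.
    return all(satisfies_clause(alpha, c) for c in phi)

# (v0 ∨ ¬v1) ∧ (v1):
phi = [[('Pos', 0), ('Neg', 1)], [('Pos', 1)]]
print(satisfies(lambda i: True, phi))   # True
print(satisfies(lambda i: False, phi))  # False
```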
lemma satisfies_clause_take:
assumes "i < length clause"
shows "satisfies_clause α (take (Suc i) clause) ⟷
satisfies_clause α (take i clause) ∨ satisfies_literal α (clause ! i)"
using assms satisfies_clause_def by (auto simp add: take_Suc_conv_app_nth)
lemma satisfies_take:
assumes "i < length φ"
shows "α ⊨ take (Suc i) φ ⟷ α ⊨ take i φ ∧ satisfies_clause α (φ ! i)"
using assms satisfies_def by (auto simp add: take_Suc_conv_app_nth)
lemma satisfies_append:
assumes "α ⊨ φ⇩[1] @ φ⇩[2]"
shows "α ⊨ φ⇩[1]" and "α ⊨ φ⇩[2]"
using assms satisfies_def by simp_all
lemma satisfies_append':
assumes "α ⊨ φ⇩[1]" and "α ⊨ φ⇩[2]"
shows "α ⊨ φ⇩[1] @ φ⇩[2]"
using assms satisfies_def by auto
lemma satisfies_concat_map:
assumes "α ⊨ concat (map f [0..<k])" and "i < k"
shows "α ⊨ f i"
using assms satisfies_def by simp
lemma satisfies_concat_map':
assumes "⋀i. i < k ⟹ α ⊨ f i"
shows "α ⊨ concat (map f [0..<k])"
using assms satisfies_def by simp
text ‹
The main ingredient for defining \SAT{} is the concept of \emph{satisfiable} CNF
definition satisfiable :: "formula ⇒ bool" where
"satisfiable φ ≡ ∃α. α ⊨ φ"
text ‹
The set of all variables used in a CNF formula is finite.
definition variables :: "formula ⇒ nat set" where
"variables φ ≡ {n. ∃c∈set φ. Neg n ∈ set c ∨ Pos n ∈ set c}"
lemma finite_variables: "finite (variables φ)"
proof -
define voc :: "clause ⇒ nat set" where
"voc c = {n. Neg n ∈ set c ∨ Pos n ∈ set c}" for c
let ?vocs = "set (map voc φ)"
have "finite (voc c)" for c
proof (induction c)
case Nil
then show ?case
using voc_def by simp
case (Cons a c)
have "voc (a # c) = {n. Neg n ∈ set (a # c) ∨ Pos n ∈ set (a # c)}"
using voc_def by simp
also have "... = {n. Neg n ∈ set c ∨ Neg n = a ∨ Pos n ∈ set c ∨ Pos n = a}"
by auto
also have "... = {n. (Neg n ∈ set c ∨ Pos n ∈ set c) ∨ (Pos n = a ∨ Neg n = a)}"
by auto
also have "... = {n. Neg n ∈ set c ∨ Pos n ∈ set c} ∪ {n. Pos n = a ∨ Neg n = a}"
by auto
also have "... = voc c ∪ {n. Pos n = a ∨ Neg n = a}"
using voc_def by simp
finally have "voc (a # c) = voc c ∪ {n. Pos n = a ∨ Neg n = a}" .
moreover have "finite {n. Pos n = a ∨ Neg n = a}"
using finite_nat_set_iff_bounded by auto
ultimately show ?case
using Cons by simp
moreover have "variables φ = ⋃?vocs"
using variables_def voc_def by auto
moreover have "finite ?vocs"
by simp
ultimately show ?thesis
by simp
lemma variables_append: "variables (φ⇩[1] @ φ⇩[2]) = variables φ⇩[1] ∪ variables φ⇩[2]"
using variables_def by auto
text ‹
Arora and Barak~\cite[Claim~2.13]{ccama} define the \emph{size} of a CNF formula
as the number of $\wedge / \vee$ symbols. We use a slightly different definition,
namely the number of literals:
definition fsize :: "formula ⇒ nat" where
"fsize φ ≡ sum_list (map length φ)"
subsection ‹Predicates on assignments›
text ‹
Every CNF formula is satisfied by a set of assignments. Conversely, for certain
sets of assignments we can construct CNF formulas satisfied by exactly these
assignments. This will be helpful later when we construct formulas for reducing
arbitrary languages to \SAT{} (see Section~\ref{s:Reducing}).
subsubsection ‹Universality of CNF formulas›
text ‹
A set (represented by a predicate) $F$ of assignments depends on the first
$\ell$ variables iff.\ any two assignments that agree on the first $\ell$
variables are either both in the set or both outside of the set.
definition depon :: "nat ⇒ (assignment ⇒ bool) ⇒ bool" where
"depon l F ≡ ∀α⇩[1] α⇩[2]. (∀i<l. α⇩[1] i = α⇩[2] i) ⟶ F α⇩[1] = F α⇩[2]"
text ‹
Lists of all strings of the same length:
fun str_of_len :: "nat ⇒ string list" where
"str_of_len 0 = [[]]" |
"str_of_len (Suc l) = map ((#) 𝕆) (str_of_len l) @ map ((#) 𝕀) (str_of_len l)"
lemma length_str_of_len: "length (str_of_len l) = 2 ^ l"
by (induction l) simp_all
lemma in_str_of_len_length: "xs ∈ set (str_of_len l) ⟹ length xs = l"
by (induction l arbitrary: xs) auto
lemma length_in_str_of_len: "length xs = l ⟹ xs ∈ set (str_of_len l)"
proof (induction l arbitrary: xs)
case 0
then show ?case
by simp
case (Suc l)
then obtain y ys where "xs = y # ys"
by (meson length_Suc_conv)
then have "length ys = l"
using Suc by simp
show ?case
proof (cases y)
case True
then have "xs ∈ set (map ((#) 𝕀) (str_of_len l))"
using `length ys = l` Suc `xs = y # ys` by simp
then show ?thesis
by simp
case False
then have "xs ∈ set (map ((#) 𝕆) (str_of_len l))"
using `length ys = l` Suc `xs = y # ys` by simp
then show ?thesis
by simp
text ‹
A predicate $F$ depending on the first $\ell$ variables $v_0, \dots, v_{\ell-1}$
can be regarded as a truth table over $\ell$ variables. The next lemma shows
that for every such truth table there exists a CNF formula with at most $2^\ell$
clauses and $\ell\cdot2^\ell$ literals over the first $\ell$ variables. This is
the well-known fact that every Boolean function (over $\ell$ variables) can be
represented by a CNF formula~\cite[Claim~2.13]{ccama}.
lemma depon_ex_formula:
assumes "depon l F"
shows "∃φ.
fsize φ ≤ l * 2 ^ l ∧
length φ ≤ 2 ^ l ∧
variables φ ⊆ {..<l} ∧
(∀α. F α = α ⊨ φ)"
proof -
define cl where "cl = (λv. map (λi. if v ! i then Neg i else Pos i) [0..<l])"
have cl1: "satisfies_clause a (cl v)" if "length v = l" and "v ≠ map a [0..<l]" for v a
proof -
obtain i where i: "i < l" "a i ≠ v ! i"
using ‹length v = l› ‹v ≠ map a [0..<l]›
by (smt (verit, best) atLeastLessThan_iff map_eq_conv map_nth set_upt)
then have *: "cl v ! i = (if v ! i then Neg i else Pos i)"
using cl_def by simp
then have "case (cl v ! i) of Neg n ⇒ ¬ a n | Pos n ⇒ a n"
using i(2) by simp
then show ?thesis
using cl_def * that(1) satisfies_clause_def i(1) by fastforce
have cl2: "v ≠ map a [0..<l]" if "length v = l" and "satisfies_clause a (cl v)" for v a
assume assm: "v = map a [0..<l]"
from that(2) have "∃x∈set (cl v). case x of Neg n ⇒ ¬ a n | Pos n ⇒ a n"
using satisfies_clause_def by simp
then obtain i where i: "i < l" and "case (cl v ! i) of Neg n ⇒ ¬ a n | Pos n ⇒ a n"
using cl_def by auto
then have "v ! i ≠ a i"
using cl_def by fastforce
then show False
using i assm by simp
have filter_length_nth: "f (vs ! j)" if "vs = filter f sol" and "j < length vs"
for vs sol :: "'a list" and f j
using that nth_mem by (metis length_removeAll_less less_irrefl removeAll_filter_not)
have sum_list_map: "sum_list (map g xs) ≤ k * length xs" if "⋀x. x ∈ set xs ⟹ g x = k"
for xs :: "'a list" and g k
using that
proof (induction "length xs" arbitrary: xs)
case 0
then show ?case
by simp
case (Suc x)
then obtain y ys where "xs = y # ys"
by (metis length_Suc_conv)
then have "length ys = x"
using Suc by simp
have "y ∈ set xs"
using `xs = y # ys` by simp
have "sum_list (map g xs) = sum_list (map g (y # ys))"
using `xs = y # ys` by simp
also have "... = g y + sum_list (map g ys)"
by simp
also have "... = k + sum_list (map g ys)"
using Suc `y ∈ set xs` by simp
also have "... ≤ k + k * length ys"
using Suc `length ys = x` ‹xs = y # ys› by auto
also have "... = k * length xs"
by (metis Suc.hyps(2) ‹length ys = x› mult_Suc_right)
finally show ?case
by simp
define vs where
"vs = filter (λv. F (λi. if i < l then v ! i else False) = False) (str_of_len l)"
define φ where "φ = map cl vs"
have "a ⊨ φ" if "F a" for a
proof -
define v where "v = map a [0..<l]"
then have "(λi. if i < l then v ! i else False) j = a j" if "j < l" for j
by (simp add: that)
then have *: "F (λi. if i < l then v ! i else False)"
using assms(1) depon_def that by (smt (verit, ccfv_SIG))
have "satisfies_clause a c" if "c ∈ set φ" for c
proof -
obtain j where j: "c = φ ! j" "j < length φ"
using φ_def `c ∈ set φ` by (metis in_set_conv_nth)
then have "c = cl (vs ! j)"
using φ_def by simp
have "j < length vs"
using φ_def j by simp
then have "F (λi. if i < l then (vs ! j) ! i else False) = False"
using vs_def filter_length_nth by blast
then have "vs ! j ≠ v"
using * by auto
moreover have "length (vs ! j) = l"
using vs_def length_str_of_len ‹j < length vs›
by (smt (verit) filter_eq_nths in_str_of_len_length notin_set_nthsI nth_mem)
ultimately have "satisfies_clause a (cl (vs ! j))"
using v_def cl1 by simp
then show ?thesis
using `c = cl (vs ! j)` by simp
then show ?thesis
using satisfies_def by simp
moreover have "F α" if "α ⊨ φ" for α
proof (rule ccontr)
assume assm: "¬ F α"
define v where "v = map α [0..<l]"
have *: "F (λi. if i < l then v ! i else False) = False"
proof -
have "(λi. if i < l then v ! i else False) j = α j" if "j < l" for j
using v_def by (simp add: that)
then show ?thesis
using assm assms(1) depon_def by (smt (verit, best))
have "length v = l"
using v_def by simp
then obtain j where "j < length (str_of_len l)" and "v = str_of_len l ! j"
by (metis in_set_conv_nth length_in_str_of_len)
then have "v ∈ set vs"
using vs_def * by fastforce
then have "cl v ∈ set φ"
using φ_def by simp
then have "satisfies_clause α (cl v)"
using that satisfies_def by simp
then have "v ≠ map α [0..<l]"
using `length v = l` cl2 by simp
then show False
using v_def by simp
ultimately have "∀α. F α = α ⊨ φ"
by auto
moreover have "fsize φ ≤ l * 2 ^ l"
proof -
have "length c = l" if "c ∈ set φ" for c
using that cl_def φ_def by auto
then have "fsize φ ≤ l * length φ"
unfolding fsize_def using sum_list_map by auto
also have "... ≤ l * length (str_of_len l)"
using φ_def vs_def by simp
also have "... = l * 2 ^ l"
using length_str_of_len by simp
finally show ?thesis .
moreover have "length φ ≤ 2 ^ l"
proof -
have "length φ ≤ length (str_of_len l)"
using φ_def vs_def by simp
also have "... = 2 ^ l"
using length_str_of_len by simp
finally show ?thesis .
moreover have "variables φ ⊆ {..<l}"
fix x assume "x ∈ variables φ"
then obtain c where c: "c ∈ set φ" "Neg x ∈ set c ∨ Pos x ∈ set c"
using variables_def by auto
then obtain v where v: "v ∈ set (str_of_len l)" "c = cl v"
using φ_def vs_def by auto
then show "x ∈ {..<l}"
using cl_def c by auto
ultimately show ?thesis
by auto
subsubsection ‹Substitutions of variables›
text ‹
We will sometimes consider CNF formulas over the first $\ell$ variables and
derive other CNF formulas from them by substituting these variables. Such a
substitution will be represented by a list $\sigma$ of length at least $\ell$,
meaning that the variable $v_i$ is replaced by $v_{\sigma(i)}$. We will call
this operation on formulas \emph{relabel}, and the corresponding one on literals
fun rename :: "nat list ⇒ literal ⇒ literal" where
"rename σ (Neg i) = Neg (σ ! i)" |
"rename σ (Pos i) = Pos (σ ! i)"
definition relabel :: "nat list ⇒ formula ⇒ formula" where
"relabel σ φ ≡ map (map (rename σ)) φ"
lemma fsize_relabel: "fsize (relabel σ φ) = fsize φ"
using relabel_def fsize_def by (metis length_concat length_map map_concat)
text ‹
A substitution $\sigma$ can also be applied to an assignment and to a list of
variable indices:
definition remap :: "nat list ⇒ assignment ⇒ assignment" where
"remap σ α i ≡ if i < length σ then α (σ ! i) else α i"
definition reseq :: "nat list ⇒ nat list ⇒ nat list" where
"reseq σ vs ≡ map ((!) σ) vs"
lemma length_reseq [simp]: "length (reseq σ vs) = length vs"
using reseq_def by simp
text ‹
Relabeling a formula and remapping an assignment are equivalent in a sense.
lemma satisfies_sigma:
assumes "variables φ ⊆ {..<length σ}"
shows "α ⊨ relabel σ φ ⟷ remap σ α ⊨ φ"
assume sat: "α ⊨ relabel σ φ"
have "satisfies_clause (remap σ α) c" if "c ∈ set φ" for c
proof -
obtain i where "i < length φ" "φ ! i = c"
by (meson ‹c ∈ set φ› in_set_conv_nth)
then have "satisfies_clause α (map (rename σ) c)"
(is "satisfies_clause α ?c")
using sat satisfies_def relabel_def by auto
then obtain x where "x∈set ?c" "case x of Neg n ⇒ ¬ α n | Pos n ⇒ α n"
using satisfies_clause_def by auto
then obtain j where j: "j < length ?c" "case (?c ! j) of Neg n ⇒ ¬ α n | Pos n ⇒ α n"
by (metis in_set_conv_nth)
have "case c ! j of Neg n ⇒ ¬ (remap σ α) n | Pos n ⇒ (remap σ α) n"
proof (cases "c ! j")
case (Neg n)
then have 1: "?c ! j = Neg (σ ! n)"
using j(1) by simp
have "n ∈ variables φ"
using Neg j(1) nth_mem that variables_def by force
then have "n < length σ"
using assms by auto
then show ?thesis
using Neg 1 j(2) remap_def by auto
case (Pos n)
then have 1: "?c ! j = Pos (σ ! n)"
using j(1) by simp
have "n ∈ variables φ"
using Pos j(1) nth_mem that variables_def by force
then have "n < length σ"
using assms by auto
then show ?thesis
using Pos 1 j(2) remap_def by auto
then show ?thesis
using satisfies_clause_def j by auto
then show "remap σ α ⊨ φ"
using satisfies_def by simp
assume sat: "remap σ α ⊨ φ"
have "satisfies_clause α c" if "c ∈ set (relabel σ φ)" for c
proof -
let ?phi = "relabel σ φ"
let ?beta = "remap σ α"
obtain i where i: "i < length ?phi" "?phi ! i = c"
by (meson ‹c ∈ set ?phi› in_set_conv_nth)
then have "satisfies_clause ?beta (φ ! i)"
(is "satisfies_clause _ ?c")
using sat satisfies_def relabel_def by simp
then obtain x where "x∈set ?c" "case x of Neg n ⇒ ¬ ?beta n | Pos n ⇒ ?beta n"
using satisfies_clause_def by auto
then obtain j where j: "j < length ?c" "case (?c ! j) of Neg n ⇒ ¬ ?beta n | Pos n ⇒ ?beta n"
by (metis in_set_conv_nth)
then have ren: "c ! j = rename σ (?c ! j)"
using i relabel_def by auto
have "case c ! j of Neg n ⇒ ¬ α n | Pos n ⇒ α n"
proof (cases "?c ! j")
case (Neg n)
then have *: "c ! j = Neg (σ ! n)"
by (simp add: ren)
have "n ∈ variables φ"
using Neg i j variables_def that length_map mem_Collect_eq nth_mem relabel_def by force
then have "n < length σ"
using assms by auto
moreover have "¬ (remap σ α) n"
using j(2) Neg by simp
ultimately have "¬ α (σ ! n)"
using remap_def by simp
then show ?thesis
by (simp add: *)
case (Pos n)
then have *: "c ! j = Pos (σ ! n)"
by (simp add: ren)
have "n ∈ variables φ"
using Pos i j variables_def that length_map mem_Collect_eq nth_mem relabel_def by force
then have "n < length σ"
using assms by auto
moreover have "(remap σ α) n"
using j(2) Pos by simp
ultimately have "α (σ ! n)"
using remap_def by simp
then show ?thesis
by (simp add: *)
moreover have "length c = length (φ ! i)"
using relabel_def i by auto
ultimately show ?thesis
using satisfies_clause_def j by auto
then show "α ⊨ relabel σ φ"
by (simp add: satisfies_def)
subsection ‹Representing CNF formulas as strings\label{s:sat-sat-repr}›
text ‹
By identifying negated literals with even numbers and positive literals with odd
numbers, we can identify literals with natural numbers. This yields a
straightforward representation of a clause as a list of numbers and of a CNF
formula as a list of lists of numbers. Such a list can, in turn, be represented
as a symbol sequence over a quaternary alphabet as described in
Section~\ref{s:tm-numlistlist}, which ultimately can be encoded over a binary
alphabet (see Section~\ref{s:tm-quaternary}). This is essentially how we
represent CNF formulas as strings.
We have to introduce a bunch of functions for mapping between these representations:
fun literal_n :: "literal ⇒ nat" where
"literal_n (Neg i) = 2 * i" |
"literal_n (Pos i) = Suc (2 * i)"
definition n_literal :: "nat ⇒ literal" where
"n_literal n ≡ if even n then Neg (n div 2) else Pos (n div 2)"
lemma n_literal_n: "n_literal (literal_n x) = x"
using n_literal_def by (cases x) simp_all
lemma literal_n_literal: "literal_n (n_literal n) = n"
using n_literal_def by simp
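This bijection between literals and natural numbers is easy to mirror executably; the following Python sketch (tuple encoding of literals is an assumption) checks the same round-trip properties as the two lemmas:

```python
def literal_n(lit):
    # Neg i ↦ 2i, Pos i ↦ 2i + 1, as in the definitions above.
    kind, i = lit
    return 2 * i + (1 if kind == 'Pos' else 0)

def n_literal(n):
    # Even codes decode to negative literals, odd codes to positive.
    return ('Neg' if n % 2 == 0 else 'Pos', n // 2)

# Round trips, matching n_literal_n and literal_n_literal:
print(all(n_literal(literal_n(('Pos', i))) == ('Pos', i) and
          n_literal(literal_n(('Neg', i))) == ('Neg', i)
          for i in range(100)))                                # True
print(all(literal_n(n_literal(n)) == n for n in range(200)))   # True
```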
definition clause_n :: "clause ⇒ nat list" where
"clause_n cl ≡ map literal_n cl"
definition n_clause :: "nat list ⇒ clause" where
"n_clause ns ≡ map n_literal ns"
lemma n_clause_n: "n_clause (clause_n cl) = cl"
using n_clause_def clause_n_def n_literal_n by (simp add: map_idI)
lemma clause_n_clause: "clause_n (n_clause n) = n"
using n_clause_def clause_n_def literal_n_literal by (simp add: map_idI)
definition formula_n :: "formula ⇒ nat list list" where
"formula_n φ ≡ map clause_n φ"
definition n_formula :: "nat list list ⇒ formula" where
"n_formula nss ≡ map n_clause nss"
lemma n_formula_n: "n_formula (formula_n φ) = φ"
using n_formula_def formula_n_def n_clause_n by (simp add: map_idI)
lemma formula_n_formula: "formula_n (n_formula nss) = nss"
using n_formula_def formula_n_def clause_n_clause by (simp add: map_idI)
definition formula_zs :: "formula ⇒ symbol list" where
"formula_zs φ ≡ numlistlist (formula_n φ)"
text ‹
The mapping between formulas and well-formed symbol sequences for
lists of lists of numbers is bijective.
lemma formula_n_inj: "formula_n φ⇩[1] = formula_n φ⇩[2] ⟹ φ⇩[1] = φ⇩[2]"
using n_formula_n by metis
definition zs_formula :: "symbol list ⇒ formula" where
"zs_formula zs ≡ THE φ. formula_zs φ = zs"
lemma zs_formula:
assumes "numlistlist_wf zs"
shows "∃!φ. formula_zs φ = zs"
proof -
obtain nss where nss: "numlistlist nss = zs"
using assms numlistlist_wf_def by auto
let ?phi = "n_formula nss"
have *: "formula_n ?phi = nss"
using nss formula_n_formula by simp
then have "formula_zs ?phi = zs"
using nss formula_zs_def by simp
then have "∃φ. formula_zs φ = zs"
by auto
moreover have "φ = ?phi" if "formula_zs φ = zs" for φ
proof -
have "numlistlist (formula_n φ) = zs"
using that formula_zs_def by simp
then have "nss = formula_n φ"
using nss numlistlist_inj by simp
then show ?thesis
using formula_n_inj * by simp
ultimately show ?thesis
by auto
lemma zs_formula_zs: "zs_formula (formula_zs φ) = φ"
by (simp add: formula_n_inj formula_zs_def numlistlist_inj the_equality zs_formula_def)
lemma formula_zs_formula:
assumes "numlistlist_wf zs"
shows "formula_zs (zs_formula zs) = zs"
using assms zs_formula zs_formula_zs by metis
text ‹
There will of course be Turing machines that perform computations on formulas.
In order to bound their running time, we need bounds for the length of the
symbol representation of formulas.
lemma nlength_literal_n_Pos: "nlength (literal_n (Pos n)) ≤ Suc (nlength n)"
using nlength_times2plus1 by simp
lemma nlength_literal_n_Neg: "nlength (literal_n (Neg n)) ≤ Suc (nlength n)"
using nlength_times2 by simp
lemma nlllength_formula_n:
fixes V :: nat and φ :: formula
assumes "⋀v. v ∈ variables φ ⟹ v ≤ V"
shows "nlllength (formula_n φ) ≤ fsize φ * Suc (Suc (nlength V)) + length φ"
using assms
proof (induction φ)
case Nil
then show ?case
using formula_n_def by simp
case (Cons cl φ)
then have 0: "⋀v. v ∈ variables φ ⟹ v ≤ V"
using variables_def by simp
have 1: "n ≤ V" if "Pos n ∈ set cl" for n
using that variables_def by (simp add: Cons.prems)
have 2: "n ≤ V" if "Neg n ∈ set cl" for n
using that variables_def by (simp add: Cons.prems)
have 3: "nlength (literal_n v) ≤ Suc (nlength V)" if "v ∈ set cl" for v
proof (cases v)
case (Neg n)
then have "nlength (literal_n v) ≤ Suc (nlength n)"
using nlength_literal_n_Neg by blast
moreover have "n ≤ V"
using Neg that 2 by simp
ultimately show ?thesis
using nlength_mono by fastforce
case (Pos n)
then have "nlength (literal_n v) ≤ Suc (nlength n)"
using nlength_literal_n_Pos by blast
moreover have "n ≤ V"
using Pos that 1 by simp
ultimately show ?thesis
using nlength_mono by fastforce
have "nllength (clause_n cl) = length (numlist (map literal_n cl))"
using clause_n_def nllength_def by simp
have "nllength (clause_n cl) = (∑n←(map literal_n cl). Suc (nlength n))"
using clause_n_def nllength by simp
also have "... = (∑v←cl. Suc (nlength (literal_n v)))"
proof -
have "map (λn. Suc (nlength n)) (map literal_n cl) = map (λv. Suc (nlength (literal_n v))) cl"
by simp
then show ?thesis
by metis
also have "... ≤ (∑v←cl. Suc (Suc (nlength V)))"
using sum_list_mono[of cl "λv. Suc (nlength (literal_n v))" "λv. Suc (Suc (nlength V))"] 3
by simp
also have "... = Suc (Suc (nlength V)) * length cl"
using sum_list_const by blast
finally have 4: "nllength (clause_n cl) ≤ Suc (Suc (nlength V)) * length cl" .
have "concat (map (λns. numlist ns @ [5]) (map clause_n (cl # φ))) =
(numlist (clause_n cl) @ [5]) @ concat (map (λns. numlist ns @ [5]) (map clause_n φ))"
by simp
then have "length (concat (map (λns. numlist ns @ [5]) (map clause_n (cl # φ)))) =
length ((numlist (clause_n cl) @ [5]) @ concat (map (λns. numlist ns @ [5]) (map clause_n φ)))"
by simp
then have "nlllength (formula_n (cl # φ)) =
length ((numlist (clause_n cl) @ [5]) @ concat (map (λns. numlist ns @ [5]) (map clause_n φ)))"
using formula_n_def numlistlist_def nlllength_def by simp
also have "... = length (numlist (clause_n cl) @ [5]) + length (concat (map (λns. numlist ns @ [5]) (map clause_n φ)))"
by simp
also have "... = length (numlist (clause_n cl) @ [5]) + nlllength (formula_n φ)"
using formula_n_def numlistlist_def nlllength_def by simp
also have "... = Suc (nllength (clause_n cl)) + nlllength (formula_n φ)"
using nllength_def by simp
also have "... ≤ Suc (Suc (Suc (nlength V)) * length cl) + nlllength (formula_n φ)"
using 4 by simp
also have "... ≤ Suc (Suc (Suc (nlength V)) * length cl) + fsize φ * Suc (Suc (nlength V)) + length φ"
using Cons 0 by simp
also have "... = fsize (cl # φ) * Suc (Suc (nlength V)) + length (cl # φ)"
by (simp add: add_mult_distrib2 mult.commute fsize_def)
finally show ?case
by simp
text ‹
Since \SAT{} is supposed to be a set of strings rather than symbol
sequences, we need to map symbol sequences representing formulas to strings:
abbreviation formula_to_string :: "formula ⇒ string" where
"formula_to_string φ ≡ symbols_to_string (binencode (numlistlist (formula_n φ)))"
lemma formula_to_string_inj:
assumes "formula_to_string φ⇩[1] = formula_to_string φ⇩[2]"
shows "φ⇩[1] = φ⇩[2]"
proof -
let ?xs1 = "binencode (numlistlist (formula_n φ⇩[1]))"
let ?xs2 = "binencode (numlistlist (formula_n φ⇩[2]))"
have bin1: "binencodable (numlistlist (formula_n φ⇩[1]))"
by (simp add: Suc_le_eq numeral_2_eq_2 proper_symbols_numlistlist symbols_lt_numlistlist)
then have "bit_symbols ?xs1"
using bit_symbols_binencode by simp
then have 1: "string_to_symbols (symbols_to_string ?xs1) = ?xs1"
using bit_symbols_to_symbols by simp
have bin2: "binencodable (numlistlist (formula_n φ⇩[2]))"
by (simp add: Suc_le_eq numeral_2_eq_2 proper_symbols_numlistlist symbols_lt_numlistlist)
then have "bit_symbols ?xs2"
using bit_symbols_binencode by simp
then have "string_to_symbols (symbols_to_string ?xs2) = ?xs2"
using bit_symbols_to_symbols by simp
then have "?xs1 = ?xs2"
using 1 assms by simp
then have "numlistlist (formula_n φ⇩[1]) = numlistlist (formula_n φ⇩[2])"
using binencode_inj bin1 bin2 by simp
then have "formula_n φ⇩[1] = formula_n φ⇩[2]"
using numlistlist_inj by simp
then show ?thesis
using formula_n_inj by simp
text ‹
While @{const formula_to_string} maps every CNF formula to a string, not every
string represents a CNF formula. We could just ignore such invalid strings and
define \SAT{} to only contain well-formed strings. But this would implicitly
place these invalid strings in the complement of \SAT{}. While this does not
cause us any issues here, it would if we were to introduce co-$\NP$ and wanted
to show that $\overline{\mathtt{SAT}}$ is in co-$\NP$, as we would then have to
deal with the invalid strings. So it feels a little like cheating to ignore the
invalid strings like this.
Arora and Barak~\cite[p.~45 footnote~3]{ccama} recommend mapping invalid strings
to ``some fixed formula''. A natural candidate for this fixed formula is the
empty CNF, since an invalid string in a sense represents nothing, and the empty
CNF formula is represented by the empty string. Since the empty CNF formula is
satisfiable this implies that all invalid strings become elements of \SAT{}.
We end up with the following definition of the protagonist of this article:
›
definition SAT :: language where
"SAT ≡ {formula_to_string φ | φ. satisfiable φ} ∪ {x. ¬ (∃φ. x = formula_to_string φ)}"
section ‹\SAT{} is in $\NP$\label{s:Sat-np}›
text ‹
In order to show that \SAT{} is in $\NP$, we will construct a polynomial-time
Turing machine $M$ and specify a polynomial function $p$ such that for every
$x$, $x\in \SAT$ iff.\ there is a $u\in\bbOI^{p(|x|)}$ such that $M$ outputs
\textbf{1} on $\langle x; u\rangle$.
The idea is straightforward: Let $\phi$ be the formula represented by the
string $x$. Interpret the string $u$ as a list of variables, and this list as
the assignment that assigns True to exactly the variables it contains. Then
check if the assignment satisfies the formula. This check is ``obviously''
possible in polynomial time because $M$ simply has to iterate over all clauses
and check if at least one literal per clause is true under the assignment.
Checking if a literal is true is simply a matter of checking whether the
literal's variable is in the list $u$. If the assignment satisfies $\phi$,
output \textbf{1}, otherwise the empty symbol sequence.
If $\phi$ is unsatisfiable then no assignment, and hence no $u$ of any length,
will make $M$ output \textbf{1}. On the other hand, if $\phi$ is satisfiable
then there will be a satisfying assignment where a subset of the variables in
$\phi$ are assigned True. Hence there will be a list of variables of at most
roughly the length of $\phi$. So setting the polynomial $p$ to something like
$p(n) = n$ should suffice.
In fact, as we shall see, $p(n) = n$ suffices. This is so because in our
representation, the string $x$, being a list of lists, has slightly more
overhead per number than the plain list $u$ has. Therefore listing all variables
in $\phi$ is guaranteed to need fewer symbols than $x$ has.
There are several technical details to work out. First of all, the input to $M$
need not be a well-formed pair. And if it is, the pair $\langle x, u\rangle$ has
to be decoded into separate components $x$ and $u$. These have to be decoded
again from the binary to the quaternary alphabet, which is only possible if both
$x$ and $u$ comprise only bit symbols (\textbf{01}). Then $M$ needs to check if
the decoded $x$ and $u$ are valid symbol sequences for lists (of lists) of
numbers. In the case of $u$ this is particularly finicky because the definition
of $\NP$ requires us to provide a string $u$ of exactly the length $p(|x|)$ and
so we have to pad $u$ with extra symbols, which have to be stripped again before
the validation can take place.
In the first subsection we describe what the verifier TM has to do in terms of
symbol sequences. In the subsections after that we devise a Turing machine that
implements this behavior.
›
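Stripped of all string encodings, the verification idea can be sketched in Python; `satisfies` follows the CNF semantics used throughout (every clause must contain at least one true literal), and the names are illustrative:

```python
def satisfies_literal(assign, lit):
    kind, v = lit
    return assign(v) if kind == 'Pos' else not assign(v)

def satisfies(assign, phi):
    # CNF semantics: every clause has at least one satisfied literal
    return all(any(satisfies_literal(assign, l) for l in c) for c in phi)

def verify(phi, u):
    # the witness u is a list of variables; it encodes the assignment
    # that sets exactly the listed variables to True
    assign = lambda v: v in set(u)
    return [1] if satisfies(assign, phi) else []
```

Note that `verify([], u)` returns `[1]` for every `u`, matching the observation that the empty CNF formula is satisfiable.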
subsection ‹Verifying \SAT{} instances›
text ‹
Our verifier Turing machine for \SAT{} will implement the following function;
that is, on input @{term zs} it will output @{term "verify_sat zs"}. It
performs a number of decodings and well-formedness checks and outputs either
\textbf{1} or the empty symbol sequence.
›
definition verify_sat :: "symbol list ⇒ symbol list" where
"verify_sat zs ≡
  let
    ys = bindecode zs;
    xs = bindecode (first ys);
    vs = rstrip ♯ (bindecode (second ys))
  in if even (length (first ys)) ∧ bit_symbols (first ys) ∧ numlistlist_wf xs
then if bit_symbols (second ys) ∧ numlist_wf vs
then if (λv. v ∈ set (zs_numlist vs)) ⊨ zs_formula xs then [𝟭] else []
else []
else [𝟭]"
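The branching structure of this definition can be mirrored in Python. All decoders and checks are passed in as parameters because their real counterparts live in the theory; the stand-ins used when calling it are assumptions for illustration only:

```python
def verify_sat(zs, *, bindecode, first, second, rstrip, bit_symbols,
               numlistlist_wf, numlist_wf, zs_numlist, zs_formula, satisfies):
    # Mirrors the Isabelle definition's control flow:
    #   x invalid -> [1]  (invalid strings are members of SAT by definition)
    #   u invalid -> []
    #   otherwise -> [1] iff the decoded assignment satisfies the formula
    ys = bindecode(zs)
    xs = bindecode(first(ys))
    vs = rstrip(bindecode(second(ys)))
    if len(first(ys)) % 2 == 0 and bit_symbols(first(ys)) and numlistlist_wf(xs):
        if bit_symbols(second(ys)) and numlist_wf(vs):
            assign = lambda v: v in set(zs_numlist(vs))
            return [1] if satisfies(assign, zs_formula(xs)) else []
        return []
    return [1]
```

With trivial identity stand-ins for the decoders, the three-way branching is easy to exercise; only the structure, not the real encodings, is modeled here.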
text ‹
Next we show that @{const verify_sat} behaves as is required of a verifier TM
for \SAT. Its polynomial running time is the subject of the next subsection.
›
text ‹
First we consider the case that @{term zs} encodes a pair $\langle x, u\rangle$
of strings where $x$ does not represent a CNF formula. Such an $x$ is in \SAT{},
hence the verifier TM is supposed to output \textbf{1}.
›
lemma ex_phi_x:
assumes "xs = string_to_symbols x"
assumes "even (length xs)" and "numlistlist_wf (bindecode xs)"
shows "∃φ. x = formula_to_string φ"
proof -
obtain nss where "numlistlist nss = bindecode xs"
using assms(3) numlistlist_wf_def by auto
moreover have "binencode (bindecode xs) = xs"
using assms(1,2) binencode_decode by simp
ultimately have "binencode (numlistlist nss) = xs"
by simp
then have "binencode (numlistlist (formula_n (n_formula nss))) = xs"
using formula_n_formula by simp
then have "formula_to_string (n_formula nss) = symbols_to_string xs"
by simp
then show ?thesis
using assms(1) symbols_to_string_to_symbols by auto
qed
lemma verify_sat_not_wf_phi:
assumes "zs = ⟨x; u⟩" and "¬ (∃φ. x = formula_to_string φ)"
shows "verify_sat zs = [𝟭]"
proof -
define ys where "ys = bindecode zs"
then have first_ys: "first ys = string_to_symbols x"
using first_pair assms(1) by simp
then have 2: "bit_symbols (first ys)"
using assms(1) bit_symbols_first ys_def by presburger
define xs where "xs = bindecode (first ys)"
then have "¬ (even (length (first ys)) ∧ bit_symbols (first ys) ∧ numlistlist_wf xs)"
using first_ys ex_phi_x assms(2) by auto
then show "verify_sat zs = [𝟭]"
unfolding verify_sat_def Let_def using ys_def xs_def by simp
qed
text ‹
The next case is that @{term zs} represents a pair $\langle x, u\rangle$ where
$x$ represents an unsatisfiable CNF formula. This $x$ is thus not in \SAT{} and
the verifier TM must output something different from \textbf{1}, such as the
empty string, regardless of $u$.
›
lemma verify_sat_not_sat:
fixes φ :: formula
assumes "zs = ⟨formula_to_string φ; u⟩" and "¬ satisfiable φ"
shows "verify_sat zs = []"
proof -
have bs_phi: "bit_symbols (binencode (formula_zs φ))"
using bit_symbols_binencode formula_zs_def proper_symbols_numlistlist symbols_lt_numlistlist
by (metis Suc_le_eq dual_order.refl numeral_2_eq_2)
define ys where "ys = bindecode zs"
then have "first ys = string_to_symbols (formula_to_string φ)"
using first_pair assms(1) by simp
then have first_ys: "first ys = binencode (formula_zs φ)"
using bit_symbols_to_symbols bs_phi formula_zs_def by simp
then have 2: "bit_symbols (first ys)"
using assms(1) bit_symbols_first ys_def by presburger
have 22: "even (length (first ys))"
using first_ys by simp
define xs where "xs = bindecode (first ys)"
define vs where "vs = rstrip ♯ (bindecode (second ys))"
have wf_xs: "numlistlist_wf xs"
using xs_def first_ys bindecode_encode formula_zs_def numlistlist_wf_def numlistlist_wf_has2'
by (metis le_simps(3) numerals(2))
have φ: "zs_formula xs = φ"
using xs_def first_ys "2" binencode_decode formula_to_string_inj formula_zs_def formula_zs_formula wf_xs
by auto
have "verify_sat zs =
(if bit_symbols (second ys) ∧ numlist_wf vs
then if (λv. v ∈ set (zs_numlist vs)) ⊨ φ then [𝟭] else []
else [])"
unfolding verify_sat_def Let_def using ys_def xs_def vs_def 2 22 wf_xs φ by simp
then show "verify_sat zs = []"
using assms(2) satisfiable_def by simp
qed
text ‹
Next we consider the case in which @{term zs} represents a pair $\langle x,
u\rangle$ where $x$ represents a satisfiable CNF formula and $u$ a list of
numbers padded at the right with @{text ♯} symbols. This $u$ thus represents a
variable assignment, namely the one assigning True to exactly the variables in
the list. The verifier TM is to output \textbf{1} iff.\ this assignment
satisfies the CNF formula represented by $x$.
First we show that stripping away @{text ♯} symbols does not damage a symbol
sequence representing a list of numbers.
›
lemma rstrip_numlist_append: "rstrip ♯ (numlist vars @ replicate pad ♯) = numlist vars"
(is "rstrip ♯ ?zs = ?ys")
proof -
have "(LEAST i. i ≤ length ?zs ∧ set (drop i ?zs) ⊆ {♯}) = length ?ys"
proof (rule Least_equality)
show "length ?ys ≤ length ?zs ∧ set (drop (length ?ys) ?zs) ⊆ {♯}"
by auto
show "length ?ys ≤ m" if "m ≤ length ?zs ∧ set (drop m ?zs) ⊆ {♯}" for m
proof (rule ccontr)
assume "¬ length ?ys ≤ m"
then have "m < length ?ys"
by simp
then have "?ys ! m ∈ set (drop m ?ys)"
by (metis Cons_nth_drop_Suc list.set_intros(1))
moreover have "set (drop m ?ys) ⊆ {♯}"
using that by simp
ultimately have "?ys ! m = ♯"
by auto
moreover have "?ys ! m < ♯"
using symbols_lt_numlist `m < length ?ys` by simp
ultimately show False
by simp
qed
qed
then show ?thesis
using rstrip_def by simp
qed
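In operational terms, @{const rstrip} removes the maximal run of a given symbol from the right end; since a numlist contains no ♯ symbols, stripping recovers it exactly. A Python sketch, using 5 as the assumed numeric code of ♯:

```python
SHARP = 5  # assumed numeric code of the padding symbol ♯

def rstrip(sym, zs):
    # drop the maximal suffix consisting only of `sym`; equivalently,
    # keep the least prefix zs[:i] with set(zs[i:]) ⊆ {sym}, as in the lemma
    i = len(zs)
    while i > 0 and zs[i - 1] == sym:
        i -= 1
    return zs[:i]
```

For example, `rstrip(SHARP, [2, 3, 2] + [SHARP] * 4)` returns `[2, 3, 2]`, while a list with no trailing ♯ is returned unchanged.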
lemma verify_sat_wf:
fixes φ :: formula and pad :: nat
assumes "zs = ⟨formula_to_string φ; symbols_to_string (binencode (numlist vars @ replicate pad ♯))⟩"
shows "verify_sat zs = (if (λv. v ∈ set vars) ⊨ φ then [𝟭] else [])"
proof -
have bs_phi: "bit_symbols (binencode (formula_zs φ))"
using bit_symbols_binencode formula_zs_def proper_symbols_numlistlist symbols_lt_numlistlist
by (metis Suc_le_eq dual_order.refl numeral_2_eq_2)
have "binencodable (numlist vars @ replicate pad ♯)"
using proper_symbols_numlist symbols_lt_numlist binencodable_append[of "numlist vars" "replicate pad ♯"]
by fastforce
then have bs_vars: "bit_symbols (binencode (numlist vars @ replicate pad ♯))"
using bit_symbols_binencode by simp
define ys where "ys = bindecode zs"
then have "first ys = string_to_symbols (formula_to_string φ)"
using first_pair assms(1) by simp
then have first_ys: "first ys = binencode (formula_zs φ)"
using bit_symbols_to_symbols bs_phi formula_zs_def by simp
then have bs_first: "bit_symbols (first ys)"
using assms(1) bit_symbols_first ys_def by presburger
have even: "even (length (first ys))"
using first_ys by simp
have second_ys: "second ys = binencode (numlist vars @ replicate pad ♯)"
using bs_vars ys_def assms(1) bit_symbols_to_symbols second_pair by simp
then have bs_second: "bit_symbols (second ys)"
using bs_vars by simp
define xs where "xs = bindecode (first ys)"
define vs where "vs = rstrip ♯ (bindecode (second ys))"
then have "vs = rstrip ♯ (numlist vars @ replicate pad ♯)"
using second_ys ‹binencodable (numlist vars @ replicate pad ♯)› bindecode_encode by simp
then have vs: "vs = numlist vars"
using rstrip_numlist_append by simp
have wf_xs: "numlistlist_wf xs"
using xs_def first_ys bindecode_encode formula_zs_def numlistlist_wf_def numlistlist_wf_has2'
by (metis le_simps(3) numerals(2))
have "verify_sat zs =
(if even (length (first ys)) ∧ bit_symbols (first ys) ∧ numlistlist_wf xs
then if bit_symbols (second ys) ∧ numlist_wf vs
then if (λv. v ∈ set (zs_numlist vs)) ⊨ zs_formula xs then [𝟭] else []
else []
else [3])"
unfolding verify_sat_def Let_def using bs_second ys_def xs_def vs_def by simp
then have *: "verify_sat zs =
(if bit_symbols (second ys) ∧ numlist_wf vs
then if (λv. v ∈ set (zs_numlist vs)) ⊨ zs_formula xs then [𝟭] else []
else [])"
unfolding verify_sat_def Let_def using even bs_first wf_xs by simp
have "xs = formula_zs φ"
using xs_def bindecode_encode formula_zs_def first_ys proper_symbols_numlistlist symbols_lt_numlistlist
by (simp add: Suc_leI numerals(2))
then have φ: "φ = zs_formula xs"
by (simp add: zs_formula_zs)
have vars: "vars = zs_numlist vs"
using vs numlist_wf_def numlist_zs_numlist zs_numlist_ex1 by blast
then have wf_vs: "numlist_wf vs"
using numlist_wf_def vs by auto
have "verify_sat zs = (if (λv. v ∈ set (zs_numlist vs)) ⊨ zs_formula xs then [𝟭] else [])"
using * bs_second wf_xs wf_vs by simp
then show ?thesis
using φ vars by simp
qed
text ‹
Finally we show that for every string $x$ representing a satisfiable CNF formula
there is a list of numbers representing a satisfying assignment and represented
by a string of length at most $|x|$. That means there is always a string of
exactly the length of $x$ consisting of a satisfying assignment plus some
padding symbols.
›
lemma nllength_remove1:
assumes "n ∈ set ns"
shows "nllength (n # remove1 n ns) = nllength ns"
using assms nllength sum_list_map_remove1[of n ns "λn. Suc (nlength n)"] by simp
lemma nllength_distinct_le:
assumes "distinct ns"
and "set ns ⊆ set ms"
shows "nllength ns ≤ nllength ms"
using assms
proof (induction ms arbitrary: ns)
case Nil
then show ?case
by simp
next
case (Cons m ms)
show ?case
proof (cases "m ∈ set ns")
case True
let ?ns = "remove1 m ns"
have "set ?ns ⊆ set ms"
using Cons by auto
moreover have "distinct ?ns"
using Cons by simp
ultimately have *: "nllength ?ns ≤ nllength ms"
using Cons by simp
have "nllength ns = nllength (m # ?ns)"
using True nllength_remove1 by simp
also have "... = Suc (nlength m) + nllength ?ns"
using nllength by simp
also have "... ≤ Suc (nlength m) + nllength ms"
using * by simp
also have "... = nllength (m # ms)"
using nllength by simp
finally show ?thesis .
next
case False
then have "set ns ⊆ set ms"
using Cons by auto
moreover have "distinct ns"
using Cons by simp
ultimately have "nllength ns ≤ nllength ms"
using Cons by simp
then show ?thesis
using nllength by simp
qed
qed
lemma nlllength_nllength_concat: "nlllength nss = nllength (concat nss) + length nss"
using nlllength nllength by (induction nss) simp_all
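This identity, and the reason $p(n) = n$ suffices, can be checked numerically under the assumed costs: a number $n$ takes `nlength n` symbols plus one separator inside a list, and each inner list adds one more separator at the outer level. These cost functions are reconstructions for illustration:

```python
def nlength(n):
    # symbols in the canonical binary representation; nlength 0 = 0
    return n.bit_length()

def nllength(ns):
    # list of numbers: each number plus one separator symbol
    return sum(nlength(n) + 1 for n in ns)

def nlllength(nss):
    # list of lists: each inner list plus one outer separator symbol
    return sum(nllength(ns) + 1 for ns in nss)

def concat(nss):
    return [n for ns in nss for n in ns]
```

Under these assumptions `nlllength nss = nllength (concat nss) + length nss`: the outer list costs exactly one extra symbol per inner list, which is the slack that lets a flat witness list fit within the length of the formula encoding.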
fun variable :: "literal ⇒ nat" where
"variable (Neg i) = i" |
"variable (Pos i) = i"
lemma sum_list_eq: "ns = ms ⟹ sum_list ns = sum_list ms"
by simp
lemma nllength_clause_le: "nllength (clause_n cl) ≥ nllength (map variable cl)"
proof -
have "variable x ≤ literal_n x" for x
by (cases x) simp_all
then have *: "Suc (nlength (variable x)) ≤ Suc (nlength (literal_n x))" for x
using nlength_mono by simp
let ?ns = "map literal_n cl"
have "nllength (clause_n cl) = nllength ?ns"
using clause_n_def by presburger
also have "... = (∑n←?ns. Suc (nlength n))"
using nllength by simp
also have "... = (∑x←cl. Suc (nlength (literal_n x)))"
by (smt (verit, del_insts) length_map nth_equalityI nth_map)
also have "... ≥ (∑x←cl. Suc (nlength (variable x)))"
using * by (simp add: sum_list_mono)
finally have "(∑x←cl. Suc (nlength (variable x))) ≤ nllength (clause_n cl)"
by simp
moreover have "(∑x←cl. Suc (nlength (variable x))) = nllength (map variable cl)"
proof -
have 1: "map (λx. Suc (nlength (variable x))) cl = map (λn. Suc (nlength n)) (map variable cl)"
by simp
then have "(∑x←cl. Suc (nlength (variable x))) = (∑n←(map variable cl). Suc (nlength n))"
using sum_list_eq[OF 1] by simp
then show ?thesis
using nllength by simp
qed
ultimately show ?thesis
by simp
qed
lemma nlllength_formula_ge: "nlllength (formula_n φ) ≥ nlllength (map (map variable) φ)"
proof (induction φ)
case Nil
then show ?case
by simp
next
case (Cons cl φ)
have "nlllength (map (map variable) (cl # φ)) =
nlllength (map (map variable) [cl]) + nlllength (map (map variable) φ)"
using nlllength by simp
also have "... = Suc (nllength (map variable cl)) + nlllength (map (map variable) φ)"
using nlllength by simp
also have "... ≤ Suc (nllength (map variable cl)) + nlllength (formula_n φ)"
using Cons by simp
also have "... ≤ Suc (nllength (clause_n cl)) + nlllength (formula_n φ)"
using nllength_clause_le by simp
also have "... = nlllength (formula_n (cl # φ))"
using nlllength by (simp add: formula_n_def)
finally show ?case .
qed
lemma variables_shorter_formula:
fixes φ :: formula and vars :: "nat list"
assumes "set vars ⊆ variables φ" and "distinct vars"
shows "nllength vars ≤ nlllength (formula_n φ)"
proof -
let ?nss = "map (map variable) φ"
have "nllength (concat ?nss) ≤ nlllength ?nss"
using nlllength_nllength_concat by simp
then have *: "nllength (concat ?nss) ≤ nlllength (formula_n φ)"
using nlllength_formula_ge by (meson le_trans)
have "set vars ⊆ set (concat ?nss)"
proof
fix n :: nat
assume "n ∈ set vars"
then have "n ∈ variables φ"
using assms(1) by auto
then obtain c where c: "c ∈ set φ" "Neg n ∈ set c ∨ Pos n ∈ set c"
using variables_def by auto
then obtain x where x: "x ∈ set c" "variable x = n"
by auto
then show "n ∈ set (concat (map (map variable) φ))"
using c by auto
qed
then have "nllength vars ≤ nllength (concat ?nss)"
using nllength_distinct_le assms(2) by simp
then show ?thesis
using * by simp
qed
lemma ex_assignment_linear_length:
assumes "satisfiable φ"
shows "∃vars. (λv. v ∈ set vars) ⊨ φ ∧ nllength vars ≤ nlllength (formula_n φ)"
proof -
obtain α where α: "α ⊨ φ"
using assms satisfiable_def by auto
define poss where "poss = {v. v ∈ variables φ ∧ α v}"
then have "finite poss"
using finite_variables by simp
let ?beta = "λv. v ∈ poss"
have sat: "?beta ⊨ φ"
unfolding satisfies_def
proof
fix c :: clause
assume "c ∈ set φ"
then have "satisfies_clause α c"
using satisfies_def α by simp
then obtain x where x: "x ∈ set c" "satisfies_literal α x"
using satisfies_clause_def by auto
show "satisfies_clause ?beta c"
proof (cases x)
case (Neg n)
then have "¬ α n"
using x(2) by simp
then have "n ∉ poss"
using poss_def by simp
then have "¬ ?beta n"
by simp
then have "satisfies_literal ?beta x"
using Neg by simp
then show ?thesis
using satisfies_clause_def x(1) by auto
next
case (Pos n)
then have "α n"
using x(2) by simp
then have "n ∈ poss"
using poss_def Pos ‹c ∈ set φ› variables_def x(1) by auto
then have "?beta n"
by simp
then have "satisfies_literal ?beta x"
using Pos by simp
then show ?thesis
using satisfies_clause_def x(1) by auto
qed
qed
obtain vars where vars: "set vars = poss" "distinct vars"
using `finite poss` by (meson finite_distinct_list)
moreover from this have "set vars ⊆ variables φ"
using poss_def by simp
ultimately have "nllength vars ≤ nlllength (formula_n φ)"
using variables_shorter_formula by simp
moreover have "(λv. v ∈ set vars) ⊨ φ"
using vars(1) sat by simp
ultimately show ?thesis
by auto
qed
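The key construction of this proof, restricting a satisfying assignment to the set of its positively assigned variables occurring in φ, is easy to state in Python (illustrative names; `satisfies` as in the CNF semantics used throughout):

```python
def variables(phi):
    return {v for clause in phi for (_, v) in clause}

def restrict(alpha, phi):
    # keep exactly those variables of phi that alpha sets to True;
    # the result is a "list-style" assignment as used by the verifier
    poss = {v for v in variables(phi) if alpha(v)}
    return lambda v: v in poss

def satisfies(assign, phi):
    return all(any((assign(v) if k == 'Pos' else not assign(v)) for k, v in c)
               for c in phi)
```

If `satisfies(alpha, phi)` holds then so does `satisfies(restrict(alpha, phi), phi)`: negative literals stay false outside `poss`, and the variable of any satisfied positive literal lands in `poss`.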
lemma ex_witness_linear_length:
fixes φ :: formula
assumes "satisfiable φ"
shows "∃us.
bit_symbols us ∧
length us = length (formula_to_string φ) ∧
verify_sat ⟨formula_to_string φ; symbols_to_string us⟩ = [𝟭]"
proof -
obtain vars where vars:
"(λv. v ∈ set vars) ⊨ φ"
"nllength vars ≤ nlllength (formula_n φ)"
using assms ex_assignment_linear_length by auto
define pad where "pad = nlllength (formula_n φ) - nllength vars"
then have "nllength vars + pad = nlllength (formula_n φ)"
using vars(2) by simp
moreover define us where "us = numlist vars @ replicate pad ♯"
ultimately have "length us = nlllength (formula_n φ)"
by (simp add: nllength_def)
then have "length (binencode us) = length (formula_to_string φ)" (is "length ?us = _")
by (simp add: nlllength_def)
moreover have "verify_sat ⟨formula_to_string φ; symbols_to_string ?us⟩ = [𝟭]"
using us_def vars(1) assms verify_sat_wf by simp
moreover have "bit_symbols ?us"
proof -
have "binencodable (numlist vars)"
using proper_symbols_numlist symbols_lt_numlist
by (metis Suc_leI lessI less_Suc_numeral numeral_2_eq_2 numeral_le_iff numeral_less_iff
order_less_le_trans pred_numeral_simps(3) semiring_norm(74))
moreover have "binencodable (replicate pad ♯)"
by simp
ultimately have "binencodable us"
using us_def binencodable_append by simp
then show ?thesis
using bit_symbols_binencode by simp
qed
ultimately show ?thesis
by blast
qed
lemma bit_symbols_verify_sat: "bit_symbols (verify_sat zs)"
unfolding verify_sat_def Let_def by simp
subsection ‹A Turing machine for verifying formulas›
text ‹
The core of the function @{const verify_sat} is the expression @{term " (λv. v ∈
set (zs_numlist vs)) ⊨ zs_formula xs"}, which checks if an assignment
represented by a list of variable indices satisfies a CNF formula represented by
a list of lists of literals. In this section we devise a Turing machine
performing this check.
Recall that the numbers 0 and 1 are represented by the empty symbol sequence
and the symbol sequence \textbf{1}, respectively. The Turing machines in this
section are described in terms of numbers.
We start with a Turing machine that checks a clause. The TM accepts on tape
$j_1$ a list of numbers representing an assignment $\alpha$ and on tape $j_2$ a
list of numbers representing a clause. It outputs on tape $j_3$ the number 1 if
$\alpha$ satisfies the clause, and otherwise 0. To do this the TM iterates over
all literals in the clause and determines the underlying variable and the sign
of the literal. If the literal is positive and the variable is in the list
representing $\alpha$ or if the literal is negative and the variable is not in
the list, the number 1 is written to the tape $j_3$. Otherwise the tape remains
unchanged. We assume $j_3$ is initialized with 0, and so it will be 1 if and
only if at least one literal is satisfied by $\alpha$.
The TM requires five auxiliary tapes $j_3 + 1, \dots, j_3 + 5$. Tape $j_3 + 1$
stores the literals one at a time, and later the variable; tape $j_3 + 2$ stores
the sign of the literal; tape $j_3 + 3$ stores whether the variable is contained
in $\alpha$; tapes $j_3 + 4$ and $j_3 + 5$ are the auxiliary tapes of @{const tm_contains}.
›
definition tm_sat_clause :: "tapeidx ⇒ tapeidx ⇒ tapeidx ⇒ machine" where
"tm_sat_clause j1 j2 j3 ≡
WHILE [] ; λrs. rs ! j2 ≠ □ DO
tm_nextract 4 j2 (j3 + 1) ;;
tm_mod2 (j3 + 1) (j3 + 2) ;;
tm_div2 (j3 + 1) ;;
tm_contains j1 (j3 + 1) (j3 + 3) ;;
IF λrs. rs ! (j3 + 3) = □ ∧ rs ! (j3 + 2) = □ ∨ rs ! (j3 + 3) ≠ □ ∧ rs ! (j3 + 2) ≠ □ THEN
tm_setn j3 1
ELSE
[]
ENDIF ;;
tm_setn (j3 + 1) 0 ;;
tm_setn (j3 + 2) 0 ;;
tm_setn (j3 + 3) 0
DONE ;;
tm_cr j2"
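Abstracting from tapes, the loop of this TM can be simulated in Python on an encoded clause: each literal number decodes into variable `n div 2` and sign `n mod 2`, and the IF condition (both flags blank or both non-blank) is exactly the test that membership in the assignment agrees with the sign:

```python
def sat_clause(vars_list, encoded_clause):
    # vars_list encodes the assignment: exactly these variables are True
    vs = set(vars_list)
    result = 0                       # tape j3, initialized to 0
    for n in encoded_clause:         # tm_nextract: one literal per iteration
        v, sign = n // 2, n % 2      # tm_div2 / tm_mod2
        contained = v in vs          # tm_contains
        # satisfied iff (positive and contained) or (negative and not contained)
        if contained == (sign == 1):
            result = 1               # tm_setn j3 1
    return result
```

For instance, with the clause $[\mathord{+}1, \mathord{-}2]$ encoded as `[3, 4]`, the assignment `[1]` satisfies it via the first literal, while `[2]` satisfies neither literal.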
lemma tm_sat_clause_tm:
assumes "k ≥ 2" and "G ≥ 5" and "j3 + 5 < k" "0 < j1" "j1 < k" "j2 < k" "j1 < j3"
shows "turing_machine k G (tm_sat_clause j1 j2 j3)"
using tm_sat_clause_def tm_mod2_tm tm_div2_tm tm_nextract_tm tm_setn_tm tm_contains_tm Nil_tm tm_cr_tm
assms turing_machine_loop_turing_machine turing_machine_branch_turing_machine
by simp
locale turing_machine_sat_clause =
fixes j1 j2 j3 :: tapeidx
begin
definition "tmL1 ≡ tm_nextract 4 j2 (j3 + 1)"
definition "tmL2 ≡ tmL1 ;; tm_mod2 (j3 + 1) (j3 + 2)"
definition "tmL3 ≡ tmL2 ;; tm_div2 (j3 + 1)"
definition "tmL4 ≡ tmL3 ;; tm_contains j1 (j3 + 1) (j3 + 3)"
definition "tmI ≡ IF λrs. rs ! (j3 + 3) = □ ∧ rs ! (j3 + 2) = □ ∨ rs ! (j3 + 3) ≠ □ ∧ rs ! (j3 + 2) ≠ □ THEN tm_setn j3 1 ELSE [] ENDIF"
definition "tmL5 ≡ tmL4 ;; tmI"
definition "tmL6 ≡ tmL5 ;; tm_setn (j3 + 1) 0"
definition "tmL7 ≡ tmL6 ;; tm_setn (j3 + 2) 0"
definition "tmL8 ≡ tmL7 ;; tm_setn (j3 + 3) 0"
definition "tmL ≡ WHILE [] ; λrs. rs ! j2 ≠ □ DO tmL8 DONE"
definition "tm2 ≡ tmL ;; tm_cr j2"
lemma tm2_eq_tm_sat_clause: "tm2 = tm_sat_clause j1 j2 j3"
unfolding tm2_def tmL_def tmL8_def tmL7_def tmL6_def tmL5_def tmL4_def tmL3_def tmI_def
tmL2_def tmL1_def tm_sat_clause_def
by simp
context
fixes tps0 :: "tape list" and k :: nat and vars :: "nat list" and clause :: clause
assumes jk: "0 < j1" "j1 ≠ j2" "j3 + 5 < k" "j1 < j3" "j2 < j3" "0 < j2" "length tps0 = k"
assumes tps0:
"tps0 ! j1 = nltape' vars 0"
"tps0 ! j2 = nltape' (clause_n clause) 0"
"tps0 ! j3 = (⌊0⌋⇩[N], 1)"
"tps0 ! (j3 + 1) = (⌊0⌋⇩[N], 1)"
"tps0 ! (j3 + 2) = (⌊0⌋⇩[N], 1)"
"tps0 ! (j3 + 3) = (⌊0⌋⇩[N], 1)"
"tps0 ! (j3 + 4) = (⌊0⌋⇩[N], 1)"
"tps0 ! (j3 + 5) = (⌊0⌋⇩[N], 1)"
begin
abbreviation "sat_take t ≡ satisfies_clause (λv. v ∈ set vars) (take t clause)"
definition tpsL :: "nat ⇒ tape list" where
"tpsL t ≡ tps0
[j2 := nltape' (clause_n clause) t,
j3 := (⌊sat_take t⌋⇩[B], 1)]"
lemma tpsL0: "tpsL 0 = tps0"
proof -
have "nltape' (clause_n clause) 0 = tps0 ! j2"
using tps0(2) by presburger
moreover have "⌊sat_take 0⌋⇩[B] = ⌊0⌋⇩[N]"
using satisfies_clause_def by simp
ultimately show ?thesis
using tpsL_def tps0 jk by (metis list_update_id)
qed
definition tpsL1 :: "nat ⇒ tape list" where
"tpsL1 t ≡ tps0
[j2 := nltape' (clause_n clause) (Suc t),
j3 := (⌊sat_take t⌋⇩[B], 1),
j3 + 1 := (⌊literal_n (clause ! t)⌋⇩[N], 1)]"
lemma tmL1 [transforms_intros]:
assumes "ttt = 12 + 2 * nlength (clause_n clause ! t)" and "t < length (clause_n clause)"
shows "transforms tmL1 (tpsL t) ttt (tpsL1 t)"
unfolding tmL1_def
proof (tform tps: assms tps0 tpsL_def tpsL1_def jk)
have len: "t < length clause"
using assms(2) clause_n_def by simp
show "ttt = 12 + 2 * nlength 0 + 2 * nlength (clause_n clause ! t)"
using assms(1) by simp
have *: "j2 ≠ j3"
using jk by simp
have **: "clause_n clause ! t = literal_n (clause ! t)"
using len by (simp add: clause_n_def)
show "tpsL1 t = (tpsL t)
[j2 := nltape' (clause_n clause) (Suc t),
j3 + 1 := (⌊clause_n clause ! t⌋⇩[N], 1)]"
unfolding tpsL_def tpsL1_def using list_update_swap[OF *, of tps0] by (simp add: **)
qed
definition tpsL2 :: "nat ⇒ tape list" where
"tpsL2 t ≡ tps0
[j2 := nltape' (clause_n clause) (Suc t),
j3 := (⌊sat_take t⌋⇩[B], 1),
j3 + 1 := (⌊literal_n (clause ! t)⌋⇩[N], 1),
j3 + 2 := (⌊literal_n (clause ! t) mod 2⌋⇩[N], 1)]"
lemma tmL2 [transforms_intros]:
assumes "ttt = 12 + 2 * nlength (clause_n clause ! t) + 1"
and "t < length (clause_n clause)"
shows "transforms tmL2 (tpsL t) ttt (tpsL2 t)"
unfolding tmL2_def by (tform tps: assms tps0 tpsL2_def tpsL1_def jk)
definition tpsL3 :: "nat ⇒ tape list" where
"tpsL3 t ≡ tps0
[j2 := nltape' (clause_n clause) (Suc t),
j3 := (⌊sat_take t⌋⇩[B], 1),
j3 + 1 := (⌊literal_n (clause ! t) div 2⌋⇩[N], 1),
j3 + 2 := (⌊literal_n (clause ! t) mod 2⌋⇩[N], 1)]"
lemma tmL3 [transforms_intros]:
assumes "ttt = 16 + 4 * nlength (clause_n clause ! t)"
and "t < length (clause_n clause)"
shows "transforms tmL3 (tpsL t) ttt (tpsL3 t)"
unfolding tmL3_def
proof (tform tps: assms(2) tps0 tpsL3_def tpsL2_def jk)
have len: "t < length clause"
using assms(2) clause_n_def by simp
have **: "clause_n clause ! t = literal_n (clause ! t)"
using len by (simp add: clause_n_def)
show "ttt = 12 + 2 * nlength (clause_n clause ! t) + 1 + (2 * nlength (literal_n (clause ! t)) + 3)"
using assms(1) ** by simp
qed
definition tpsL4 :: "nat ⇒ tape list" where
"tpsL4 t ≡ tps0
[j2 := nltape' (clause_n clause) (Suc t),
j3 := (⌊sat_take t⌋⇩[B], 1),
j3 + 1 := (⌊literal_n (clause ! t) div 2⌋⇩[N], 1),
j3 + 2 := (⌊literal_n (clause ! t) mod 2⌋⇩[N], 1),
j3 + 3 := (⌊literal_n (clause ! t) div 2 ∈ set vars⌋⇩[B], 1)]"
lemma tmL4 [transforms_intros]:
assumes "ttt = 20 + 4 * nlength (clause_n clause ! t) + 67 * (nllength vars)⇧2"
and "t < length (clause_n clause)"
shows "transforms tmL4 (tpsL t) ttt (tpsL4 t)"
unfolding tmL4_def
proof (tform tps: assms(2) tps0 tpsL4_def tpsL3_def jk time: assms(1))
have "tpsL3 t ! (j3 + 4) = (⌊0⌋⇩[N], 1)"
using tpsL3_def tps0 jk by simp
then show "tpsL3 t ! (j3 + 3 + 1) = (⌊0⌋⇩[N], 1)"
by (metis ab_semigroup_add_class.add_ac(1) numeral_plus_one semiring_norm(2) semiring_norm(8))
have "tpsL3 t ! (j3 + 5) = (⌊0⌋⇩[N], 1)"
using tpsL3_def tps0 jk by simp
then show "tpsL3 t ! (j3 + 3 + 2) = (⌊0⌋⇩[N], 1)"
by (simp add: numeral_Bit1)
qed
definition tpsL5 :: "nat ⇒ tape list" where
"tpsL5 t ≡ tps0
[j2 := nltape' (clause_n clause) (Suc t),
j3 := (⌊sat_take (Suc t)⌋⇩[B], 1),
j3 + 1 := (⌊literal_n (clause ! t) div 2⌋⇩[N], 1),
j3 + 2 := (⌊literal_n (clause ! t) mod 2⌋⇩[N], 1),
j3 + 3 := (⌊literal_n (clause ! t) div 2 ∈ set vars⌋⇩[B], 1)]"
lemma tmI [transforms_intros]:
assumes "ttt = 16" and "t < length (clause_n clause)"
shows "transforms tmI (tpsL4 t) ttt (tpsL5 t)"
unfolding tmI_def
proof (tform tps: jk tpsL4_def time: assms(1))
show "10 + 2 * nlength (if sat_take t then 1 else 0) + 2 * nlength 1 + 2 ≤ ttt"
using assms(1) nlength_0 nlength_1_simp by simp
have len: "t < length clause"
using assms(2) by (simp add: clause_n_def)
let ?l = "clause ! t"
have 1: "read (tpsL4 t) ! (j3 + 3) = □ ⟷ literal_n ?l div 2 ∉ set vars"
using tpsL4_def jk read_ncontents_eq_0[of "tpsL4 t" "j3 + 3"] by simp
have 2: "read (tpsL4 t) ! (j3 + 2) = □ ⟷ literal_n ?l mod 2 = 0"
using tpsL4_def jk read_ncontents_eq_0[of "tpsL4 t" "j3 + 2"] by simp
let ?a = "λv. v ∈ set vars"
let ?cond = "read (tpsL4 t) ! (j3 + 3) = □ ∧ read (tpsL4 t) ! (j3 + 2) = □ ∨
read (tpsL4 t) ! (j3 + 3) ≠ □ ∧ read (tpsL4 t) ! (j3 + 2) ≠ □"
have *: "?cond ⟷ satisfies_literal ?a ?l"
proof (cases ?l)
case (Neg v)
then have "literal_n ?l div 2 = v" "literal_n ?l mod 2 = 0"
by simp_all
moreover from this have "satisfies_literal ?a ?l ⟷ v ∉ set vars"
using Neg by simp
ultimately show ?thesis
using 1 2 by simp
next
case (Pos v)
then have "literal_n ?l div 2 = v" "literal_n ?l mod 2 = 1"
by simp_all
moreover from this have "satisfies_literal ?a ?l ⟷ v ∈ set vars"
using Pos by simp
ultimately show ?thesis
using 1 2 by simp
qed
have **: "sat_take (Suc t) ⟷ sat_take t ∨ satisfies_literal ?a ?l"
using satisfies_clause_take[OF len] by simp
show "tpsL5 t = (tpsL4 t)[j3 := (⌊1⌋⇩[N], 1)]" if ?cond
proof -
have "(if sat_take (Suc t) then 1::nat else 0) = 1"
using that * ** by simp
then show ?thesis
unfolding tpsL5_def tpsL4_def using that by (simp add: list_update_swap)
qed
show "tpsL5 t = (tpsL4 t)" if "¬ ?cond"
proof -
have "sat_take t = sat_take (Suc t)"
using * ** that by simp
then show ?thesis
unfolding tpsL5_def tpsL4_def using that by (simp add: list_update_swap)
qed
qed
lemma tmL5 [transforms_intros]:
assumes "ttt = 36 + 4 * nlength (clause_n clause ! t) + 67 * (nllength vars)⇧2"
and "t < length (clause_n clause)"
shows "transforms tmL5 (tpsL t) ttt (tpsL5 t)"
unfolding tmL5_def by (tform tps: assms)
definition tpsL6 :: "nat ⇒ tape list" where
"tpsL6 t ≡ tps0
[j2 := nltape' (clause_n clause) (Suc t),
j3 := (⌊sat_take (Suc t)⌋⇩[B], 1),
j3 + 1 := (⌊0⌋⇩[N], 1),
j3 + 2 := (⌊literal_n (clause ! t) mod 2⌋⇩[N], 1),
j3 + 3 := (⌊literal_n (clause ! t) div 2 ∈ set vars⌋⇩[B], 1)]"
lemma tmL6 [transforms_intros]:
assumes "ttt = 46 + 4 * nlength (clause_n clause ! t) + 67 * (nllength vars)⇧2 + 2 * nlength (literal_n (clause ! t) div 2)"
and "t < length (clause_n clause)"
shows "transforms tmL6 (tpsL t) ttt (tpsL6 t)"
unfolding tmL6_def by (tform tps: assms tps0 tpsL6_def tpsL5_def jk)
definition tpsL7 :: "nat ⇒ tape list" where
"tpsL7 t ≡ tps0
[j2 := nltape' (clause_n clause) (Suc t),
j3 := (⌊sat_take (Suc t)⌋⇩[B], 1),
j3 + 1 := (⌊0⌋⇩[N], 1),
j3 + 2 := (⌊0⌋⇩[N], 1),
j3 + 3 := (⌊literal_n (clause ! t) div 2 ∈ set vars⌋⇩[B], 1)]"
lemma tmL7 [transforms_intros]:
assumes "ttt = 56 + 4 * nlength (clause_n clause ! t) + 67 * (nllength vars)⇧2 + 2 * nlength (literal_n (clause ! t) div 2) +
2 * nlength (literal_n (clause ! t) mod 2)"
and "t < length (clause_n clause)"
shows "transforms tmL7 (tpsL t) ttt (tpsL7 t)"
unfolding tmL7_def by (tform tps: assms tps0 tpsL7_def tpsL6_def jk)
definition tpsL8 :: "nat ⇒ tape list" where
"tpsL8 t ≡ tps0
[j2 := nltape' (clause_n clause) (Suc t),
j3 := (⌊sat_take (Suc t)⌋⇩[B], 1),
j3 + 1 := (⌊0⌋⇩[N], 1),
j3 + 2 := (⌊0⌋⇩[N], 1),
j3 + 3 := (⌊0⌋⇩[N], 1)]"
lemma tmL8:
assumes "ttt = 66 + 4 * nlength (clause_n clause ! t) + 67 * (nllength vars)⇧2 +
2 * nlength (literal_n (clause ! t) div 2) +
2 * nlength (literal_n (clause ! t) mod 2) +
2 * nlength (if literal_n (clause ! t) div 2 ∈ set vars then 1 else 0)"
and "t < length (clause_n clause)"
shows "transforms tmL8 (tpsL t) ttt (tpsL8 t)"
unfolding tmL8_def by (tform tps: assms tps0 tpsL8_def tpsL7_def jk)
lemma tmL8':
assumes "ttt = 70 + 6 * nllength (clause_n clause) + 67 * (nllength vars)⇧2"
and "t < length (clause_n clause)"
shows "transforms tmL8 (tpsL t) ttt (tpsL8 t)"
proof -
let ?l = "literal_n (clause ! t)"
let ?ll = "clause_n clause ! t"
let ?t = "66 + 4 * nlength ?ll + 67 * (nllength vars)⇧2 +
2 * nlength (?l div 2) + 2 * nlength (?l mod 2) + 2 * nlength (if ?l div 2 ∈ set vars then 1 else 0)"
have "?t = 66 + 4 * nlength ?ll + 67 * (nllength vars)⇧2 +
2 * nlength (?ll div 2) + 2 * nlength (?ll mod 2) + 2 * nlength (if ?ll div 2 ∈ set vars then 1 else 0)"
using assms(2) clause_n_def by simp
also have "... ≤ 66 + 4 * nlength ?ll + 67 * (nllength vars)⇧2 +
2 * nlength ?ll + 2 * nlength (?ll mod 2) + 2 * nlength (if ?ll div 2 ∈ set vars then 1 else 0)"
using nlength_mono[of "?ll div 2" "?ll"] by simp
also have "... = 66 + 6 * nlength ?ll + 67 * (nllength vars)⇧2 +
2 * nlength (?ll mod 2) + 2 * nlength (if ?ll div 2 ∈ set vars then 1 else 0)"
by simp
also have "... ≤ 66 + 6 * nlength ?ll + 67 * (nllength vars)⇧2 +
2 * nlength 1 +
Binary and ternary recombination of H2D+ and HD2+ ions with electrons at 80 K
The recombination of deuterated trihydrogen cations with electrons has been studied in afterglow plasmas containing mixtures of helium, argon, hydrogen and deuterium. By monitoring the fractional abundances of H3+, H2D+, HD2+ and D3+ as a function of the [D2]/[H2] ratio using infrared absorption observed in a cavity ring-down absorption spectrometer (CRDS), it was possible to deduce effective recombination rate coefficients for H2D+ and HD2+ ions at a temperature of 80 K.
From the pressure dependences of the measured effective recombination rate coefficients, the binary and ternary recombination rate coefficients for both ions have been determined. The inferred binary and ternary recombination rate coefficients are: αbin(H2D+)(80 K) = (7.1 ± 4.2) × 10⁻⁸ cm³ s⁻¹, αbin(HD2+)(80 K) = (8.7 ± 2.5) × 10⁻⁸ cm³ s⁻¹, K(H2D+)(80 K) = (1.1 ± 0.6) × 10⁻²⁵ cm⁶ s⁻¹ and K(HD2+)(80 K) = (1.5 ± 0.4) × 10⁻²⁵ cm⁶ s⁻¹.
Excel Physics
This is a model simulating a two-dimensional random walk in two variants, one using a digital angle (in 90-degree increments) and one using an analog angle between zero and 2*pi. This type of model (similar to the walk of a very drunk man) pertains very well to numerically solving Monte Carlo diffusion-type problems. It is a brute force model using…
The Melting Snow Castle – a diffusion model application
This is an application of the previously derived 2D diffusion (heat transfer) model. The snow melting process is very similar to diffusion or heat transfer. Just open the model and hit "Start-Pause" to see for yourself. It is a 2003 model or earlier. I wasn't able to run it in 2007 with animation, but I've got some friends who managed to run…
Building a Dynamic Two Dimensional Heat Transfer Model – part #2
This is the second half of the tutorial, which shows how to build a basic animated 2D heat transfer model in Excel. By George Lungu – the tutorial builds a basic dynamic heat conduction model of a square…
Building a Dynamic Two Dimensional Heat Transfer Model – part #1
Here is the first part of a tutorial which shows how to build a two-dimensional heat transfer model in Excel. The presentation shows how to partition a square plate into elementary elements to which the simplest form of the heat storage and heat transfer equations can be applied. The numerical form of the final temperature formula is derived.
A Basic 2D Animated Heat Transfer Model (a diffusion model too)
Here is a basic 2D heat transfer model. The first five worksheets model square plates of 30 x 30 elements. The last worksheet is the model of a 50 x 50 plate. You can modify the initial temperature by hand within the range C21:AF240. It is also a diffusion model. It uses the storage and transport equations derived in the previous tutorials.
Animated Heat Transfer Modeling for the Average Joe – part #4
This is the last tutorial of the series, and it shows how to implement the previously derived formulas into a spreadsheet model. The spreadsheet formulas, the macros and the charting of the dynamic data are explained. By George Lungu – this is the last section of the beginner series…
Animated Heat Transfer Modeling for the Average Joe – part #3
This section shows how to model heat transfer in a linear bar by dividing it into elementary sections in which the basic linear equations introduced in the previous tutorials can be used. By George Lungu – this is a new section of the beginner series of tutorials in heat…
Animated Heat Transfer Modeling for the Average Joe – part #2
This is a continuation of the first part of the beginner series of tutorials in heat transfer modeling. The first part introduced the reader to the concept of heat capacity (analogous to electrical capacitance). This section continues with the concept of heat conductance, which is analogous to electrical conductance; Ohm's law applies. Toward the end, the principle…
Animated Heat Transfer Modeling for the Average Joe – part #1
This is the first tutorial on modeling heat transfer at a very introductory level. If you follow this series and spend your own effort developing your own models, you will be able to model heat transfer in very complex shapes (1D, 2D, 3D) in a short time and with the basic understanding of a 12-year-old schoolboy.
A One-Dimensional Dynamic Heat Transfer Model – a diffusion model
Hi guys, here is a 1D dynamic model I built today simulating heat transfer in a 21-segment bar. Just click on the orange "Demo" button for a quick demo. Hitting "Reset" sets the 21 segments of the bar to the initial conditions, which is a fully customizable initial temperature map. Clicking "Start/Pause" starts the simulation, and you can watch the bar temperature profile…
How to Model a Frequency Modulated (FM) Signal – an insight
Both frequency and phase modulation are important not only in electronics but also in science and physics in general. It seems like a trivial chore, but when I first tried to model such a signal some time back I hit a hard wall. Our minds easily understand kinematic concepts such as coordinate, speed, acceleration and the relations between them in…
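The "hard wall" hinted at above is usually this: instantaneous frequency is the derivative of phase, so an FM waveform must be generated by accumulating phase sample by sample rather than by writing sin(2πf(t)·t) directly. A hedged numerical sketch (names and parameters are illustrative):

```python
import math

def fm_signal(freq_fn, duration, dt=1e-4, amplitude=1.0):
    """Generate an FM waveform by integrating instantaneous frequency:
    s(t) = A * sin(2*pi * integral of f(tau) d tau), accumulated per sample."""
    samples = []
    phase = 0.0
    t = 0.0
    for _ in range(int(duration / dt)):
        phase += 2.0 * math.pi * freq_fn(t) * dt  # phase accumulation step
        samples.append(amplitude * math.sin(phase))
        t += dt
    return samples

# Example: a 100 Hz carrier whose frequency wobbles +/- 20 Hz at a 5 Hz rate.
wave = fm_signal(lambda t: 100.0 + 20.0 * math.sin(2.0 * math.pi * 5.0 * t), 0.2)
```

For a constant frequency the accumulator reduces to an ordinary sinusoid, which is a handy sanity check on the implementation.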
Casual Introduction to Numerical Methods – spring-mass-damper system model – part#5
In this tutorial, most of the calculations for the numerical simulation of an SMD (spring-mass-damper) system will be consolidated into a single formula, the coordinate formula. In this case, in order to calculate the coordinate at the end of any time step, we will need just the coordinates from the previous two time steps and, of course, the input parameters (constants). These…
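A two-previous-steps coordinate formula of the kind described above falls out of a central-difference discretization. The sketch below is the generic derivation for a free system m·x'' + c·x' + k·x = 0, not necessarily the tutorial's exact algebra:

```python
def smd_trajectory(m, c, k, x0, v0, dt, steps):
    """Central-difference integration of a free spring-mass-damper system.
    Substituting x'' -> (x_next - 2*x + x_prev)/dt**2 and
    x' -> (x_next - x_prev)/(2*dt) and solving for x_next gives a recurrence
    that needs only the two previous coordinates."""
    a = m / dt**2 + c / (2 * dt)   # coefficient of x_next
    b = 2 * m / dt**2 - k          # coefficient of x
    d = c / (2 * dt) - m / dt**2   # coefficient of x_prev
    xs = [x0, x0 + v0 * dt]        # seed the two-step recurrence
    for _ in range(steps):
        xs.append((b * xs[-1] + d * xs[-2]) / a)
    return xs
```

Releasing a damped oscillator from rest produces the expected decaying oscillation, which makes for an easy check of the recurrence coefficients.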
Casual Introduction to Numerical Methods – spring-mass-damper system model – part#4
This tutorial explains the principles of generating animation for the spring-mass-damper system analyzed in the previous presentations. By George Lungu – we are trying to generate animation for the system sketched above, knowing the deviation from equilibrium as a function of time. This deviation is…
2D Projectile Motion Tutorial #7
In this tutorial, after we got most of the trajectory calculation concentrated in just two columns, we will write a custom VBA function (dual output) to replace the spreadsheet computations used. This process of starting with very simple models, then refining the calculations and then learning how to write custom functions for those calculations will be extremely useful later for developing more complex models.
Measures of prestige
1. Unit prestige measures
□ ordinary and valued degree
□ influence domain
□ proximity prestige
□ hubs and authorities
2. Similarity / dissimilarity of prestige measures
3. Examples
Find prestige measures in given networks.
Find influence domain, proximity prestige and hubs and authorities. Draw the network where hubs and authorities are represented by vertex size (hubs: size in x direction; authorities: size in y direction).
Compare results to centrality measures - compute Pearson correlation coefficients among all pairs of prestige measures (input degree, input closeness, betweenness, (proximity prestige), authority
weights). To make it faster you can use exports to R, SPSS or Excel:
• R: Tools/R, x<-data.frame(v?,v?,...,v?), cor(x)...
• SPSS: Tools/SPSS, Run All, Analyze/Correlate/Bivariate...
• Excel: Tools/Excel...
Present the results in a table and interpret them.
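The hubs-and-authorities iteration and the Pearson comparison asked for above can also be sketched in pure Python (the toy network and names are made up for illustration; in practice you would run this on the assigned network in Pajek, R or SPSS):

```python
import math

def hits(adj, iters=100):
    """Hub and authority scores by power iteration.
    adj[u] is the list of nodes that u points to."""
    nodes = list(adj)
    hub = {u: 1.0 for u in nodes}
    auth = {u: 1.0 for u in nodes}
    for _ in range(iters):
        # authorities collect weight from the hubs that point at them
        auth = {v: sum(hub[u] for u in nodes if v in adj[u]) for v in nodes}
        norm = math.sqrt(sum(a * a for a in auth.values())) or 1.0
        auth = {v: a / norm for v, a in auth.items()}
        # hubs collect weight from the authorities they point at
        hub = {u: sum(auth[v] for v in adj[u]) for u in nodes}
        norm = math.sqrt(sum(h * h for h in hub.values())) or 1.0
        hub = {u: h / norm for u, h in hub.items()}
    return hub, auth

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy directed network: 0 and 1 act as hubs, 2 and 3 as authorities.
adj = {0: [2, 3], 1: [2, 3], 2: [3], 3: []}
hub, auth = hits(adj)
indegree = {v: sum(v in out for out in adj.values()) for v in adj}
r = pearson([indegree[v] for v in adj], [auth[v] for v in adj])
```

On this toy network input degree and authority weight are strongly correlated, which is the kind of pattern the exercise asks you to quantify and interpret.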
Slides (PDF)
Understanding the Calculations
This calculator uses the Effective Annual Rate (EAR) to accurately compute the interest rate per payment period when your payment frequency differs from the interest compounding frequency.
Why Use the Effective Annual Rate?
The EAR accounts for the impact of interest compounding over the year, ensuring that the interest rate per payment period reflects the true cost of borrowing.
How We Calculate the Interest Rate Per Payment Period:
1. Calculate the EAR: Convert the nominal annual interest rate to the EAR using the formula:
EAR = (1 + (Nominal Rate / Compounding Periods))^Compounding Periods – 1
2. Determine the Interest Rate Per Period: Calculate the rate per payment period based on your payment frequency:
Interest Rate Per Period = (1 + EAR)^(1 / Payments Per Year) – 1
Why Results May Differ from Other Calculators:
Some calculators may use different methods to compute the interest rate per period, leading to varying results. Our method aligns with standard financial practices to provide accurate calculations.
Example Calculation:
Suppose you have a nominal annual interest rate of 5% with monthly compounding. Here’s how the EAR and interest rate per period are calculated:
• Calculate the EAR: EAR = (1 + (0.05 / 12))^12 – 1 ≈ 0.05116 or 5.116%
• Determine the Interest Rate Per Period (Monthly): Interest Rate Per Period = (1 + 0.05116)^(1/12) – 1 ≈ 0.0042 or 0.42%
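The two formulas above can be checked with a short Python sketch (the function names are ours, not the calculator's):

```python
def effective_annual_rate(nominal_rate, compounding_periods):
    """EAR = (1 + nominal/m)^m - 1, for m compounding periods per year."""
    return (1.0 + nominal_rate / compounding_periods) ** compounding_periods - 1.0

def rate_per_payment_period(nominal_rate, compounding_periods, payments_per_year):
    """Per-period rate consistent with the EAR: (1 + EAR)^(1/p) - 1."""
    ear = effective_annual_rate(nominal_rate, compounding_periods)
    return (1.0 + ear) ** (1.0 / payments_per_year) - 1.0

ear = effective_annual_rate(0.05, 12)            # ~0.05116, i.e. 5.116%
monthly = rate_per_payment_period(0.05, 12, 12)  # ~0.00417 per month
```

Note that when computed without intermediate rounding, the monthly rate comes out as exactly the nominal 0.05/12 ≈ 0.417% (the walk-through's 0.42% reflects rounding the EAR first); for other payment frequencies, such as quarterly, the EAR-based conversion gives the compounding-consistent rate.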
Key Benefits of Using the EAR Method:
• Accuracy: Reflects the true cost of borrowing by accounting for compound interest.
• Flexibility: Adjusts interest rates based on different payment frequencies.
• Comparability: Allows for accurate comparison between different loan offers and payment schedules.
1. What is the difference between nominal and effective interest rates?
The nominal interest rate is the stated annual rate without accounting for compounding within the year. The effective annual rate (EAR) includes the effects of intra-year compounding, providing a
more accurate representation of the actual interest accrued.
2. Can I use this calculator for different payment frequencies?
Yes! This calculator allows you to choose between monthly, bi-weekly, and quarterly payment frequencies, adjusting the interest rate per period accordingly to ensure accurate calculations.
No evidence for a difference in Bayesian reasoning for egocentric versus allocentric spatial cognition
Bayesian reasoning (i.e. prior integration, cue combination, and loss minimization) has emerged as a prominent model for some kinds of human perception and cognition. The major theoretical issue is
that we do not yet have a robust way to predict when we will or will not observe Bayesian effects in human performance. Here we tested a proposed divide in terms of Bayesian reasoning for egocentric
spatial cognition versus allocentric spatial cognition (self-centered versus world-centred). The proposal states that people will show stronger Bayesian reasoning effects when it is possible to
perform the Bayesian calculations within the egocentric frame, as opposed to requiring an allocentric frame. Three experiments were conducted with one egocentric-allowing condition and one
allocentric-requiring condition but otherwise matched as closely as possible. No difference was found in terms of prior integration (Experiment 1), cue combination (Experiment 2), or loss
minimization (Experiment 3). The contrast in previous reports, where Bayesian effects are present in many egocentric-allowing tasks while they are absent in many allocentric-requiring tasks, is
likely due to other differences between the tasks–for example, the way allocentric-requiring tasks are often more complex and memory intensive.
Citation: Negen J (2024) No evidence for a difference in Bayesian reasoning for egocentric versus allocentric spatial cognition. PLoS ONE 19(10): e0312018. https://doi.org/10.1371/
Editor: Piers Douglas Lionel Howe, University of Melbourne, AUSTRALIA
Received: April 10, 2024; Accepted: September 30, 2024; Published: October 10, 2024
Copyright: © 2024 James Negen. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original author and source are credited.
Data Availability: All data are available at https://osf.io/53vef/.
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
Bayesian reasoning is a general mathematical framework for making decisions while in a state of uncertainty [1–3]. It has three general hallmarks. First, prior integration is when the observer takes
advantage of the way that certain states of the world have a long-term distribution, integrating this with short-term information to increase precision [4–6]. For example, both a cold and throat
cancer can cause a sore throat, but a cold is much more common and thus a more likely diagnosis. Second, cue combination is when multiple cues to the same aspect of the world are combined in a
reliability-weighted average that increases precision [2, 7–9]. For example, people can be more precise localizing an audiovisual signal than localizing just the constituent audio signal alone or the
constituent visual signal alone [7]. Third, loss minimization is when an observer takes into account the different costs of making different kinds of errors and thus minimizes the expected loss for a
decision [10, 11]. For example, a person might drive a little closer to the mountain side of the road than the cliff side of the road because making an error where they scrape a fender against a
mountain rock is less of a cost than an error where they fall off a cliff. An observer that can demonstrate each of these to their precision-maximizing, expected-cost-minimizing level is called Bayes
optimal or near-optimal. Surprisingly, Bayesian reasoning has shown itself to be a useful model for certain kinds of human perception and cognition [1–3], though there are also well-known exceptions.
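For concreteness, the reliability-weighted average that defines optimal cue combination can be written out directly. This is a generic sketch of the standard Gaussian model, not code from the study; the example numbers are invented:

```python
def combine_cues(mu1, sigma1, mu2, sigma2):
    """Optimal (minimum-variance) combination of two Gaussian cues.
    Each cue is weighted by its reliability 1/sigma^2; the combined
    estimate is more precise than either cue alone."""
    r1, r2 = 1.0 / sigma1**2, 1.0 / sigma2**2
    mu = (r1 * mu1 + r2 * mu2) / (r1 + r2)
    sigma = (r1 + r2) ** -0.5   # combined sd is below either single-cue sd
    return mu, sigma

# Example: an audio cue at 10 deg (sd 4) and a visual cue at 6 deg (sd 2).
mu, sigma = combine_cues(10.0, 4.0, 6.0, 2.0)
```

The combined estimate lands closer to the more reliable visual cue, and its standard deviation falls below 2, which is the precision gain that audiovisual localization studies test for.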
The main limit in this theoretical framework is that there are not yet well-understood general principles suggesting when we should versus should not expect Bayesian reasoning effects. Even famous
examples of near-optimal behaviour, like the combination of audio-visual cues for location [7], have failed to reliably replicate for reasons that may not be fully understood [13–16]. In addition, at
least one famous example of anti-Bayesian behaviour, the size-weight illusion, is now thought to have a valid explanation in Bayesian reasoning [17, 18]. This means that Bayesian reasoning, as a
current model for human perception and cognition, is arguably more of a post-hoc description than a predictive theory [19]. As such, one major research goal is the discovery of principles suggesting
when we should versus should not expect Bayesian reasoning.
This paper in particular focuses on the proposal that there is a major divide in Bayesian reasoning for tasks that allow egocentric spatial cognition versus tasks that require allocentric spatial
cognition. Egocentric spatial cognition is defined by coordinates in own-body-centred terms e.g. 3m to my left. Allocentric spatial cognition is defined by coordinates in world-centred terms e.g. 3m
north of the door. Tasks that allow egocentric reasoning are generally easier [20–23] and children generally master them earlier in development [24, 25]. Large networks of grid cells and place cells
are required to track allocentric information with granular precision [26], making it very costly in terms of biological investment. A previous study proposed this may lead to different evolutionary
pressures [27]. For example, prior integration requires long-term representations of locations to function. The associated storage cost may only be low enough to make it worthwhile if that long-term
representation can be stored in the egocentric frame. This leads to the proposal that we should see Bayesian effects taking place stronger/sooner in the egocentric-allowing version of a task than the
allocentric-requiring version (if not a total dissociation where it is present only in the egocentric-allowing version)–provided, of course, that the task makes it logically possible and beneficial
to carry out Bayesian reasoning. Testing this core proposal is the main aim of this paper.
A note about terminology will help here. For the remainder of the paper, for brevity and ease of reading, phrases like ‘egocentric condition’ will be used as shorthand for a condition that allows the
relevant Bayesian reasoning to take place in an entirely egocentric frame. The idea is that this might provide an easier way of performing the Bayesian reasoning (and thus show stronger effects)–not
that the task precludes alternative allocentric solutions. The phrase ‘allocentric condition’ will mean a condition that requires some use of the allocentric frame for the relevant Bayesian reasoning.
The proposal of an egocentric versus allocentric divide in terms of Bayesian reasoning fits with the existing literature in three key ways. First, it readily explains the extensive findings that
people can integrate egocentric priors [4, 28–36]. In practice, this means that they begin to bias their responses towards egocentric locations that have been more likely to be correct on earlier
trials. This lends plausibility to the idea that egocentric tasks will be readily completed with Bayesian reasoning.
Second, recent work has provided several examples of participants failing to show Bayesian reasoning in allocentric tasks that otherwise have much in common with egocentric tasks that are typically
used to demonstrate Bayesian reasoning. The one that inspired this paper directly was a study of allocentric prior integration [27]. Participants were shown targets in a virtual environment. They had
to recall them after a change in perspective, forcing an allocentric frame. Despite finding reliable biases of other types in the responses, there was no evidence of allocentric prior integration. A
related study failed to find cue combination with two sets of landmarks [37]. This fit the hypothesis for young children–but it was true even for adults. Both of these studies hypothesized and then
failed to find a Bayesian effect in an allocentric spatial task, lending plausibility to the idea that allocentric tasks could be much less readily completed with Bayesian reasoning.
Third, the core proposal here also fits well with visual search results [38–40]. In this kind of task, the participant is asked to quickly find a target among a field of similar distractors. There is
a particular part of the screen where the target is more likely. If the participant can stand in one place and do the task, making the target-rich area possible to track in egocentric coordinates,
then they use the target distribution to significantly increase their speed. On the other hand, if they have to move relative to the screen between trials, then the target rich area must be tracked
in allocentric terms to be useful. In that case, there is no similar speed improvement. This indicates that basic differences in egocentric vs allocentric probabilistic reasoning are generally
plausible. Moreover, it suggests that attention to the long-term distribution of allocentric coordinates may be generally poor–which would make it hard to develop the long-term statistical
understanding that Bayesian reasoning requires. This would all fit well with an egocentric versus allocentric divide in terms of Bayesian reasoning.
Of course, there will still be situations where Bayesian reasoning is either not logically possible or not beneficial enough to be worthwhile, so the methods here will need to avoid that to be a good
test of the core proposal. Optimal prior integration requires the participant to have enough learning time to be able to estimate the mean and variance of the long-term prior distribution. Optimal
prior integration effects are largest (and thus most readily detected) when the variance in the long-term prior distribution is relatively small and the task is hard enough that responses have a
relatively large variance. Optimal cue combination effects are largest when the two cues are comparable in their reliability. Optimal loss minimization effects are largest when the asymmetry is large
and the task is again hard enough that the responses have a relatively large variance. The arrangement of these parameters will guard against the possibility that the test fails to find a difference
just because the potential Bayesian effect is too small to detect.
The main aim here is a tightly matched test of the main proposal, namely an egocentric versus allocentric divide in terms of Bayesian reasoning. This further study is needed because existing studies
do not yet provide a full test of the hallmark Bayesian effects where the methods are designed to isolate the egocentric vs allocentric factor. Corresponding to the three hallmarks of Bayesian
reasoning, the present article reports three experiments that examine prior integration, cue combination, and loss minimization in egocentric vs allocentric versions of otherwise-matched tasks. For
each, the hypothesis is that the egocentric version of the task will show Bayesian reasoning while the allocentric will not.
All experiments were pre-registered at https://osf.io/5bq7e/wiki/home/. There are also example videos for each method at https://osf.io/53vef/ as well as a copy of the method, the data, and the
analysis code.
Experiment 1
This experiment tested the hypothesis that participants use an egocentric prior, but not an allocentric prior. The task here asks people to recall a target location from memory that was shown on a
disk before it was covered and spun. This will always come with some noise in memory and perception. To help them, in the two main conditions, there is a particular area where the targets are much
more likely to be. Informally, the best strategy is for participants to hedge their bets between their long-term understanding of where targets tend to be (the prior distribution) and their immediate
perception/memory of where the target was and how much the disk spun (termed the likelihood function). Formally, if prior integration is occurring, we should see a larger bias in their responses
towards the mode of the prior distribution when compared to a baseline with no informative prior distribution.
In every condition, the task was to see a target relative to a red line and then indicate where it lands after a rotation under a cover (Fig 1). In the baseline condition, the target’s final position
was uniformly distributed in both the egocentric frame (position on the screen) and the allocentric frame (position relative to the red line). In the allocentric condition, there was a normal prior
distribution in the allocentric frame. In the egocentric condition, there was a normal prior distribution in the egocentric frame. All conditions shared 8 key trials that were exactly the same across
conditions. These key trials were the only ones used in the analysis. The difference between conditions was the context of the other 88 trials that induced either the normal (informative) or uniform
(uninformative) prior distributions in the relevant frames. Any difference in performance on the key trials can therefore only be explained by the presence of the different prior distributions; the
only trials that were used in the analysis were exactly the same in every respect.
The participant was shown the target (top). It was covered and spun (middle). They clicked on their guess. They were given feedback (bottom). Everything here in orange is added for illustration and
was not shown to the participants. The number in the upper left is a trial counter.
75 participants were ultimately included (33 female, 40 male, 1 non-binary, 1 no response; ages 18 to 62 with mean 25, standard deviation 9) with 25 in each condition. An additional 22 were excluded
due to the pre-registered rule that the circular correlation between target and response must be at least 0.4 during the second block (16 female, 5 male, 1 no response; ages 18 to 66 with mean 30,
standard deviation 15). 31 participants were recruited through a university participant pool system where students and researchers volunteer for each other’s studies. The remaining participants were
recruited through Prolific and given £4 as compensation. Approval was granted by the Liverpool John Moores University Research Ethics Committee (Ref: 21/PSY/022). Consent was obtained in written
form. Recruitment began on 29 September 2021 and ended on 24 May 2022.
Sample sizes were based on conventions in the field. Since there was no specific previous work that used this exact method or addressed the egocentric versus allocentric factor, there was no
qualifying effect size to use for the desired power analysis. Studies in this area often have as few as 4–8 participants [4, 7, 8]. The study that directly inspired this one used 12 per condition [27]. Since we are looking for between-group differences that could be smaller, that was doubled to 24 and then rounded up to 25. This gives 80% power to detect differences of d = 0.71 (90% for 0.84;
95% for 0.94). The general convention in the field is that we want the power to see a difference from either a null effect or an optimal effect [41], so each of the following experiments tests to be
sure that is satisfied.
Apparatus and stimuli.
The experiments were programmed through Pavlovia. Participants used their own tablets or laptops.
General stimuli. Inside a grey void there was a large circle. In the center was a black dot. Around the edges there were 4 squares that were attached to the circle. There was also a red line that
touched the center dot and the edge of the circle. There was also a target, a small blue triangle. Finally, there was a black disc that could cover all of this except for the squares.
There were a total of 48 stimuli for each condition (one per trial). The distance from the target to the center dot was evenly distributed from 5% to 95% of the radius of the large circle. Of these
48 trials, 8 were designated as key trials and shared between all three conditions. These key trials all resulted in the red line being at 0 radians (straight right) and the target being in the upper
left corner of the circle. Specifically, the program first generated an even distribution of rotations around the circle. The key trials were the 8 trials that were nearest to 0.75π radians (but not
exactly equal to it). All trials also had a total rotation, a total amount that the target/line/disc/squares spun after the target was shown. This was generated as an even distribution from 0.25π to
1.75π. Added to this was a whole multiple of 2π, with a minimum multiple of 5 and a maximum of 10 (i.e. 10.25π to 21.75π).
Specific to allocentric condition. The remaining 40 stimuli were allocentrically normally distributed (i.e. informative prior). This means that the rotation from the line to the target was an
approximate normal distribution. Specifically, a linear spacing from 0.025 to 0.975 was input into an inverse normal CDF with a mean of 0.75π and a standard deviation of 0.1π, with the 8 points
nearest the key trials removed. These trials were egocentrically uniformly distributed (i.e. non-informative prior). This means that the final target’s position on the screen is evenly spaced around
the disc.
Specific to baseline condition. The remaining 40 stimuli are allocentrically uniformly distributed and egocentrically uniformly distributed.
Specific to egocentric condition. The remaining 40 stimuli are allocentrically uniformly distributed and egocentrically normally distributed (same mean and standard deviation as the allocentric condition).
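The quantile-based stimulus construction described above can be sketched with Python's standard-library NormalDist. The parameter values are taken from the text; the exact rule for dropping the 8 points nearest the key trials is our reading of the description:

```python
import math
from statistics import NormalDist

def allocentric_rotations(n=48, n_key=8, mode=0.75 * math.pi, sd=0.1 * math.pi):
    """Line-to-target rotations: evenly spaced quantile levels fed through
    an inverse normal CDF, with the n_key points closest to the mode
    removed (those slots become the shared key trials)."""
    prior = NormalDist(mu=mode, sigma=sd)
    ps = [0.025 + i * (0.975 - 0.025) / (n - 1) for i in range(n)]
    rotations = [prior.inv_cdf(p) for p in ps]
    rotations.sort(key=lambda r: abs(r - mode))
    return sorted(rotations[n_key:])   # the 40 non-key stimuli
```

Because the quantile levels are symmetric around 0.5 and the removed points are symmetric around the mode, the 40 remaining rotations average to the prior mode of 0.75π, which keeps the induced prior centred where the key trials sit.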
Instructions were given to click on the target after the spin. The trial procedure then began. There were 96 trials split into two blocks of 48 (training and testing). Each block used the same
stimuli in a random order, including the key trials. On each trial, the disc, squares, and red line were shown. At that point, the target’s distance to the center, as well as its angle to red line,
was set and did not change. The target pulsed for 3 seconds and then it was no longer visible. The black disc covered the large circle and red line. Over 2s, the line/target/squares/circle all spun
for the total rotation amount. This placed the line and target in the intended final position. The black disc faded away. The participant tried to click on the new position of the target. They were
shown the correct target location for 3s. The next trial began. Nothing marked the transition between blocks. Nothing marked a key trial as unusual in any way.
Planned analysis.
Participants were removed as outliers if the circular correlation between target and response was not at least 0.4. Trials were removed as outliers if the absolute theta error was more than 90°. For
each participant, we examined the key trials in the second block. We calculated the bias towards the prior mode. This has two parts: (a) the average distance from the target to the prior mode and (b)
the average distance from the response to the prior mode. If there is a bias towards the prior mode, then we expect A to be larger than B on average. The bias index is therefore calculated as A minus
B. The hypothesis was that the bias would be greater in the egocentric condition than the baseline condition, while the bias would not be greater in the allocentric condition than the baseline
condition, which implies that the bias would be greater in the egocentric condition than the allocentric condition. This was tested with a trio of one-tailed t-tests. T-tests are preferred here over
an ANOVA just because follow-up testing would be required after the ANOVA anyway.
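As a concrete sketch, the bias index described above can be computed as follows (Python; the function names are ours, and angles are in radians):

```python
import numpy as np

def circular_distance(a, b):
    # smallest absolute angular difference, in radians (range 0 to pi)
    return np.abs(np.angle(np.exp(1j * (np.asarray(a) - np.asarray(b)))))

def bias_index(targets, responses, prior_mode):
    # (a) average distance from target to prior mode
    a = circular_distance(targets, prior_mode).mean()
    # (b) average distance from response to prior mode
    b = circular_distance(responses, prior_mode).mean()
    # positive values mean responses sit closer to the prior mode than targets do
    return a - b
```

For example, responses pulled halfway from the targets towards the prior mode yield a positive index equal to half the mean target-to-mode distance.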
Results were not consistent with the overall hypothesis. Bias indices were not significantly higher on average in the allocentric group versus the baseline group, t(48) = 1.30, p = 0.100, d = 0.37;
nor the egocentric versus baseline, t(48) = 1.34, p = 0.093, d = 0.38; nor the egocentric versus allocentric, t(48) = -0.06, p = 0.523, d = -0.02 (Fig 2). In other words, the overall hypothesis
correctly predicted that there would be no significant difference between allocentric versus baseline–but the overall hypothesis also predicted two further differences (egocentric above baseline;
egocentric above allocentric) that were not found. This does not provide meaningful support for the larger hypothesis of a major divide in Bayesian reasoning for egocentric versus allocentric spatial tasks.
Bias index is in radians. Error bars are 95% confidence intervals. Crosses are individual participants.
Further examination unfortunately revealed that the pre-registered analysis was not working as intended and needed post-hoc modification. The exclusions were meant to screen out participants who did
not understand the task (circular correlation < .4) or trials where they were not paying attention (absolute error > 90°). This did not work well on the final data. The included participant pool
features 3 participants who had more than 50% of their responses excluded for an error over 90° (8 over 25%; 17 over 10%), suggesting that they were likely just guessing. Further, participants who
had fewer trials with an error under 90° also tended to have a lower bias index, r = 0.69, suggesting that the inclusion of lower performance bands tends to push the mean bias index downwards. While
it was not effective at screening out the performance issue, it did screen out the highest bias indices in the overall sample. The circular correlation coefficient used in the pre-registration has a
feature that is unlike linear correlation. Any systematic bias, including the prior integration effect of interest here, lowers the circular correlation. In summary, when applied to the final data,
the exclusion criteria did not effectively screen out low performance but did screen out participants with a high level of the effect of interest.
To correct this, further post-hoc analyses changed to a new exclusion rule where the median absolute error must be under 45°. This seems like a reasonable indication that the participant understands
the task as it represents half the error size that would be achieved by pure guessing on average. In contrast, the prior integration effect of interest here would not particularly increase the median
absolute error. Results below are similar if other round cutoffs are inserted instead (30°, 60°, 90°; detailed below). This should be a much more effective way of excluding participants who did not
understand the task while not excluding the effect of interest.
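A minimal sketch of the revised exclusion rule (Python; the function name is ours):

```python
import numpy as np

def include_participant(theta_errors, cutoff_deg=45):
    # keep a participant only if their median absolute angular error
    # (radians) falls under the cutoff; pure guessing averages 90 degrees
    return np.median(np.abs(theta_errors)) < np.deg2rad(cutoff_deg)
```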
This analysis with the updated exclusion criteria found that the allocentric bias indices were higher on average than the baseline, t(41) = 2.77, p = 0.004 (.003 for 30° exclusion cutoff; .009 for
60°; .016 for 90°), d = 0.79, and the same for the egocentric group, t(51) = 1.75, p = 0.043 (.030 for 30°; .053 for 60°; .034 for 90°), d = 0.48. However, the bias indices were still not
significantly higher in the egocentric group than in the allocentric group, t(52) = -1.47, p = 0.926 (.935 for 30°; .869 for 60°; .773 for 90°), d = -0.40. This suggests there may have been an effect of prior integration in the
non-baseline conditions but does not suggest any particular difference in this effect between the two non-baseline conditions.
We also checked to be sure that there was scope for the prior to be of use (i.e. that participants were not so accurate that the prior’s contribution is not helpful) and that power concerns were
satisfied. The standard deviation of the prior is π/10 or 0.314. The root mean squared error was 0.421. This means that the optimal observer would place a 64% weight on the prior. We interpret this
to mean that the prior did have meaningful scope to be useful in this experiment. Further, the optimal observer would have a bias index of 0.21. Both conditions were significantly different from this: t(21) = -7.32, p < .001, d = -1.56 for allocentric; t(31) = -11.43, p < .001, d = -2.02 for egocentric. This passes the check on statistical power by showing that the observed effect is
distinguishable from either zero or optimal (both in this case).
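The 64% weight can be reproduced with the standard inverse-variance weighting rule, treating the root mean squared error as the observer's likelihood noise (a sketch; the function name is ours):

```python
def optimal_prior_weight(likelihood_sd, prior_sd):
    # inverse-variance weighting: an ideal observer weights the prior
    # in proportion to the likelihood's variance
    return likelihood_sd**2 / (likelihood_sd**2 + prior_sd**2)

# values reported above: RMSE = 0.421, prior sd = pi/10 = 0.314
w = optimal_prior_weight(0.421, 0.314)
```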
Experiment 1 did not yield any evidence for an egocentric versus allocentric divide in terms of Bayesian reasoning. There was scope for such prior integration to be helpful. There was some evidence
that prior integration was happening, at least with a more appropriate exclusion criterion, but not that it was any different for the egocentric versus allocentric conditions.
Experiment 2
This experiment tested the hypothesis that participants combine egocentric cues, but not allocentric cues. The task was to locate a target relative to one landmark, a different landmark, or both
together. Informally, the best strategy is to use each landmark independently to estimate the target location and then average those estimates, weighing the closer one a little more. Formally, cue
combination should result in the variable error (the standard deviation of perceptual/memory noise) being lower with both cues present versus the nearest single cue. The crucial manipulation between
conditions is the nature of the cues: in the allocentric version, the target’s new location must be found relative to the landmarks; in the egocentric version, the landmarks emit a motion cue that
can be used in an entirely egocentric frame.
Beyond the egocentric versus allocentric manipulation, the two conditions were otherwise matched as closely as possible. On a given trial, the participant was given a near cue, a far cue, or both
cues to a target location. If cue combination is occurring, we should see better precision with both cues than the near cue. For the allocentric condition, the cues were seeing the target relative to
near/far/both landmarks before the scene spun (Fig 3). For the egocentric condition, the cues were near/far/both moving squares that came out of two landmarks (Fig 4).
The target was shown relative to the landmarks (top). The target disappeared and the landmarks spun around the screen (middle). The participant clicked where they thought the target would now be and
the correct answer was shown (bottom). Everything here in orange is added for illustration and was not shown to the participants. A near cue trial would only have the grey landmark and a far cue
trial would only have the black one. The number in the upper left is a trial counter.
The landmarks spun to their final positions. They then emitted a motion cue: a black box moved along the first half of the direct path from the landmark to the target. The top screen shows the
furthest point the black box moved. The bottom screen shows the target this indicates. A near cue trial would only have the motion cue from the grey landmark and a far cue trial would only have the
motion cue from the black landmark. The number in the upper left is a trial counter.
50 participants were ultimately included (34 female, 13 male, 3 no response; ages 18 to 54 with mean 24, standard deviation 18) with 25 in each condition. An additional 15 participants were excluded
under the pre-registered rule that the linear correlation between target and response must be at least 0.40 on both axes (13 female, 2 male; 18 to 54 years old with mean 27, standard deviation 11).
36 participants were recruited through a university participant pool system where students and researchers volunteer for each other’s studies. The remaining participants were recruited through
Prolific and given £4 as compensation. Approval was granted by the Liverpool John Moores University Research Ethics Committee (Ref: 21/PSY/022). Consent was obtained in written form. Recruitment
began on 29 September 2021 and ended on 24 May 2022.
Apparatus and stimuli.
The experiment was programmed with Pavlovia. Participants used their own tablets or laptops.
General stimuli. On a white background, there were two small triangles (light grey and black) that served as landmarks. Each landmark had a small black box attached that could be moved towards the
target for the egocentric condition. There was also a target, a small blue triangle.
The targets were on a 6x6 grid, omitting corners (32 targets). These were 5/16, 3/16, 1/16, and so on from the center in each axis. Each target had an assigned total rotation with two components. The
first was evenly distributed from .25π to 1.75π in 8 steps (each used 4 times). The second was an even multiple of 2π, with a random whole multiple between 10 and 20 (i.e. 20.25π to 41.75π). To make
the stimuli for test trials, this was repeated with either the black landmark, the grey landmark, or both (96 trials). All that varied across trial types was the set of cues presented.
Specific to egocentric condition. To indicate an egocentric position, the box(es) attached to the landmark(s) moved halfway to the target position over a period of 1s, moving faster at the beginning
and slowing their velocity linearly to a stop. When stopped, they disappeared. There was one moving square, the other, or both depending on the trial type.
Specific to allocentric condition. To indicate an allocentric position, the target pulsed in place relative to the landmark(s) for 3s. This then disappeared before the landmark(s) spun. There was one
landmark, the other, or both depending on the trial type.
Participants were instructed to find the target after the spin. Instructions explained how the relevant cue functioned: “Try to click where the target lands after the spin” (Allocentric) and “Try to
click where the squares would end up if they went twice as far” (Egocentric).
There were 3 warmup trials. The 96 test trials were then delivered in a random order. On each trial, the black landmark began at the top of the screen if it was used and the grey landmark began at
the bottom of the screen if it was used. In the allocentric condition, the target pulsed for 3s. The landmark(s) spun for 3s and came to a stop. The participant clicked where they thought the target
was, requiring them to remember how the target location related to the available landmark(s). The correct location was shown for 3s. In the egocentric condition, the target was not shown at the
beginning. Instead, the landmark(s) spun for 3s and came to a stop. The black box(es) then moved halfway towards the target location. This can be encoded, disregarding the landmarks, as movement
through nearby space in an egocentric frame. The participant then clicked where they thought the target was. The correct location was shown for 3s.
Planned analysis.
First, outlier participants were removed by screening for any participant who did not have a correlation between target and response of at least 0.4. Second, outlier data points were removed by
removing any responses that were more than 2.5 standard deviations from the target (i.e. find the Pythagorean distance from target to location for all responses, find the root mean squared distance,
and exclude anything more than 2.5x further).
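The trial-level screen can be sketched as follows (Python; the function name is ours):

```python
import numpy as np

def trial_inclusion_mask(targets_xy, responses_xy):
    # Pythagorean distance from each response to its target
    d = np.linalg.norm(np.asarray(responses_xy) - np.asarray(targets_xy), axis=1)
    # root mean squared distance across trials
    rms = np.sqrt(np.mean(d**2))
    # keep any response within 2.5x the RMS distance
    return d <= 2.5 * rms
```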
For each participant, six measures were extracted: variable error with the near cue, the far cue, and both cues–each repeated along the x axis and the y axis. Variable error is a measure of the noise
in responses, separate from the systematic biases present (often called the constant error). The idea is to get a basic measure of noise in the responses, then undo any deflation from any systematic
biases [42]. The basic noise measure was found by calculating the standard deviation of the residuals after regressing the responses onto the target location, center point, and the landmarks. Of
course, that standard deviation might be smaller than the actual noise in perception and memory if there is a systematic bias. For example, moving every response 50% of the way to the center would
make the basic noise measure 50% smaller. This is corrected, as shown in previous work [42], by dividing the basic noise measure by the unstandardized beta value for the targets from the same
regression. This recovers the underlying noise in perception and memory. To restate, the variable error is calculated as the standard deviation of the residuals (regressing responses onto the
targets, center, and landmarks) divided by the unstandardized beta value for the targets.
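A simplified sketch of this variable-error estimate (Python; for brevity it regresses the responses onto the targets with an intercept only, omitting the additional center and landmark regressors used in the actual analysis):

```python
import numpy as np

def variable_error(targets, responses):
    # regress responses onto targets (with intercept)
    X = np.column_stack([np.ones_like(targets), targets])
    beta, *_ = np.linalg.lstsq(X, responses, rcond=None)
    residual_sd = np.std(responses - X @ beta, ddof=2)
    # divide by the slope to undo deflation from systematic compression
    return residual_sd / beta[1]
```

For example, if every internal estimate (true position plus noise) is compressed 50% towards the center, the raw residual sd is halved, but so is the slope, so the division recovers the underlying noise.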
We then did a paired one-tailed t-test for each condition, testing the hypothesis that near variable error (averaged over the two axes) was greater than both-cues variable error (again averaging).
The hypothesis was that this effect would be present for the egocentric condition, but not the allocentric condition. A further plan to compare the two conditions' outcomes, if they both showed the
effect of interest, was registered but ultimately unneeded.
Results were not consistent with the overall hypothesis. While Near VE was not significantly higher than Both VE in the allocentric group, t(24) = -1.32, p = 0.900, d = -0.26, it was also not higher
in the egocentric group, t(24) = -8.99, p > .999, d = -1.80 (Fig 5). In other words, neither group had significantly lower noise in their responses when given both cues versus the nearest single cue;
neither showed a significant cue combination effect. This does not provide meaningful support for the central hypothesis.
Variable error is given in screen units–the length of the shorter dimension of the screen would be 1.0. Error bars are 95% confidence intervals and crosses are individual participants.
Post-hoc analyses checked if the task was sensitive to differences in trial types. For both groups, the Far VE was higher than the Near VE: allocentric, t(24) = -6.66, p < .001, d = 1.33 and
egocentric, t(24) = -4.91, p < .001, d = 0.98. This confirms that the task was capable of capturing basic differences in variable error.
Further post-hoc analyses also checked that there was scope for cue combination to be of aid and that power concerns were satisfied. We compared performance with both cues against the theoretical optimal VE under inverse-variance cue combination: Optimal VE = √(Near VE² · Far VE² / (Near VE² + Far VE²)). Both VE was higher than Optimal VE for the allocentric group, t(24) = 5.40, p < .001, d = 1.08, and the egocentric group, t(24) = 12.44, p < .001, d = 2.49. This in turn suggests that
the issue here is not just lack of scope for cue combination to be of aid; if that were the case, then we would expect Both VE versus Optimal VE to be indistinguishable. This also passes the check on
statistical power by showing that the observed effect is distinguishable from either zero or optimal (optimal in this case).
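The theoretical optimal VE referenced above follows the standard inverse-variance cue combination rule (a sketch; the function name is ours):

```python
def optimal_ve(near_ve, far_ve):
    # 1/sigma_opt^2 = 1/sigma_near^2 + 1/sigma_far^2
    return (near_ve**2 * far_ve**2 / (near_ve**2 + far_ve**2)) ** 0.5
```

Note that the combined error is always below the better single cue, which is why a Both VE no lower than the Near VE argues against cue combination.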
Experiment 2 did not yield any evidence for an egocentric versus allocentric divide in terms of Bayesian reasoning. There was scope for cue combination to be helpful. There was strong evidence that
different trial types led to different levels of variable error. However, there was no evidence of any difference for the egocentric versus allocentric conditions in terms of cue combination.
Experiment 3
This experiment tested the hypothesis that participants will use an asymmetric egocentric loss function to their advantage, but not an asymmetric allocentric loss function. The core task is the same
as the Experiment 1 baseline condition. However, here, each answer received a score. The crucial manipulation is that the side with a lighter score penalty is either towards a present landmark
(allocentric) or just the top of the screen (egocentric). If loss minimization is happening, people should bias their responses towards the side with a lighter score penalty for being incorrect.
Participants were given the same spatial task as the Experiment 1 baseline condition, seeing a target relative to a red line and then indicating where it landed after a spin under a cover. Their
score had a base value of 100 per trial, with points removed for errors in terms of rotation or distance to the center. The conditions either penalized rotational errors symmetrically (baseline),
penalized rotational errors towards the top of the screen less (egocentric), or penalized rotational errors towards the line less (allocentric) (Fig 6).
The participant clicked on the small triangle that is lower on the screen. The dots then traced out to the correct target, the other small triangle that is higher on the screen. Their score is
displayed nearby.
75 participants were ultimately included (46 female, 29 male; ages 18 to 45 with mean 24, standard deviation 6) with 25 in each condition. An additional 19 were excluded under the pre-registered
criterion that circular correlation between target and response must be at least 0.4 (14 female, 5 male; ages 18 to 61 with mean 24, standard deviation 10). 36 participants were recruited through a
university participant pool system where students and researchers volunteer for each other’s studies. The remaining participants were recruited through Prolific and given £2 as compensation. Approval
was granted by the Liverpool John Moores University Research Ethics Committee (Ref: 21/PSY/022). Consent was obtained in written form. Recruitment began on 29 September 2021 and ended on 24 May 2022.
Apparatus and stimuli.
The experiment was programmed using Pavlovia. Participants used their own tablets or laptops.
Inside a grey void there was a large circle. In the center was a black dot. Around the edges there were 4 squares that were attached to the circle. There was also a red line that touched the center
dot and the edge of the circle. There was also a target, a small blue triangle. Finally there was a black disc that could cover all of this except for the squares.
There were a total of 45 stimuli (one per trial). The initial rotation of the red line was evenly spaced from 0 to 2π, as was the initial target rotation. The initial distance to the center for the
target was evenly spaced from 10% to 90% of the way from the center dot to the large circle’s edge. The total rotation had two components. The first was evenly spaced from .25π to 1.75π. The second
was an even multiple of 2π, with a whole-number multiple between 5 and 15 (i.e. 10.25π to 31.75π). Each of these was randomly ordered once (independently) and used in the same order for all participants.
Instructions were given to click on the target after the spin. They were also given brief instructions about the scoring. These read “Errors TOWARDS the line count less (x0.5). Errors AWAY FROM the
line count more (x2)” or “Errors TOWARDS the top count less (x0.5). Errors AWAY FROM the top count more (x2)”.
There were 45 trials. On each trial, the disc, squares, and red line were shown. The target pulsed for 3 seconds. Over 2s, the line/target/squares/circle all spun for the total rotation amount. The
black disc faded away. The participant tried to click on the new position of the target. They were shown the correct target location for 3s. Alongside this, a short animation gave them their score.
It marked out the error in terms of distance to the center first, then the error in terms of rotation around the center. If the rotational error was in a less-penalized direction (i.e. closer to the
line/top than the target), the animation was green and the penalty was halved. If it was in a more-penalized direction, the animation was red and the penalty was doubled.
Planned analysis.
Participants were removed as outliers if the circular correlation between target and response was not at least 0.4. Trials were removed as outliers if the absolute theta error was more than 90°.
From each participant, we extracted the bias towards the top and bias towards the line. This was the average distance from top/line to target minus the average distance from top/line to response. A
bias of zero would mean the same average distance from the top/line to the response and the target. A bias of 0.1 towards the line/top would mean the response was 0.1 radians (about 5.7°) further
towards the line/top than the target on average. The possible range was -0.5π to +0.5π (-1.57 to +1.57). We hypothesized that the up-bias would be higher in the egocentric condition than the baseline
condition, whereas the line-bias would not be higher in the allocentric condition than the baseline condition. This was tested with two one-tailed t-tests.
Comparing the non-baseline conditions required a chi-square test for nested models. The full model had a mean up-bias index for the baseline group, an egocentric vs baseline mean difference for the
up-bias index, a mean line-bias for the baseline group, an allocentric vs baseline mean difference for the line-bias index, and a standard deviation. The restricted model used the same parameter for
both mean differences. A significant model comparison result would therefore indicate a difference in the size of the biases, corrected for baseline effects, between the allocentric vs egocentric conditions.
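This nested-model comparison can be sketched as follows (Python with NumPy/SciPy; names are ours). Both models are linear in their parameters under a common normal error sd, so each can be fitted by least squares and compared with the maximum-likelihood ratio statistic:

```python
import numpy as np
from scipy.stats import chi2

def lr_test(base_up, ego_up, base_line, allo_line):
    # full model: baseline up-bias mean, baseline line-bias mean,
    # ego-vs-baseline difference, allo-vs-baseline difference;
    # restricted model: one shared difference parameter
    y = np.concatenate([base_up, ego_up, base_line, allo_line])
    n = y.size
    sizes = [len(base_up), len(ego_up), len(base_line), len(allo_line)]
    up = np.repeat([1.0, 1.0, 0.0, 0.0], sizes)       # baseline up-bias mean
    line = 1.0 - up                                    # baseline line-bias mean
    d_ego = np.repeat([0.0, 1.0, 0.0, 0.0], sizes)     # ego vs baseline diff
    d_allo = np.repeat([0.0, 0.0, 0.0, 1.0], sizes)    # allo vs baseline diff

    def sse(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return r @ r

    sse_full = sse(np.column_stack([up, line, d_ego, d_allo]))
    sse_restricted = sse(np.column_stack([up, line, d_ego + d_allo]))
    stat = n * np.log(sse_restricted / sse_full)   # ML likelihood ratio
    return stat, chi2.sf(stat, df=1)
```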
Results were not consistent with the overall hypothesis. While the line-bias index in the allocentric group was not significantly greater than the baseline group, t(48) = 1.45, p = 0.077, d = 0.41,
the up-bias index in the egocentric group was also not significantly greater than the baseline group, t(48) = -0.29, p = 0.614, d = -0.08 (Fig 7). In other words, neither experimental group showed
significant evidence for a loss-minimizing bias in the direction of the less-penalized error. This does not provide meaningful support for the larger hypothesis.
Bias index is in radians. Error bars are 95% confidence intervals. Crosses are individual participants.
As in Experiment 1, we also checked post-hoc what would happen if we had instead used a different exclusion criterion–specifically one where the median absolute error must be below 45°. The
difference between the allocentric group’s bias towards the line and the baseline group’s bias towards the line was not significant, t(50) = 1.46, p = 0.075, d = 0.40. The egocentric versus baseline
comparison for bias up was significant, t(50) = 1.75, p = 0.043, d = 0.47. However, the difference between the two effects was not significant, χ²(1) = 0.02, p = 0.883. As before, this could suggest
that an effect of loss minimization was occurring here but not that it was any different for egocentric versus allocentric.
Post-hoc analyses also checked that there was scope for the gain functions to have an effect and that power concerns were satisfied. The root mean square error was .497 radians, which leads to an
optimal bias of 0.418 radians. We interpret this to mean that there was meaningful scope for gain maximization to affect the responses. Further, both conditions are significantly different from the
optimal prediction: t(25) = -18.39, p < .001, d = -3.61 for allocentric; t(25) = -27.15, p < .001, d = -5.32 for egocentric. This also passes the check on statistical power by showing that the
observed effect is distinguishable from either zero or optimal (optimal for allocentric; both for egocentric).
Experiment 3 also did not yield any evidence for an egocentric versus allocentric divide in terms of Bayesian reasoning. There was scope for the asymmetry in the loss function to be helpful. There
was some evidence that loss minimization was happening, at least with the updated exclusion rule, but not that it was any different for the egocentric versus allocentric conditions.
General discussion
The three experiments here did not find any evidence for any difference between egocentric-allowing and allocentric-requiring conditions in terms of Bayesian reasoning effects. There was no greater
ability to integrate an egocentric-allowing prior (Experiment 1), no greater benefit for combining egocentric-allowing cues (Experiment 2), and no greater ability to use an asymmetric
egocentric-allowing loss function (Experiment 3). This was despite all three experiments providing strong evidence that the relevant hallmark of Bayesian reasoning would be useful (i.e. to increase
precision or score) and at least some evidence that it was indeed present in Experiments 1 and 3. This discredits the proposed divide between egocentric-allowing and allocentric-requiring spatial
tasks in terms of Bayesian reasoning. There is no evidence here that participants can take advantage of the opportunity to do Bayesian reasoning in an egocentric frame, either by failing to attempt a
different strategy or just by failing to derive any benefit. These results instead suggest that previous differences in results–for example, integrating egocentric-allowing priors in one study [5]
and not integrating allocentric-requiring priors in another study [27]–are probably due to other methodological differences.
It is possible that a true underlying principle, a factor separating Bayesian vs non-Bayesian behaviour in spatial tasks, might still have something to do with the associated factor of task
complexity. When designing an egocentric-allowing task (or designing a spatial task without a preference for egocentric vs allocentric), the researcher often wants trials to be short so that data
collection can move efficiently. In contrast, allocentric-requiring tasks often necessitate longer, more complex trials to be sure that they force the participant to use world-centred coordinates.
The experiments here are matched as closely as possible and thus have a similar level of overall task complexity. This could explain why no difference was found here while differences are found when
comparing across studies that are not matched in this manner. It would also explain why Experiment 2 failed to show any Bayesian effects at all since it required two cues rather than one (and thus
could be viewed as more complex than Experiments 1 and 3). Complexity could also explain why cue combination has been found in single-dimension spatial judgements so often [7, 13, 14, 43] but not the
two-dimensional conditions here and elsewhere [37]. However, of course, this would not particularly explain any failures found in one dimension [15]. Further research would be required to clarify this.
As with any given series of experiments that return a null result, it is true here that a larger study (with more participants, more trials, or both) would have more power to detect smaller
differences and thus would allow a more compelling conclusion. It could very well be that theory will evolve in a way that warrants such an exploration. For now, we have three experiments (N = 75,
50, 75) that all failed to find any evidence for the proposed distinction but have met the conventional threshold of showing either a significant difference from zero effect or the optimal effect.
This seems at least sufficient to say the main proposal has been meaningfully discredited.
It is also worth noting that no condition in any experiment passed the strictest pre-registered version of any test for any Bayesian effect. However, this is likely best understood as an
issue with the pre-registered exclusions. The exclusion criterion can be shown formally to be biased against such findings in Experiments 1 and 3 because any average shift in response placement (i.e.
the index of the Bayesian effect) decreases the resulting circular correlation. A more neutral exclusion criterion, simply requiring the median error to still place the response within 45° of the
target, led to significant findings in both experiments. Overall, it seems much more reasonable to conclude that these exclusion criteria are superior than to suspect that participants are not
capable of these applications of Bayesian reasoning.
As a methodological point, it should be noted that the circular spatial method used here has some practical drawbacks. The exclusion criteria need to be set very carefully. There are many
participants who will fail to understand the task even when they are shown the correct answer after every single trial. There is a consistent bias to respond nearer to the line, requiring a baseline
condition. This means that other methods are likely preferable when possible.
The results here point away from egocentric-allowing vs allocentric-requiring spatial tasks as an important predictor of Bayesian vs non-Bayesian reasoning. Further research will need to continue
positing and testing various explanations for why some psychological tasks return evidence of Bayesian reasoning while others do not.
The Categorical Language
Explaining Cation language in terms of Category Theory
Cation is a categorical language, meaning that there is a strong equivalence between category theory and language constructions. This article dives into the details of these equivalences, which are made a core part of the language design.
In Cation language everything – data types, functions, expressions, values, literals – corresponds to categories, functors and objects. Since categories can be seen as objects in a higher-order
category, all of them need just two main language constructions: types and values, introduced and explained in this section.
Types and Values
The most fundamental concept of the Cation language is the type. Each type is a category in itself; conversely, a category in the Cation language is named a type.
This is a full and sufficient definition of what a type is; everything else about types in Cation, and Cation itself, is just a consequence of this definition. For instance, saying that Cation is a language of operations on types defines Cation as a categorical language.
It is important to note that all Cation types are small categories and can be represented as finite sets inhabited by their values. Hence, a value in Cation is an object in one of the type categories.
As all small categories can be seen as objects in the category of all small categories $\mathbf{Cat}$, all Cation types, as categories, may also be seen as objects in the category of all types of the Cation language – $\mathbf{Cation}$. As a category, $\mathbf{Cation}$ is a small category with an infinite number of members.
Expressions, functions, programs
A Cation expression maps a type $A$ (named the argument type) to a type $R$ (named the return type; it can be the same as the argument type or a different type). Thus, an expression in Cation is always a functor $F: A \rightarrow R$.
Evaluation of an expression selects a value ($a$) from the argument type ($A$) and maps it to another value ($r$) in the return type ($R$). Thus, it is an application of the functor ($F$) of the
expression to an object ($a$):
$$\mathsf{eval}\ F\ a \doteqdot a \xrightarrow{F} r$$
The signature of an expression $F: A \rightarrow R$ is a type by itself: this is a function type. It can also be defined via the evaluation, as shown above, applied to all objects of the argument type:
$$F\ a = r \in R\ |\ \forall a \in A$$
In Cation, each function is a named expression – and each expression is a function (which may be anonymous). All expressions and functions are
• functors,
• morphisms in $\mathbf{Cation}$,
• objects in $\mathrm{hom}(\mathbf{Cation})$
A Cation program is a composition of expressions, which is an expression itself (as a composition of functors is always a functor). Execution of a program corresponds to an evaluation of this expression: an application of the composed functor to a program argument, giving a result.
Data types
Data types are another example of Cation types; as such, data types are first-class language citizens alongside functions and expressions.
There are only two foundational data types, named unit and uninhabited. The unit type is a terminal object in the $\mathbf{Cation}$ category and is written as (,). The uninhabited type is an initial object
in the $\mathbf{Cation}$ category and is written as (|). All other data types are derived from these two types via ... yes, expressions!
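As a loose illustration (plain Python, not Cation syntax), the two foundational types can be mimicked as a type with exactly one value and a type that can never be constructed:

```python
class Unit:
    """The unit type: exactly one value. Written (,) in Cation."""
    _instance = None

    def __new__(cls):
        # Every construction yields the same single value.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

class Uninhabited:
    """The uninhabited type: no values exist. Written (|) in Cation."""
    def __new__(cls):
        raise TypeError("the uninhabited type has no values")

unit = Unit()
assert Unit() is unit  # all "values" of Unit are one and the same object
```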
A data type is defined as an expression composing other types. As in any algebraic data type system, there are two forms of composition: product and co-product.
Product data types are bi-functors from the categories representing types $L$ and $R$ to a new category $P$, where the set of objects of $P$ is the cartesian product of the sets of objects of $L$ and $R$: $P \doteqdot L \times R$.
This also defines two projections, functors from the type $P$ back to the original types $L$ and $R$. Cation expresses a product with the comma operator (,), and a new type may be defined in the following way:
```
data Prod: A, B

-- using the new Prod type
val new := (valueA, valueB)#Prod
val projA := new.a
val projB := new.b

-- testing that the result is the same as the original values:
projA =? valueA && projB =? valueB !! mustBeEqual
```
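The Cation product above can be mirrored in Python with a named tuple, whose fields play the role of the two projections (the names `Prod`, `value_a`, and `value_b` are illustrative):

```python
from typing import NamedTuple

class Prod(NamedTuple):
    """Product type P = A x B; the fields act as the two projection functors."""
    a: int
    b: str

value_a, value_b = 7, "seven"
new = Prod(value_a, value_b)   # pairing, like (valueA, valueB)#Prod

# The projections recover the original components, as the Cation test asserts.
proj_a = new.a
proj_b = new.b
assert proj_a == value_a and proj_b == value_b
```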
Co-product data types are bi-functors corresponding to morphisms in $\mathbf{Cation^\mathrm{op}}$: they map the union of the sets of all possible values in $L$ and $R$ to a new category
$S$: $S \doteqdot L + R$. Cation expresses a co-product (union) with the pipe operator (|), and a new type may be defined in the following way:
```
data Either: A | B

-- using the new Either type
let either := a#A -- `a` is a value from the type `A`
either >|
    (a)#A |? print "I am A"
    (b)#B |? print "I am B"
```
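The Either co-product and the branch operator `>|` can be approximated in Python with a tagged union and type-based dispatch (a sketch; the class names mirror the Cation snippet):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class A:
    value: str

@dataclass
class B:
    value: int

# Co-product S = A + B: a value of Either is either a value from A or from B.
Either = Union[A, B]

def branch(either: Either) -> str:
    """Rough analogue of Cation's `>|` with two `|?` branches."""
    if isinstance(either, A):
        return "I am A"
    if isinstance(either, B):
        return "I am B"
    raise TypeError("value is not part of the co-product")

assert branch(A("a")) == "I am A"
assert branch(B(1)) == "I am B"
```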
Advanced topics
Since each data type constructor is a functor, and each functor takes exactly one argument, the multi-argument function type `type fn: A, B -> C` can be seen as a functor of a single argument of an
unnamed product type `(A, B)` – or, curried, as a composition of two unnamed functors `type fn: A -> (B -> C)` – similar to the Haskell language.
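The equivalence between these two readings can be shown directly (plain Python; the function names are illustrative):

```python
def add_pair(pair: tuple) -> int:
    """Reading 1: a single argument of the unnamed product type (A, B)."""
    a, b = pair
    return a + b

def add_curried(a: int):
    """Reading 2: a functor returning a functor, fn: A -> (B -> C)."""
    return lambda b: a + b

# Both readings compute the same result from the same components.
assert add_pair((2, 3)) == add_curried(2)(3) == 5
```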
Lambda expressions
Now it is time to introduce the Cation language constructs built using the last component of category theory: natural transformations.
Expert topics
Here we show how all language constructs can be interpreted as natural transformations, such that their composability properties can be used for termination analysis and for parallel computations without
creating race conditions.
Naturality conditions
Termination analysis
Rho expressions