content
stringlengths
86
994k
meta
stringlengths
288
619
Superconductivity near a nematic quantum critical point: Interplay between hot and lukewarm regions We present a strong-coupling dynamical theory of the superconducting transition in a metal near a quantum-critical point toward Q=0 nematic order. We use a fermion-boson model, in which we treat the ratio of effective boson-fermion coupling and the Fermi energy as a small parameter λ. We solve, both analytically and numerically, the linearized Eliashberg equation. Our solution takes into account both strong fluctuations at small momentum transfers ∼λkF and weaker fluctuations at large momentum transfers. The strong fluctuations determine Tc, which is of order λ2EF for both s- and d-wave pairing. The weaker fluctuations determine the angular structure of the superconducting order parameter F(θk) along the Fermi surface, separating between hot and lukewarm regions. In the hot regions F(θk) is largest and approximately constant. Beyond the hot region, whose width is θh∼λ1/3, F(θk) drops by a factor λ4/3. The s- and d-wave states are not degenerate but the relative difference (Tcs-Tcd)/Tcs∼λ2 is small. Dive into the research topics of 'Superconductivity near a nematic quantum critical point: Interplay between hot and lukewarm regions'. Together they form a unique fingerprint.
{"url":"https://cris.ariel.ac.il/en/publications/superconductivity-near-a-nematic-quantum-critical-point-interplay","timestamp":"2024-11-07T10:39:47Z","content_type":"text/html","content_length":"54136","record_id":"<urn:uuid:8dd3b2dd-b30a-4254-ba73-be952f535ab3>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00355.warc.gz"}
Mtx v's Sulpha Can anyone tell me what the difference is between the two? Mtx does seem to be more widely used, but from what I've read sulpha was previously more popular. Can anyone enlighten me? :? • MTX has a greater efficacy and is more likely to act on the disease from what I remember. I've been on both, neither did anything for me. MTX worked for a little while but not long enough, while sulph. did absolutely nothing but turn my pee yellow. Literature talks about MTX as the 'gold standard' of treatment. It's most likely to work with the least amount of side-effects. • Thanks Scattered. That answers my question. Lois x • Hi I have been on sulpha for 14 years now and it has been fantastic I definitely notice it if I am really silly and let my prescript run out. However, mtx made me sick as a dog and did absolutely nothing for my RA. I think the problem is that all drugs react with everyone differently. I know of people who are really ill with sulpha and yet take the top whack of mtx and it doesn't affect them. I think it is a case of trial and error and keeping a bucket handy! • I'm on neither at the moment but as anti-inflams aren't keeping things under control, am steeling myself for what the rheummy is going to suggest next! It's nice to know that there is an alternative if mtx doesn't agree with me! Lois x • Hi I am on both at the moment, i have found they worked intially but not helping as much at the moment. • Wibberley, just to let you know that I've now been on MTX for three weeks and so far I've had just one day feeling a bit icky. I was started on 7.5, now up to 10 and that goes up again before I see the Rheumy in December. So far, so good - haven't felt any benefits as yet, but didn't really expect to this early. I got my words mixed up a bit a couple of weeks ago and now the folic acid which goes with it is called 'frolic acid' - I wish I could! • Annie, Frolic acid sounds so much more fun! Glad to hear you've only suffered one icky day - must say that was another of my fears. I've 3 kids and an OH who sometimes works away....not to mention a menagerie of animals so can only handle a day or so of ickiness! Hiya Collywobble! Lois x • Hi I was on MTX for a year and it did nothing apart from reduce the tiredness so my rheumy asked me to stop and clear it from my system and then start sulpha. unfortunately when I went back I had a chesty cold and protein in my urine so she couldn't start me then as you have to be clear of infections. Next appointment is 24 Nov hoping she will start me then as I have been off DMARD since Sept 1 and arthur is having a field day especially in my hands. MTX seems to be the preferred first drug but as my Rheumy said everyone reacts differently to drugs and what works for one doesn't for another. Good luck Fay • Hi Fay, Good luck on the 24th - I hope you get the allclear to go ahead with sulpha. With the run-up to Christmas well and truly on, we need all our strength! That was yet another of my concerns - with 3 kids regularly bringing germs home from school, and with a compromised immune system from the meds will I catch every bug going around? I usually get 1 bad cold a year and am bad at coping with that! Do they give you anything to take instead of mtx when you've caught an infection or are you just expected to cope on your own? Sorry about all these questions. I actually have my rheummy appt this afternoon and no offence to him but I'm more interested in personal experiences than textbook stuff! Lois x • Hi Wibberley Hope your appointment went well. My GP's are wonderful, they are my first call when I go down with anything, which since I've started Mtx is quite regular. I never used to catch anything. If you have children you are in the firing line with any new bug, hopefully you won't have any problems. You should have been given loads of info on which ever drug they have given you and a number to ring if you have any queries. This site is wonderful for advice and support. Fay • hello, I take sulpha with hydroxy and am thankful to be able to say that I am in a remisson of my RA although only a remission due to meds ( I did ask how they knew I wasn't in a complete remission, they just said 'you're not') I have been taking the sulpha for about two / three years and added the hydroxy after a year or so which really helped with the tiredness. No definate side affects although i'm always wondering what i can blame on the meds - the list is endless! sarah x • i am on both.I started with Sulpha which was great for 3 months(during which time I had a medical for DLA so failed!)but then it wore off.Have been on it for 7 years ever since.The MTX was great and really helped with the R.A pain but had to increase it upto 22.5mg then had to add Humira as kept having flare-ups.I still rate the MTX apart from the nausea which, with 3 little kids can be a real nuisance.It can make me feel slow and uncomfortable and I dont want to do anything(but since having scrambled eggs before I take it has been much better ). Best wishes Never be bullied into silence. Never allow yourself to be made a victim. Accept no ones definition of your life Define yourself........ Harvey Fierstein • 21 Welcome • 11.9K Our Community
{"url":"https://community.versusarthritis.org/discussion/13096/mtx-vs-sulpha","timestamp":"2024-11-13T05:48:04Z","content_type":"text/html","content_length":"298822","record_id":"<urn:uuid:e3c66d10-ff3e-4d6c-857d-ef53c49a8afb>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00570.warc.gz"}
Numerical Differentiation Go to the first, previous, next, last section, table of contents. The functions described in this chapter compute numerical derivatives by finite differencing. An adaptive algorithm is used to find the best choice of finite difference and to estimate the error in the derivative. These functions are declared in the header file `gsl_diff.h' Function: int gsl_diff_central (const gsl_function *f, double x, double *result, double *abserr) Function: int gsl_diff_forward (const gsl_function *f, double x, double *result, double *abserr) This function computes the numerical derivative of the function f at the point x using an adaptive forward difference algorithm. The function is evaluated only at points greater than x and at x itself. The derivative is returned in result and an estimate of its absolute error is returned in abserr. This function should be used if f(x) has a singularity or is undefined for values less than x. Function: int gsl_diff_backward (const gsl_function *f, double x, double *result, double *abserr) This function computes the numerical derivative of the function f at the point x using an adaptive backward difference algorithm. The function is evaluated only at points less than x and at x itself. The derivative is returned in result and an estimate of its absolute error is returned in abserr. This function should be used if f(x) has a singularity or is undefined for values greater than x. The following code estimates the derivative of the function f(x) = x^{3/2} at x=2 and at x=0. The function f(x) is undefined for x<0 so the derivative at x=0 is computed using gsl_diff_forward. #include <stdio.h> #include <gsl/gsl_math.h> #include <gsl/gsl_diff.h> double f (double x, void * params) return pow (x, 1.5); main (void) gsl_function F; double result, abserr; F.function = &f; F.params = 0; printf("f(x) = x^(3/2)\n"); gsl_diff_central (&F, 2.0, &result, &abserr); printf("x = 2.0\n"); printf("f'(x) = %.10f +/- %.5f\n", result, abserr); printf("exact = %.10f\n\n", 1.5 * sqrt(2.0)); gsl_diff_forward (&F, 0.0, &result, &abserr); printf("x = 0.0\n"); printf("f'(x) = %.10f +/- %.5f\n", result, abserr); printf("exact = %.10f\n", 0.0); return 0; Here is the output of the program, $ ./demo f(x) = x^(3/2) x = 2.0 f'(x) = 2.1213203435 +/- 0.01490 exact = 2.1213203436 x = 0.0 f'(x) = 0.0012172897 +/- 0.05028 exact = 0.0000000000 The algorithms used by these functions are described in the following book, • S.D. Conte and Carl de Boor, Elementary Numerical Analysis: An Algorithmic Approach, McGraw-Hill, 1972. Go to the first, previous, next, last section, table of contents.
{"url":"http://doc.gnu-darwin.org/gsl-ref-html/gsl-ref_27.html","timestamp":"2024-11-04T04:17:57Z","content_type":"text/html","content_length":"7629","record_id":"<urn:uuid:fcda485f-b00d-4743-90d6-97db20fb6be2>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00470.warc.gz"}
Digital Roots Digital roots are based on the concept of numerical congruence formulated by Carl Friedrich Gauss. The largest digit in the decimal number system is 9, and so the sum of the digits of any number are always congruent modulo 9 to that number. This property can be used to show whether any number is divisible by 9. Obtaining the digital root is simply the ancient process of "casting out nines". For example, the number 3,128 has a digital root of 5 because its digits sum to 14, and the digits of 14 add to 5. It is the same as the remainder obtained when dividing the number by 9 (unless the number is divisible by 9, in which case the digital root is 9 instead of 0). One way of quickly checking whether a sum involving large numbers is correct is to take the digital roots of the numbers, add them, reduce the answer to a digital root, and then see if it corresponds to the digital root of the answer. If they don't match, the answer is wrong. If they do match, the probability is fairly high that the answer is correct. A similar idea (except in binary instead of decimal) is used in computers for parity checking. A knowledge of digital roots often enables shortcuts in solving otherwise difficult problems. Example: Find the smallest natural number that is composed entirely of 1's and 0's and is evenly divisible by 225. Answer: Since the digits in 225 have a digital root of 9, you know at once that the answer must also have a digital root of 9. The smallest such number is 111,111,111. The problem is to increase 111,111,111 by the smallest amount that will make it divisible by 225. Since 225 is a multiple of 25, the number we seek must also be a multiple of 25, and so the number must end in 00, 25, 50, or 75. Since the last three pairs can't be used, we attach 00 to 111,111,111 to obtain the answer: 11,111,111,100. Here is another math puzzle that makes use of digital roots. I found the book The Second Scientific American Book of Mathematical Puzzles and Diversionsbibliography).
{"url":"https://mathlair.allfunandgames.ca/digroots.php","timestamp":"2024-11-12T17:22:51Z","content_type":"text/html","content_length":"4703","record_id":"<urn:uuid:bf17de7c-f378-4807-84de-9c96c125fd40>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00340.warc.gz"}
The complexity and performance of WFA and band doubling This note explores the complexity and performance of band doubling (Edlib) and WFA under varying cost models. Edlib (Šošić and Šikić 2017) uses band doubling and runs in \(O(ns)\) time, for sequence length \(n\) and edit distance \(s\) between the two sequences. WFA (Marco-Sola et al. 2020) uses the diagonal transition method and runs in expected \(O(s^2+n)\) time. Complexity analysis Complexity of edit distance Let’s go a little bit into how these complexities arise in the case of edit distance: Edlib / band doubling: \(O(ns)\) Let’s say the band doubling ends at cost \(s\leq s’< 2s\). At that point, \(2s’+1\) diagonals have been processed^1, and the total cost of trying all smaller \(s’\) is an additional \(2s’\) diagonals, for a total of \(4s’+1 \leq 8s+1 = O(s)\) diagonals. Each diagonal contains \(n\) (or slightly fewer) states, so that the overall complexity is \(O(ns)\). WFA / diagonal transition: \(O(s^2+n)\) At the time WFA ends, it has visited exactly \(2s+1\) diagonals. On each diagonal, there are up to \(s+1\) farthest reaching states, one for each distance \(1\) to \(s\). This makes for a total of \(O(s^2)\) f.r. states. Note that I’m ignoring the extended states here. Myers (1986) shows that that’s OK. Complexity of affine cost alignment Now, let’s think what happens when we change the cost model from edit distance to some affine cost scheme. This means that the alignment score \(s\) will go up. Let \(e\) be the least (in case of multiple affine layers) cost to extend a gap. Now, we get this analysis: Edlib / band doubling: \(O(ns/e)\) The number of diagonals visited changes to \(2\cdot s/e+1\), since any diagonal more than \(s/e\) away from the main diagonal has cost \(>e\cdot s/e=s\) to reach. The number of states on each diagonal is still \(n\), so the overall complexity is \(O(ns/e)\). WFA / diagonal transition: \(O(s^2/e+n)\) Like with Edlib, the number of visited diagonals reduces to \(s/e\). The number of furthest reaching states remains^2 \(s\). Thus, the overall complexity is \(O(s^2/e+n)\). Now let’s think what this means for the relative complexity of Edlib \(O(ns/e)\) and WFA \(O(s^2/e)\): it’s the difference between \(n\) and \(s\). This means that as soon as the alignment cost \(s\) gets close to \(n\), Edlib should start to outperform WFA. In practice, this should mean that WFA/diagonal transition is relatively better for unit costs and small affine costs, whereas edlib is better when affine costs are large. Or equivalently, the maximum error rate at which WFA is faster than edlib should go down as the affine penalties increase. Implementation efficiency WFA (WFA2-lib, that is) supports both edit distance and affine costs. Single layer affine costs has to compute \(3\) instead of \(1\) layers, so will be a bit slower.^3 Edlib only implements edit distance. It does this extremely efficiently using Myers’ bit-vector algorithm. This allows for both bit-packing (storing multiple DP states in a single usize integer) and SIMD, which should give a \(10\times\) to \(100\times\) constant speedup.^4 However, Edlib does not implement affine scoring, which bring me to: Band doubling for affine scores was never implemented I am not aware of any library that (competitively, or at all really) implements band doubling for affine costs. KSW2 implements banded alignment for affine costs efficiently using the differences method of Suzuki and Kasahara (2018), which could be seen as the equivalent of Myers’ bit-packing for affine scores. However, as stated in the readme, KWS2 fails to implement band doubling. It seems that KSW2 is mostly used for heuristic alignment, where a band is chosen in other ways. This means that for exact alignment, it falls back to \(O(n^2)\). I did implement band doubling for affine costs as part of A*PA in a NW aligner that can handle arbitrary cost models, but for now this implementation is focused on the algorithm rather than efficiency, and it does not use any kind of SIMD and/or bit-packing. WFA vs band doubling for affine costs What we’ve learned so far: • WFA has better complexity when \(s \ll n\). • Edlib has better complexity when \(s \gg n\). • Edlib’s implementation is extremely efficient^5, so let’s change that to \(s\ll n/10\) or so.^6 • For affine costs, no efficient implementation of the \(O(ns)\) band doubling method exists. So now the question is, how would WFA compare to band doubling for affine scores? The WFA paper contains the following comparison. I’m interested in the synthetic data results for WFA, KSW2-Z2^7, and Figure 1: Table 2 from Marco-Sola et al. (2020) compares WFA performance to KWS2 and other aligners for affine costs with mismatch x=4, gap-open o=6 and gap-extend e=2. Errors are uniform. The bottom three aligners use edit distance, and the three above are approximate. For each n, the time shown is to align a total of 10M bp. First some remarks about the numbers here: Scaling with \(d\) □ Edlib should scale as \(O(ns) = O(nd)\). I have absolutely no clue why it’s only at most \(2\times\) slower for \(d=20\%\) compared to \(d=1\%\) instead of \(20\times\). Something feels off here. Maybe it’s only IO overhead that’s being measured? Or maybe Edlib uses an initial band that’s larger than the actual distance? □ KWS2’s runtime is \(O(n^2)\) and independent of \(d\). I suppose the variation for small \(n\) (2.4s vs 3.0s) is just measurement noise.^8 □ For WFA, for \(n=100k\) the scaling indeed seems to be roughly \(d^2\), with \(d=20\%\) being \(300\) (instead of \(400\)) times slower than \(d=1\%\). For smaller \(n\), the scaling seems to be less. For small \(n\) and/or \(d=1\%\) this is probably because of the constant \(+n\) overhead on top of the \(s^2\) term. Scaling with \(n\) □ Given Edlib’s \(O(ns)\) runtime, we expect it to become \(10\times\) slower per basepair when \(n\) goes times \(10\).^9 In practice, the only significant jump is from \(10K\) to \(100K\), but even that is only a factor \(5\) at most. Again this hints at some constant/linear overhead being measured, instead of the quadratic component of the algorithm itself. □ Like Edlib, KSW2’s runtime per basepair should go \(\times 10\) when \(n\) goes \(\times 10\). This indeed seems to be the case, within a \({\sim}20\%\) margin. □ Again, we expect WFA’s runtime per basepair to scale with \(n\). For \(n\geq 10k\), this indeed seems to be roughly the case. So now we can ask ourselves: how much would KSW2 improve if it supported band doubling? The complexity goes from \(O(n^2)\) to \(O(ns/e)\). For now let’s say \(e\) is constant.^10 So we should get a \(O(n/s)=O(1/d)\) speedup, where \(d\) is the relative error rate. If we look at the table and divide each value in the KSW2 row by \(1/d\in \{100, 20, 5\}\), WFA is still faster than KSW2 in some cases, but never by more much! n 100 100 100 1K 1K 1K 10K 10K 10K 100K 100K 100K alg d (%) 1 5 20 1 5 20 1 5 20 1 5 20 WFA 0.09 0.37 1.55 0.14 0.93 6.93 0.43 7.28 66.00 8.49 102.00 2542.00 KSW2\(\cdot d\) 0.024 0.14 0.61 0.164 0.82 3.32 1.88 9.42 37.8 21.4 106.8 427.8 After scaling, we can see that if KSW2 supported band doubling, it might be faster than WFA for many inputs and only slightly slower on those where it’s not, in particular at \(1\%\) low error rates. Of course I have ignored constants here: This is assuming that \(s/e\) roughly equals the error rate^11, and omitting the fact that band doubling can be up to \(2\) times slower than simply computing the states within the optimal band. Clearly the WFA implementation is much better than any other affine-cost aligner out there, but the benefit of diagonal transition over an efficient (bit-packed, SIMD) band doubling implementation is not so clear-cut to me. At \(1\%\) error rates WFA may indeed be faster, but for error rate \(5\%\) and up this may not be true. The WFA paper shows over \(100\times\) speedup compared to KSW2, but may only show a small constant speedup compared to band doubling. For unit-costs alignments, the evaluations for A*PA (Groot Koerkamp and Ivanov 2024) show that WFA is up to \(100\times\) faster than edlib for \(d=1\%\). Also note that these numbers are with relatively low affine costs. As they increase, I expect the benefit of WFA to get smaller. Future work Really, somebody^12 should patch KSW2 to support band doubling and rerun the WFA vs KSW2 vs Edlib comparison. I’d be curious to see results! Also, it would be nice to have some analysis on how affine alignment score scales with cost model paremeters. Groot Koerkamp, Ragnar, and Pesho Ivanov. 2024. “Exact Global Alignment Using A with Chaining Seed Heuristic and Match Pruning.” Edited by Tobias Marschall. 40 (3). Marco-Sola, Santiago, Juan Carlos Moure, Miquel Moreto, and Antonio Espinosa. 2020. “Fast gap-affine pairwise alignment using the wavefront algorithm.” 37 (4): 456–63. Myers, Gene. 1986. “An $O(ND)$ Difference Algorithm and Its Variations.” 1 (1–4): 251–66. Suzuki, Hajime, and Masahiro Kasahara. 2018. “Introducing Difference Recurrence Relations for Faster Semi-Global Alignment of Long Sequences.” BMC Bioinformatics 19 (S1). Šošić, Martin, and Mile Šikić. 2017. “Edlib: A c/c++ Library for Fast, Exact Sequence Alignment Using Edit Distance.” Edited by John Hancock. 33 (9): 1394–95.
{"url":"https://curiouscoding.nl/posts/wfa-edlib-perf/","timestamp":"2024-11-08T21:57:12Z","content_type":"text/html","content_length":"27659","record_id":"<urn:uuid:e8601929-f6b4-4c23-9b77-d0906a98427e>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00447.warc.gz"}
Be Economically Savvy: What the Heck Is Compound Interest? [Part IV] Welcome to Part IV of my Be Economically Savvy series. Originally, this was supposed to be a three-part series. However, over the last few weeks, the words compound interest have kept popping up. From folks retiring and trying to figure out what to do with their money, to parents trying to save and explain why they can’t buy a new car just yet, I can’t seem to get away from the question “What the heck is compound interest?” Bottom line – Compound interest is when you earn interest on interest. The longer you allow your money to grow, the more money you will have at the end of your savings period. Put the money you’ve saved in an interest-bearing account. Then, let your money work for you. Watch my video Compound Interest Explained: A Visual Explanation Without Math to learn more. >>TYWANQUILA: Hello, everyone. If you've been following my Order Your Life Blog, you know I write about finances and becoming economically savvy. One topic that keeps coming up is compound interest. Instead of writing a blog about it, I've decided to create a video explaining compound interest. Keep in mind that this is a visual explanation. There's no math. No formulas. I know some of you like that. Instead, there are a lot of beautiful visuals so you can see how your money will grow in an interest-bearing account. Before we get started, we need to define four terms. These terms are principal, interest, simple interest, and compound interest. Here are the definitions. Principal is the money you originally put into your account. That's your starting point. Interest is the money that a bank, or other financial institution, pays you for leaving your money in their establishment. Yes, people will pay you to leave your money with them. We're going to discuss two types of interest. Simple interest and compound interest. Simple interest is the money you earn on your principal. In contrast, compound interest is the money you earn on your principal and on your prior interest. Compound interest is when you earn interest on interest. It's called compound interest because the interest builds upon itself. I know this sounds complicated. So let's look at an example. In this example, you'll be able to see the difference between simple interest and compound interest. In this example, let's say you put $1,000 into your account at the beginning of the year. This is your principal. $1,000 is the money you start with, and we're going to call it principal. The annual interest rate is 5%. The bank will add interest to your account every month. Therefore, interest is added monthly. And finally, for this example, you won't add or take away any money for 12 months. You're leaving your principal untouched for 12 months. As a result, 12 months is your timeline. Let's take a look at our graph and see what happens. Here is our graph. On the top of the graph, we have our legend. No interest is in yellow. Simple interest is in purple. And compound interest is in green. On the y-axis we have the amount of money you've earned. Our timeline is on the x-axis. In this case, our timeline is in months. All right. Now that we understand the graph, let's make some money. For no interest, you have $1,000 at the beginning of the year and $1,000 at the end of the year. Nothing's changed. With simple interest you have $1,000 at the beginning of the year and $1,050 at the end of the year. Congratulations! You've earned $50. With compound interest, you have $1,000 at the beginning of the year and $1,051.16 at the end of the year. You earned $51.16. That's $1,000 with no interest, $1,050 with simple interest, and $1,051.16 with compound interest. The difference between your earnings with no interest and simple interest is $50. The difference between your earnings with compound interest and your earnings with simple interest is $1.16. You have earned a little more money with compound interest. Now I know $1.16 doesn't sound like much, but there is a way for you to earn more money with compound interest. It's time for a little Q&A for the curious. So what will happen if you increase your timeline? The short answer is you earn more money. Let's see what happens when you allow your money to grow for a longer period of time. Example 2. So in this example, we have increased our timeline to 30 years. That's 360 months. As before, we'll begin with a principal of $1,000 and we will leave that money untouched for 30 years. The annual interest rate is 5%. Interest is added monthly. And, as I mentioned, our new timeline – our increased timeline – is 30 years or 360 months. Let's go to our graph. With no interest, let's see how we do. With no interest, you began with a thousand dollars and at the end of thirty years you still have a thousand dollars. Nothing changed. With simple interest, you have $1,000 at the beginning. Let's go back. There we go. There's simple interest. With simple interest you have $1,000 at the beginning and $2,500 after 360 months. Let's see what we have with compound interest. You have $1,000 at the beginning and $4,467.74 at the end of your timeline. That's not bad for someone who started with a thousand dollars. Let's recap. So no interest. You start with $1,000. You end with $1,000. That's an earning of zero. With simple interest, you start with $1,000 and you end up with $2,500. You've earned $1,500. And finally, with compound interest, you end up with $4,467.74. Congratulations! You have earned $3,467.74. Here compound interest is clearly the winner. As you can see, the longer your timeline, the more money you earn. Well this is great. But is there anything else that you can do to earn more interest? What happens if you increase your principal? Well again the short answer is you earn more money. In our final example, we are going to see what happens when you increase your timeline and your principal. Your principal has increased to $10,000. Again we're leaving that $10,000 and we are not going to touch it until our timeline ends. Annual interest rate is still 5%. Interest added is still monthly and our timeline is still 360 months or 30 years. Let's see what happens. Well no interest. You guessed it. You start with 10,000. You end with 10,000. As usual, there is no change. With simple interest, you begin with $10,000 and you end up with $25,000. Finally, with compound interest, you began with the same $10,000 and after 30 years you have $44,677.44. No interest. Start with ten thousand. End with ten thousand. Zero earnings. For simple interest, you end up with $25,000 and that means you've earned $15,000. And finally, with compound interest, our clear winner here, you can see that the more money you put in at the beginning, the more you end up with at the end. And in the case of compound interest, you end up with $44,677.44. Meaning you've earned $34,677.44 over the course of your timeline. Imagine how much you can earn if your principal is even higher or if your timeline is longer. That is the power of compound interest. You let your money work for you and all you have to do is wait. Here are today's take-home messages. Firstly, compared to no interest and simple interest, you earn more money with compound interest. And you can clearly see that with no interest, it's not beneficial to you at all. So either choose simple interest or, if you can, if you have that option, compound interest. Put your money in an interest-bearing account that earns you compound interest. To earn even more money, there are three things that you can personally do. You can increase your timeline. You can increase your principal. And, ideally, you can increase your timeline and your principal. I made this video specifically for people who want to see compound interest, but they're not necessarily interested in the math behind the magic. So if you're one of those people who really wants to see the math, feel free to contact me. We can talk about formulas. We can talk about compound interest. We can talk about simple interest and I'd be happy to talk to you. I want to thank you all for watching my visual explanation of compound interest. And if you have questions, feel free to email me or contact me on social media. You can also go to OrderYourLife.com to get the latest financial tips and economically savvy advice. Thank you! Have you read the other parts of my Be Economically Savvy series? Click the links below to see what you’ve missed. Part I: Save Money on the Things You Already Buy Part II: How Much Money Can I Make Using Apps? Part III: Saving, the Dirty Word Nobody Wants to Hear Part IV: What the Heck is Compound Interest? Follow Order Your Life on Facebook and Twitter.
{"url":"https://orderyourlife.com/blogs/blog/be-economically-savvy-what-the-heck-is-compound-interest-part-iv","timestamp":"2024-11-14T11:12:45Z","content_type":"text/html","content_length":"111554","record_id":"<urn:uuid:05153dd0-3c98-49e3-9a61-ad647d190b21>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00698.warc.gz"}
Representation and Control of Infinite Dimensional Systems by Alain Bensoussan By Alain Bensoussan "This e-book is a such a lot great addition to the literature of this box, the place it serves the necessity for a contemporary therapy on issues that in simple terms very lately have discovered a passable solution.... Many readers will enjoy the concise exposition." "Presents, or refers to, the latest and up-to-date leads to the sector. therefore, it may function a superb asset to someone pursuing a study profession within the field." —Mathematical reports (reviews of Volumes I and II of the 1st Edition) The quadratic price optimum keep an eye on challenge for platforms defined via linear usual differential equations occupies a significant position within the learn of keep an eye on structures either from a theoretical and layout standpoint. The research of this challenge over an enormous time horizon indicates the attractive interaction among optimality and the qualitative homes of structures equivalent to controllability, observability, stabilizability, and detectability. This thought is way more challenging for limitless dimensional structures equivalent to people with time delays and disbursed parameter systems. This reorganized, revised, and multiplied version of a two-volume set is a self-contained account of quadratic fee optimum regulate for a wide type of limitless dimensional structures. The booklet is established into 5 components. Part I stories simple optimum regulate and online game idea of finite dimensional platforms, which serves as an creation to the e-book. Part II bargains with time evolution of a few prevalent managed countless dimensional platforms and incorporates a particularly entire account of semigroup conception. It contains interpolation idea and shows the position of semigroup thought in hold up differential and partial differential equations. Part III reports the ordinary qualitative homes of managed platforms. Parts IV and V research the optimum keep watch over of platforms whilst functionality is measured through a quadratic price. Boundary regulate of parabolic and hyperbolic structures and special controllability also are covered. New fabric and unique good points of the second one Edition: * Part I on finite dimensional managed dynamical platforms comprises new fabric: an multiplied bankruptcy at the regulate of linear structures together with a glimpse into H^-infinity thought and dissipative platforms, and a brand new bankruptcy on linear quadratic two-person zero-sum differential games. * a different bankruptcy on semigroup idea and interpolation of linear operators brings jointly complex innovations and strategies which are frequently taken care of independently. * the fabric on hold up platforms and structural operators isn't really to be had in different places in publication form. Control of countless dimensional structures has a variety and transforming into variety of demanding purposes. This e-book is a key reference for an individual engaged on those functions, which come up from new phenomenological reports, new technological advancements, and extra stringent layout necessities. will probably be important for mathematicians, graduate scholars, and engineers drawn to the sector and within the underlying conceptual principles of structures and control. Read Online or Download Representation and Control of Infinite Dimensional Systems PDF Best linear programming books Linear Programming and its Applications Within the pages of this article readers will locate not anything lower than a unified remedy of linear programming. with no sacrificing mathematical rigor, the most emphasis of the ebook is on versions and functions. an important periods of difficulties are surveyed and offered by way of mathematical formulations, via answer equipment and a dialogue of numerous "what-if" situations. Methods of Mathematical Economics: Linear and Nonlinear Programming, Fixed-Point Theorems (Classics in Applied Mathematics, 37) This article makes an attempt to survey the middle topics in optimization and mathematical economics: linear and nonlinear programming, isolating airplane theorems, fixed-point theorems, and a few in their applications. This textual content covers basically topics good: linear programming and fixed-point theorems. The sections on linear programming are headquartered round deriving equipment in response to the simplex set of rules in addition to a few of the average LP difficulties, comparable to community flows and transportation challenge. I by no means had time to learn the part at the fixed-point theorems, yet i believe it can end up to be worthy to investigate economists who paintings in microeconomic idea. This part provides 4 diversified proofs of Brouwer fixed-point theorem, an evidence of Kakutani's Fixed-Point Theorem, and concludes with an explanation of Nash's Theorem for n-person video games. Unfortunately, crucial math instruments in use by way of economists this present day, nonlinear programming and comparative statics, are slightly pointed out. this article has precisely one 15-page bankruptcy on nonlinear programming. This bankruptcy derives the Kuhn-Tucker stipulations yet says not anything concerning the moment order stipulations or comparative statics results. Most most likely, the unusual choice and assurance of subject matters (linear programming takes greater than 1/2 the textual content) easily displays the truth that the unique variation got here out in 1980 and likewise that the writer is absolutely an utilized mathematician, now not an economist. this article is worthy a glance if you want to appreciate fixed-point theorems or how the simplex set of rules works and its purposes. glance in different places for nonlinear programming or newer advancements in linear programming. Planning and Scheduling in Manufacturing and Services This ebook specializes in making plans and scheduling purposes. making plans and scheduling are sorts of decision-making that play a massive function in such a lot production and providers industries. The making plans and scheduling features in a firm in most cases use analytical strategies and heuristic the right way to allocate its restricted assets to the actions that experience to be performed. Optimization with PDE Constraints This e-book provides a contemporary advent of pde restricted optimization. It offers an actual useful analytic therapy through optimality stipulations and a state of the art, non-smooth algorithmical framework. additionally, new structure-exploiting discrete recommendations and massive scale, essentially proper purposes are awarded. Additional resources for Representation and Control of Infinite Dimensional Systems Example text The state–output system is said to be asymptotically output stable if ∞ |CeAt x|2 dt < ∞. ∀x ∈ Rn , 0 2 Controllability, observability, stabilizability, and detectability 27 Now it is easy to see that if (A, C) is an observable pair and A is asymptotically stable, then the equation A∗ P + P A = −C ∗ C has a solution and ∞ P = ∗ eA t C ∗ CeAt dt > 0 (positive definite), 0 the positive-definiteness being a consequence of observability. Conversely if the state–output system is asymptotically output stable and if (A, C) is an observable pair, then A is asymptotically stable. 2. Let G(s) = C(sI − A)−1 B with A stable. Then G and only if there exists an X = X ∗ ≥ 0 that satisfies ∞ < 1 if XA + A∗ X + C ∗ C + XBB ∗ X = 0 with A + BB ∗ X stable. Furthermore, the state feedback u(t) = BB ∗ Xx(t) solves 1 ∞ sup (|y(t)|2 − |u(t)|2 ) dt. u 2 0 For a proof of the above lemma see J. C. Willems [1]. 1 is now presented. 3) with A + (LL∗ − BB ∗ )X stable. 3) yields X(A − BB ∗ X) + (A − BB ∗ X)∗ X + (C − DB ∗ X)∗ (C − DB ∗ X) + XLL∗X = 0. 4) Now assumptions (A2) to (A3) imply that the pair (A − BB ∗ X, C − DB ∗ X) is observable. The following are true: (i) R(A) is dense in K ⇐⇒ N (A∗ ) = {0}. (ii) N (A) = {0} ⇐⇒ R(A∗ ) is dense in H. (iii) R(A) is dense in K ⇐⇒ AA∗ : K → K satisfies AA∗ > 0. (iv) N (A) = {0} ⇐⇒ A∗ A : H → H satisfies A∗ A > 0. (v) A ∈ L(H; K) is invertible ⇐⇒ R(A) = K, N (A) = {0} ⇐⇒ ∃c > 0, such that h ≤ c Ah , ∀h ∈ H, ∗ ⇐⇒ ∃c > 0, such that k ≤ c AA k , ∀k ∈ K. 1. Much of the above extends to operators A that are densely defined and closed to spaces H and K, which are Banach spaces. For proofs of these facts, see, for example, M. Rated of 5 – based on votes
{"url":"http://en.magomechaya.com/index.php/epub/representation-and-control-of-infinite-dimensional-systems","timestamp":"2024-11-12T19:52:13Z","content_type":"text/html","content_length":"33709","record_id":"<urn:uuid:4580a734-78fb-4e3d-87b2-152c7fa79806>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00775.warc.gz"}
‘Dark matter’, second waves and epidemiological modelling Recent reports using conventional Susceptible, Exposed, Infected and Removed models suggest that the next wave of the COVID-19 pandemic in the UK could overwhelm health services, with fatalities exceeding the first wave. We used Bayesian model comparison to revisit these conclusions, allowing for heterogeneity of exposure, susceptibility and transmission. We used dynamic causal modelling to estimate the evidence for alternative models of daily cases and deaths from the USA, the UK, Brazil, Italy, France, Spain, Mexico, Belgium, Germany and Canada over the period 25 January 2020 to 15 June 2020. These data were used to estimate the proportions of people (i) not exposed to the virus, (ii) not susceptible to infection when exposed and (iii) not infectious when susceptible to infection. Bayesian model comparison furnished overwhelming evidence for heterogeneity of exposure, susceptibility and transmission. Furthermore, both lockdown and the build-up of population immunity contributed to viral transmission in all but one country. Small variations in heterogeneity were sufficient to explain large differences in mortality rates. The best model of UK data predicts a second surge of fatalities will be much less than the first peak. The size of the second wave depends sensitively on the loss of immunity and the efficacy of Find-Test-Trace-Isolate-Support programmes. In summary, accounting for heterogeneity of exposure, susceptibility and transmission suggests that the next wave of the SARS-CoV-2 pandemic will be much smaller than conventional models predict, with less economic and health disruption. This heterogeneity means that seroprevalence underestimates effective herd immunity and, crucially, the potential of public health programmes. • epidemiology • mathematical modelling This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by Statistics from Altmetric.com Request Permissions If you wish to reuse any or all of this article please use the link below which will take you to the Copyright Clearance Center’s RightsLink service. You will be able to get a quick price and instant permission to reuse the content in many different ways. Summary box • Hundreds of modelling papers have been published recently, offering predictions and projections of the current coronavirus outbreak; these range from peer-reviewed publications to rapid reports from learned societies. • Many, if not most, of these modelling initiatives commit to a particular kind of epidemiological model that precludes heterogeneity in viral exposure, susceptibility and transmission. • The ensuing projections can be fantastic in terms of fatalities and ensuing public health responses. • This study revisits the evidence for conventional epidemiological modelling assumptions using dynamic causal modelling and Bayesian model comparison. • It provides overwhelming evidence for heterogeneity, and the interaction between lockdown and herd immunity in suppressing viral transmission. • Heterogeneity of this sort means that low seroprevalence (<20%) is consistent with levels of population immunity that play a substantive role in attenuating viral transmission and, crucially, facilitating public health measures. The UK has suffered one of the highest death rates in the world from SARS-CoV-2. Three recent analyses project an even larger second wave of infections—with the UK facing an overwhelmed health service and death rates far higher than the first wave, unless a series of national lockdowns are enforced.1 2 Okell et al (ibid) suggest that ‘the epidemic is still at a relatively early stage and that a large proportion of the population therefore remain susceptible’. The Academy of Medical Science report (Government Publications: research and analysis: COVID-19: preparing for a challenging winter 2020/21, 7 July 2020 (Paper prepared by the Academy of Medical Sciences) https://acmedsci.ac.uk/file-download/51353957) suggested a peak in hospital admissions and deaths in January/February 2021 with estimates of 119900 (95% CI 24500 to 251000) hospital deaths between September 2020 and June 2021—double the number that occurred during the first wave in the spring of 2020. Davies et al 1 project a median unmitigated burden of 23 million (95%CI 13 to 30million) clinical cases and 350000 deaths (95%CI 170000 to 480000) due to COVID-19 in the UK by December 2021, with only national lockdowns capable of bringing the reproductive ratio near or below one. These kinds of projections have profound consequences for the national economy and the resulting health impacts of recession and unemployment. This article challenges these projections and, in particular, the underlying assumptions that the risk of infection is homogeneous within the population. The role of pre-existing immunity, host genetics and overdispersion in nuancing viral transmission—and explaining the course of the pandemic in light of unlocking—calls for a more careful quantitative analysis.3–6 The role of heterogeneity in exposure, susceptibility and transmission is receiving more attention, especially in relation to the build-up of herd immunity.7 8 This article illustrates a formal approach to epidemiological modelling that may help resolve some prescient issues. The pessimistic projections above consider two principal mechanisms that underlie the mitigation—and possible suppression—of the ongoing coronavirus epidemic: (i) a reduction in viral transmission due to lockdown and social distancing measures, and (ii) a build-up of population or herd immunity. Herd immunity can be read as the population immunity that is required to attenuate community transmission. For example, Okell et al (ibid) review three lines of argument and conclude that herd immunity is unlikely to explain differences in mortality rates across countries, thereby placing a strategic emphasis on lockdown to preclude a rebound of infections. This is in contrast with a herd immunity scenario, whereby immunity in the population will reduce transmission to pre-empt a second wave.5 8–10 We use their analyses as a vehicle to question the validity of projections based on conventional (Susceptible, Exposed, Infected and Removed (SEIR)) modelling assumptions. In particular, we deconstruct their arguments to show that the empirical observations they draw on are consistent with herd immunity. Furthermore, public health responses and herd immunity are not mutually exclusive explanations for mortality rates, they both contribute to the epidemiological process and contextualise each other in potentially important ways. In turn, this has implications for the timing of interventions such as lockdown and Find-Test-Trace-Isolate-Support (FTTIS). More generally, we question the commitment to conventional epidemiological models that have not been subject to proper model comparison. Dynamic causal modelling Dynamic causal modelling (DCM) is the application of variational Bayes to estimate the parameters of state-space models and, crucially, the evidence for alternative models of the same data.11 It was developed to model interactions among neuronal populations and has been used subsequently in radar, medical nosology and recently epidemiology.11–15 Variational Bayes is also known as approximate Bayesian inference and is computationally more efficient than Bayesian techniques based on sampling procedures (eg, approximate Bayesian computation), which predominate in epidemiological modelling. 16–18 The particular dynamic causal model used here embeds an SEIR model of immune status into a model that includes all latent factors generating data; namely, location, infection symptom and testing status. Please see the foundational paper for structural details of the model used in this paper15 and the generic (variational Laplace) scheme used to estimate model parameters and evidence. DCM differs from conventional epidemiological modelling in that it uses mean field approximations and standard variational procedures to model the evolution of probability densities.16 This contrasts with epidemiological modelling that generally uses stochastic realisations of epidemiological dynamics to approximate probability densities with sample densities.17 19–21 One advantage of variational procedures is that they are orders of magnitude more efficient, enabling end-to-end model inversion or fitting within minutes (on a laptop) as opposed to hours or days (on a supercomputer).17 More importantly, variational procedures provide an efficient way of assessing the quality of one model relative to another, in terms of model evidence (a.k.a., marginal likelihood).22 This enables one to compare different models using Bayesian model comparison (a.k.a. structure learning) and use the best model for nowcasting, forecasting or, indeed, test competing hypotheses about viral transmission. More generally, Bayesian model comparison plays a central role in testing hypotheses given (often sparse or noisy) data. It eschews intuitive assumptions, about whether there are sufficient data to test this or that, by evaluating the evidence for competing hypotheses or models. If there is sufficient information in the data to disambiguate between two hypotheses, the difference in log-evidence will enable one to confidently assert one model is more likely than the other. Note that this automatically ensures that the model is identifiable, in relation to the model parameters or prior assumptions in question. Dynamic causal models can be extended to generate any kind of epidemiological data at hand: for example, the number of positive antigen tests. This requires careful consideration of how positive tests are generated, by modelling latent variables such as the bias towards testing people with or without infection or, indeed, the time-dependent capacity for testing. In short, everything that matters—in terms of the latent (hidden) causes of the data—can be installed in the model, including lockdown, self-isolation and other processes that underwrite viral transmission. Model comparison can then be used to assess whether the effect of a latent cause is needed to explain the data—by withdrawing the effect and seeing if model evidence increases or decreases. Here, we leverage the efficiency of DCM to evaluate the evidence for a series of models that are distinguished by heterogeneity or variability in the way that populations respond to an epidemic. The dynamic causal model used for the analyses below is summarised in terms of its structure (figure 1) and parameters (table 1). Parameters and priors The prior expectations in table 1 should be read as the effective rates and time constants as they manifest in a real-world setting.23–28 The incubation period refers to the time constant corresponding to the rate at which one becomes symptomatic if infected—it does not refer to the period one is infected prior to developing symptoms. For example, early evidence indicates that by 14 days, approximately 95% of presymptomatic periods will be over.29 30 The priors for the non-susceptible and non-infectious proportion of the population are based on clinical and serological studies reported over the past few weeks.31 32 Please see the code base for a detailed explanation of the role of these parameters in transition probabilities among states. Although the (scale) parameters are implemented as probabilities or rates, they are estimated as log scale parameters, denoted by . Note that the priors are over log scale parameters and are therefore mildly informative. For example, a prior variance of 1/256 corresponds to a prior SD of 1/16. This means that the parameter in question can, a priori, vary by a factor of about 30%. Parameters with a variance of one can be regarded as essentially free parameters (that can vary over several orders of magnitude), for example, the effective population size, which is roughly the size of a large city. The default priors used for the current analyses are also listed in spm_COVID_priors.m (https://www.fil.ion.ucl.ac.uk/spm/covid-19/) and can be optimised using Bayesian model comparison (by comparing the evidence with models that have greater or lesser shrinkage priors).15 Notice that this model is more nuanced than most conventional epidemiological models. For example, immunity and testing are separate factors. This means that we have not simply added an observation model to an SEIR-like model; rather, testing now becomes a latent factor that can influence other factors (eg, the location factor via social distancing). Furthermore, there is a difference between the latent testing state and the reported number of new cases—that depends on sensitivity and specificity, via thresholds used for reporting.33 Separating the infection and symptom factors allows the model to accommodate asymptomatic infection34: to move from an asymptomatic to a symptomatic state depends on whether one is infected but moving from an infected to an infectious state does not depend on whether one is symptomatic. Furthermore, it allows for viral transmission prior to symptom onset.35 36 This particular dynamic causal model accommodates heterogeneity at three levels that can have a substantive effect on epidemiological trajectories. These effects are variously described in terms of overdispersion, super-spreading and amplification events.4 6 37 In the current model, heterogeneity was modelled in terms of three bipartitions (summarised in figure 2). Heterogeneity in exposure This was modelled in terms of an effective population size that is less than the total (census) population. The effective population comprises individuals who are in contact with other infected individuals. The remainder of the population are assumed to be geographically sequestered from a regional outbreak or are shielded from it. For example, if the population of the UK was 68 million, and the effective population was 39 million, then only 57% are considered to participate in the outbreak. Of this effective population, a certain proportion are susceptible to infection. Heterogeneity in susceptibility This was modelled in terms of a portion of the effective population that are not susceptible to infection. For example, they may have pre-existing immunity via cross-reactivity38–40 or particular host factors41 42 such as mucosal immunity.43 This non-susceptible proportion is assigned to the seronegative state at the start of the outbreak. Of the remaining susceptible people, a certain proportion can transmit the virus to others. Heterogeneity in transmission We modelled heterogeneity in transmission with a free parameter (with a prior of one half and a prior SD of 1/16). This parameter corresponds to the proportion of susceptible people who are unlikely to transmit the virus, that is, individuals who move directly from a state of being infected to a seronegative state (as opposed to moving to a seropositive state after a period of being infectious). We associated this transition with a mild infection44 that does not entail seroconversion, for example, recovery in terms of T-cell-mediated responses.39 41 In short, the seronegative state plays the role of a seropositive state of immunity for people who never become infectious, either because they are not susceptible to infection or have a mild infection (with or without symptoms, eg, Modelling heterogeneity of susceptibility in terms of susceptible and non-susceptible individuals can be read as modelling the difference between old (susceptible) and young (non-susceptible) people. 24 However, in contrast to models with age-stratification, the current model does not consider different contact rates between different strata (eg, contact matrices). Instead, we finesse this limitation with location-specific contact rates—parameterised by the number of people one is exposed to in different locations. Although full stratification is straightforward to implement (please see DEM_COVID_I.m for a Matlab demonstration that can be read as pseudocode), this simplified model of heterogeneity is sufficient to make definitive inferences about the joint contribution of lockdown and population immunity to viral transmission. The prior means of non-susceptible and non-infectious people of 50% was chosen on the basis of secondary attack rates in well-defined cohorts.34 45 A prior variance of 1/256 corresponds to a range of between 42% and 58% (99% CI). Note that these priors are just constraints (that are also used as starting estimates). The posterior estimate can be markedly different from the prior if the data are sufficiently informative. This kind of model is sufficiently expressive to reconcile the apparent disparity between morbidity/mortality rates and low seroprevalence observed empirically31 (https://www.gov.uk/government/ publications/national-covid-19-surveillance-reports/sero-surveillance-of-covid-19). We will see below that Bayesian model comparison suggests there is very strong evidence46 for all three types of Given a suitable dynamic causal model one can use standard variational techniques to fit the empirical data and estimate model parameters. Having estimated the requisite model parameters, one can then reconstitute the most likely trajectories of latent states: namely, the probability of being in different locations, states of infection, symptom and testing states. An example is provided in figure 3 using daily cases and death-by-date data from the UK from 30 January 2020 to 1 September 2020. Patients and public involvement Patients or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research. Having briefly established the form and nature of the quantitative modelling, we now apply it to daily reports of new cases and deaths from several countries. Our focus is not on the detailed structure of the model. Rather, we use the model to illustrate how Bayesian model comparison can be used to test some assumptions that underwrite conventional models. In this setting, we frame the results in the form of a commentary and restrict the analysis to data available at the time the above reports were published (ie, from 25 January 2020 to 15 June 2021). Heterogeneity in exposure, susceptibility and transmission In what follows, we use DCM to revisit some assumptions implicit in conventional epidemiological modelling. We follow the three lines of arguments rehearsed in Okell et al (ibid). The first can be summarised as: under herd immunity the cumulative mortality rate per million of the population should plateau at roughly the same level in different countries. This is true if, and only if, the same proportion of the population can transmit the virus. In other words, a plateau to endemic equilibrium—based on the removal of susceptible people from the population—only requires people are susceptible to infection to be immune. If this proportion depends on the composition of a country’s population (ie, demography), mortality rates could differ from country to country. This can be illustrated by using models with heterogeneous population structures, of the sort summarised in figure 2. Figure 4 shows the data and ensuing predictions for 10 countries, using the format of Okell et al.2 These countries were chosen because, at time of writing, they had a well-defined first peak in conjunction with a high fatality rate, in relation to other Heterogeneity of transmission may be particularly important here.3 This is sometimes framed in terms of overdispersion or the notion of superspreading and amplification events.4 6 37 For example, if only 20% of the population were able to develop a sufficient viral load to infect others, then protective immunity in this subpopulation would be sufficient for an innocuous endemic equilibrium.47 Furthermore, if seroconversion occurs largely in the subpopulation spreading the virus,48 a sufficient herd immunity may only require a seroprevalence of around 10% of the effective population (right panel of figure 4). In short, one might challenge the assumption that COVID-19 is spread homogeneously across the population. Indeed, heterogeneity is becoming increasingly evident in high-risk settings, and in the variation in the period of infectivity across ages. This begs the question: is there evidence for heterogeneity in the dispersion of SARS-CoV-2? And, if so, does this heterogeneity vary from country to country? Figure 5 answers this question using Bayesian model comparison. It shows—under the models in question—there is overwhelming evidence for heterogeneity of exposure, susceptibility and transmission. And that a substantial proportion of each country’s population does not contribute to viral transmission. These proportions vary from country to country, leading to the differential mortality rates as shown in figure 4. In general, the effective population is roughly half of the census population, with some variation over countries. The non-susceptible and non-infectious proportions are roughly half of the effective and susceptible populations, respectively—varying between 45% and 65%. This variation underwrites the differences in fatality rates in figure 4. It could be argued that the estimates of the proportion of non-susceptible individuals is at odds with empirical data from contained outbreaks. For example, on the aircraft carrier Charles de Gaulle, about 70% of sailors were infected (https:// en.wikipedia.org/wiki/COVID-19_pandemic_on_Charles_de_Gaulle). However, this argument overlooks the contribution of heterogeneity: there were no children on the Charles de Gaulle. Okell et al (ibid) wrote: ‘If acquisition of herd immunity was responsible for the drop in incidence in all countries, then disease exposure, susceptibility, or severity would need to be extremely different between populations’. On the basis of the above quantitative modelling, this assumption transpires to be wrong: small variations in heterogeneity of exposure and susceptibility are sufficient to explain differences between countries. Our point here is that predicates or assumptions of this sort can be evaluated quantitatively in terms of model evidence. Do countries that went into lockdown early experience fewer deaths in subsequent weeks? This is the second argument made by Okell et al (ibid) for the unique role of lockdown in mitigating fatalities: however, exactly the same correlation—between cumulative deaths before and after lockdown—emerges under epidemiological models that entertain heterogeneity and herd immunity (see figure 4 (middle panel)). In short, had the authors tested the hypothesis that lockdown or herd immunity were necessary to explain the data, they would have found very strong evidence for both—and may have concluded that lockdown nuances the emergence of herd immunity (see figure 5). Does a correlation between antibodies to SARS-CoV-2 (ie, seroprevalence) and COVID-19 mortality rates imply a similar infection fatality ratio over countries? Conventional models generally assume this is the case.2 24 The problem with this assumption is that it precludes pre-existing immunity and loss of immunity (as in SEIR models).38 Population immunity could fall over a few months due to population flux and host factors, such as loss of neutralising antibodies.48 This is important because it means that seroprevalence could fall slowly after the first wave of infection (indeed, empirical seroprevalence is not increasing and may be decreasing in the UK (https://www.ons.gov.uk/peoplepopulationandcommunity/healthandsocialcare/ conditionsanddiseases/bulletins/coronaviruscovid19infectionsurveypilot/18june2020%23antibody-data). In turn, this produces a non-linear relationship between the prevalence of antibodies and cumulative deaths at the time seroprevalence is assessed. This is illustrated by the curvilinear relationships in the right panel of figure 4 (under a loss of seroprevalence with a time constant of 3months). If one associates infection fatality ratio (IFR) with the slope of fatality rates—as a function of seroprevalence—then the IFR changes over time. In short, the IFR changes as the epidemic progresses, as the proportion of susceptible and transmitting people falls, or those at highest risk succumb early in the pandemic. This begs the question: are quantities such as IFR fit for purpose when trying to model epidemiological dynamics? What is the impact of different rates of loss of immunity? So, what are the implications of heterogeneity for seroprevalence and a second wave? The Bayesian model comparisons in figure 5 speak to a mechanistic role for herd immunity in mitigating a rebound of infections. Note that this model predicts seroprevalences that are consistent with empirical community studies, without ever seeing these serological data (eg, in the UK if 11% of the effective population is seropositive and the effective population is 49% of the census population, we would expect 5.4% of people to have antibodies, which was the case at the time of analysis (https:// Predictive validity of this sort generally increases with model evidence. This follows from the fact that the log-evidence is accuracy minus complexity. In other words, models with the greatest evidence afford an accurate account of the data at hand, in the simplest way possible. Unlike the Akaike and widely used Bayesian information criteria (AIC and BIC), the variational bounds on log-evidence used in DCM evaluate complexity explicitly.22 Models with the greatest evidence have the greatest predictive validity because they do not overfit the data. An example of the posterior predictions afforded by the current model is provided in figures 3 and 6 that indicate the timing and amplitude of a second wave in the UK. Figure 6 focuses on fatality rates under a couple of different scenarios; namely, under a rapid loss of antibody-mediated immunity and under an accelerated FTTIS programme. These posterior predictions suggest that there may be a mild surge of fatalities over the autumn, peaking at about 100 per day. This second wave could be eliminated completely with an increase in the efficacy of contact tracing (FTTIS)—modelled as the probability of self-isolating, given one is infected but asymptomatic. It can be seen that even with a relatively low efficacy of 25%, elimination is possible by November, with convergence to zero fatality rates. Please see Friston et al 49 for a more comprehensive analysis. Note that in a month or two, death rates should disambiguate between these scenarios. The CIs in figure 6 may appear rather tight. This reflects two issues. First, the well-known overconfidence problem with variational inference—discussed in Friston et al ^15 and MacKay.^50 Second, these posterior predictive densities are based on the entire timeseries, under a dynamic causal model that constrains the functional form of the trajectories. Put simply, this means that uncertainty about the future can be reduced substantially by data from the past. Our reading of the epidemiological modelling literature suggests a systemic failure to formally evaluate the evidence for alternative models (eg, models with age stratification and heterogeneous contact structure). This may reflect the fact that agent-based, stochastic transmission models are notoriously difficult to evaluate in terms of their evidence.16 18 In contrast, the variational approaches used in DCM34–39 furnish a variational bound on model evidence that allows competing models to be assessed quickly and efficiently. The central role of model comparison is established in many disciplines and is currently attracting attention in epidemiology.51 Although model evaluation using the AIC (or widely used BIC) can be found in the epidemiological literature,52–54 this kind of comparison does not constitute proper model comparison. This is because the complexity part of model evidence is not estimated by the AIC (or widely used BIC). The model complexity corresponds to the df used to explain the data (technically, the KL divergence between the posterior and prior). The AIC and BIC approximate complexity with (functions of) the number of free parameters, irrespective of whether these parameters are used or not. This means that the AIC is not fit for purpose when comparing models in a clinical or epidemiological setting. Please see Penny22 for illustrations of the failure of the AIC (and BIC). In short, it appears that most of the predictions underwriting ‘scientific advice’ to governmental agencies are based on epidemiological models that have not been properly compared with alternative models. If there is no rebound in fatality rates in the next few months, the conclusions in Okell et al, Davies et al and the Academy of Medical Sciences report (ibid) will be put under some pressure. This pressure might license a more (model) evidence-based approach, using the requisite variational methods that predominate in statistical physics, machine learning and (dynamic) causal modelling. The recurrent theme above is the danger of committing to one particular model or conception of the epidemiological process. In other fields—dealing with population dynamics—Bayesian model comparison is used to identify the best structure and parameterisation of models28–30 known as structure learning.31 32 Figure 5 offers an example of Bayesian model comparison in epidemiology, evincing very strong evidence for heterogeneity in responses to viral infection—and a synergistic role for social distancing and herd immunity. Identifying the right epidemiological model has considerable public health and economic implications. While SARS-CoV-2 may not be eradicated, model selection suggests that any second wave will be much smaller than other models have projected, and the virus will become endemic rather than epidemic. The size of a second wave may depend sensitively on the efficacy of FTTIS programmes and the rate of loss of immunity. Recent evidence suggests T-cell immunity may be more important for long-term immunity with circulating SARS-CoV-2-specific CD8^+ andCD4^+ T cells identified in 70% and 100% of convalescent patients with COVID-19, respectively.39 Furthermore, 90% of people who seroconvert make detectible neutralising antibody responses that are stable for at least 3months.55 If the above dynamic causal model is broadly correct, future national lockdowns may be unnecessary. As an endemic and potentially fatal virus, especially in elderly people and those with underlying conditions, attention to the details of FTTIS and shielding becomes all the more important. This emphasises the need for clear criteria for when and how to implement local lockdowns in ‘hotspot’ In summary, lockdown and social distancing have undoubtedly restricted the transmission of the virus. Model comparison suggests that these approaches remain an essential component of pandemic control, particularly at current levels of infections in the UK. However, extending the notion of ‘herd immunity’—to include seronegative individuals with lower susceptibility and/or lower risk of transmission—engenders an immune subpopulation that can change over time and country. The implicit immunity may reduce mortality and lower the risk of a second wave to a greater extent than predicted under many epidemiological models. On this view, herd immunity subsumes people who are not susceptible to infection or, if they are, are unlikely to be infectious or seroconvert; noting that SARS-CoV-2 can induce virus-specific T-cell responses without seroconversion. This reconciles the apparent disparity between reports of new cases, mortality rates and the low seroprevalence observed empirically. Crucially, Bayesian model comparison confirms that there is very strong evidence for the heterogeneity that underwrites this kind of herd immunity.8 Put simply, an effective herd immunity—that works hand-in-hand with appropriate public health and local lockdown measures—requires <20% seroprevalence. This seroprevalence has already reached in many countries and is sufficient to preclude a traumatic second wave, even under pessimistic assumptions about loss of humoral immunity endowed by antibodies. Glossary of terms Dynamic causal modelling: the application of variational Bayes to estimate the unknown parameters of state-space models and assess the evidence for alternative models of the same data. See http://www.scholarpedia.org/article/Dynamic_causal_modeling. Variational Bayes (a.k.a., approximate Bayesian inference): a generic Bayesian procedure for fitting and evaluating generative models of data by optimising a variational bound on model evidence. See Model evidence (a.k.a., marginal likelihood): the probability of observing some data under a particular model. It is called the marginal likelihood because its evaluation entails marginalising (ie, integrating) out dependencies on model parameters. Technically, model evidence is accuracy minus complexity, where accuracy is the expected log likelihood of some data and complexity is the divergence between posterior and prior densities over model parameters. Variational bound (a.k.a., variational free energy): known as an evidence lower bound (ELBO) in machine learning because it is always less than the logarithm of model evidence. In brief, variational free energy converts an intractable marginalisation problem—faced by sampling procedures—into a tractable optimisation problem. This optimisation furnishes the posterior density over model parameters and ensures the ELBO approximates the log evidence for a model: https://en.wikipedia.org/wiki/Variational_Bayesian_methods. Crucially, the variational bound includes an explicit estimate of model complexity, in contrast to the AIC and BIC.22 Bayesian model comparison: a procedure to compare different models of the same data in terms of model evidence. The marginal likelihood ratio of two models is known as a Bayes factor: https:// Agent-based simulation models: an alternative to equation-based models, usually used to simulate scenarios that are richer than models based on population dynamics. Agent-based models simulate lots of individuals to create a sample distribution over outcomes. Evaluating the marginal likelihood from the ensuing sample distributions is extremely difficult, even with state-of-the-art estimators such as the harmonic mean: see https://radfordneal.wordpress.com/2008/08/17/the-harmonic-mean-of-the-likelihood-worst-monte-carlo-method-ever/. • Handling editor Seye Abimbola • Contributors All authors contributed equally to the writing and revision of this work. The figures were prepared by KF. • Funding The Wellcome Centre for Human Neuroimaging is supported by core funding from the Wellcome Trust (203147/Z/16/Z). • Competing interests None declared. • Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research. • Patient consent for publication Not required. • Provenance and peer review Not commissioned; externally peer reviewed. • Data availability statement Data are available in a public, open access repository.
{"url":"https://gh.bmj.com/content/5/12/e003978","timestamp":"2024-11-12T15:11:27Z","content_type":"text/html","content_length":"297861","record_id":"<urn:uuid:50b57eb9-04c3-4754-8724-002d546090c0>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00044.warc.gz"}
We develop a theory of Grassmann semialgebra triples using Hasse-Schmidt derivations, which formally generalizes results such as the Cayley-Hamilton theorem in linear algebra, thereby providing a unified approach to classical linear algebra and tropical algebra. Bibliographical note Publisher Copyright: © 2020 by the authors. Received by the editors May 14, 2018, and, in revised form, May 13, 2020. 2020 Mathematics Subject Classification. Primary 15A75, 16Y60, 15A18; Secondary 12K10, 14T10. Key words and phrases. Cayley-Hamilton theorem, exterior semialgebras, Grassmann semial-gebras, Hasse-Schmidt derivations, differentials, eigenvalues, eigenvectors Laurent series, Newton’s formulas, power series, semifields, systems, semialgebras, tropical algebra, triples. The first author was partially supported by INDAM-GNSAGA and by PRIN ”Geometria sulle varietà algebriche” Progetto di Eccellenza Dipartimento di Scienze Matematiche, 2018–2022 no. E11G18000350001. The second author was supported in part by the Israel Science Foundation, grant No. 1207/12 and his visit to Torino was supported by the “Finanziamento Diffuso della Ricerca”, grant no. 53 RBA17GATLET, of Politecnico di Torino. Funders Funder number INDAM-GNSAGA E11G18000350001 Politecnico di Torino Israel Science Foundation 53 RBA17GATLET, 1207/12 • Cayley-Hamilton theorem • Grassmann semial-gebras • Hasse-Schmidt derivations • Newton’s formulas • differentials • eigenvalues • eigenvectors Laurent series • exterior semialgebras • power series • semialgebras • semifields • systems • triples • tropical algebra Dive into the research topics of 'GRASSMANN SEMIALGEBRAS AND THE CAYLEY-HAMILTON THEOREM'. Together they form a unique fingerprint.
{"url":"https://cris.biu.ac.il/en/publications/grassmann-semialgebras-and-the-cayley-hamilton-theorem","timestamp":"2024-11-07T20:42:32Z","content_type":"text/html","content_length":"54215","record_id":"<urn:uuid:dcd9caf8-d2d8-4fad-bf0c-563e883aa2d0>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00267.warc.gz"}
Proof by Contradiction - (Algebraic Combinatorics) - Vocab, Definition, Explanations | Fiveable Proof by Contradiction from class: Algebraic Combinatorics Proof by contradiction is a mathematical proof technique where you assume the opposite of what you want to prove is true, then show that this assumption leads to a logical inconsistency. This method is powerful because if assuming the negation of a statement results in a contradiction, it confirms that the original statement must be true. This approach often reveals truths in combinatorial settings by demonstrating impossibilities or contradictions in a clear and structured way. congrats on reading the definition of Proof by Contradiction. now let's actually learn it. 5 Must Know Facts For Your Next Test 1. Proof by contradiction is often used in combinatorial proofs to establish the validity of certain counting principles or properties. 2. The structure of proof by contradiction typically starts with an assumption that the statement you want to prove is false, leading to an eventual contradiction. 3. This technique can be particularly useful when dealing with existential statements, allowing you to show that no counterexample can exist. 4. In combinatorial contexts, proof by contradiction can highlight the limitations of certain configurations or arrangements. 5. The method is commonly associated with famous results, such as proving the irrationality of numbers like $$\sqrt{2}$$. Review Questions • How does proof by contradiction serve as a valuable tool in combinatorial proofs? □ Proof by contradiction is essential in combinatorial proofs as it allows mathematicians to demonstrate the impossibility of certain configurations. By assuming that a statement regarding counting or arrangement is false and arriving at a contradiction, one can solidify the truth of the original assertion. This method often clarifies why certain combinations or structures cannot exist, helping to establish foundational results in combinatorics. • Discuss how proof by contradiction differs from direct proof and when each method might be preferred. □ Proof by contradiction contrasts with direct proof in its approach to establishing truth. While direct proof builds a logical sequence from premises to conclusion, proof by contradiction starts with the opposite assumption and shows it leads to an inconsistency. Proof by contradiction might be preferred when direct methods are complex or unclear, particularly when dealing with existential statements or properties that seem inherently contradictory. • Evaluate the effectiveness of proof by contradiction in revealing truths within combinatorial mathematics compared to other proof techniques. □ Proof by contradiction effectively uncovers deep truths in combinatorial mathematics by exposing impossibilities that other techniques may overlook. Its unique approach allows for insights into why certain configurations cannot hold true, thus enriching our understanding of combinatorial structures. Compared to other methods, such as direct proofs or induction, proof by contradiction often provides a more straightforward route to revealing underlying principles and asserting the validity of complex statements. ยฉ 2024 Fiveable Inc. All rights reserved. APยฎ and SATยฎ are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.
{"url":"https://library.fiveable.me/key-terms/algebraic-combinatorics/proof-by-contradiction","timestamp":"2024-11-02T07:33:01Z","content_type":"text/html","content_length":"158829","record_id":"<urn:uuid:6f1c5111-21a2-4eac-81c2-ac2ae7a3ad8c>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00311.warc.gz"}
Some students think a voltage source will help release photoelectrons Quantum and Nuclear For example, they may think that if the energy supplied by the voltage source overtakes the work function of the metal, then photoelectrons can be released, even if the photon energy is less than the work function. You might consider asking students to draw a current-voltage graph for an electrode with a work function that is larger than the photon energy. Students would be expected to draw a zero current graph. However, if students hold this incorrect idea, they are likely to draw a graph for a typical photoelectric experiment, where the work function is less than the photon energy. They may also draw a graph with a positive stopping voltage instead of a zero current graph. Resources to Address This • Use this lesson outline to help plan and carry out a practical demonstration of the photoelectric effect. The outcome of the experiment are discussed along with an analogy leading to the idea of electrons requiring sufficient energy to escape a potential well and the corresponding equation. A worksheet providing questions about the photoelectric effect is also linked. View Resource • These videos cover a wide range of approaches to teaching area of quantum and nuclear physics. The “Photoelectric effect” video demonstrates the effect using an electroscope and then discusses the effect in detail using Lego to give an interesting visual approach. This can me used to explain the effect before moving on to the equation. After the video discuss why a voltage will not assist in the escape of photoelectrons. View Resource • Taslidere, E., () Development and use of a three-tier diagnostic test to assess high school students’ misconceptions about the photoelectric effect, Research in Science & Technological Education, 34 (2) 164–186. • Steinburg, R., Oberem, G. and McDermott, L., () Development of a computer-based tutorial on the photoelectric effect, American Journal of Physics, 64 (11)
{"url":"https://spark.iop.org/some-students-think-voltage-source-will-help-release-photoelectrons","timestamp":"2024-11-14T11:18:48Z","content_type":"text/html","content_length":"35732","record_id":"<urn:uuid:badc4ea9-98fc-4370-9cd5-d8577f12dbe8>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00862.warc.gz"}
Winning Nim Against a Player who Plays Randomly I recently wrote about my way of playing Nim against a player who doesn’t know how to play. If my move starts in an N-position, then I obviously win. If my move starts in a P-position, I would remove one token hoping that more tokens for my opponent means more opportunity for them to make a mistake. But which token to remove? Does it make a difference from which pile I choose? Consider the position (2,4,6). If I take one token, my opponent has 11 different moves. If I choose one token from the first or the last pile, my opponent needs to get to (1,4,5) not to lose. If I choose one token from the middle pile, my opponent needs to get to (1,3,2) not to lose. But the first possibility is better, because there are more tokens left, which gives me a better chance to have a longer game in case my opponent guesses correctly. That is the strategy I actually use: I take one token so that the only way for the opponent to win is to take one token too. This is a good heuristic idea, but to make such a strategy precise we need to know the probability distribution of the moves of my opponent. So let us assume that s/he picks a move uniformly at random. If there are n tokens in a N-position, then there are n − 1 possible moves. At least one of them goes to a P-position. That means my best chance to get on the winning track after the first move is not more than n/(n−1). If there are 2 or 3 heaps, then the best strategy is to go for the longest game. With this strategy my opponent always has exactly one move to get to a P-position, I win after the first turn with probability n/(n−1). I lose the game with probability 1/(n−1)!!. Something interesting happens if there are more than three heaps. In this case it is possible to have more than one winning move from a N-position. It is not obvious that I should play the longest game. Consider position (1,3,5,7). If I remove one token, then my opponent has three winning moves to a position with 14 tokens. On the other hand, if I remove 2 tokens from the second or the fourth pile, then my opponent has one good move, though to a position with only 12 tokens. What should I do? I leave it to my readers to calculate the optimal strategy against a random player starting from position (1,3,5,7). Recent Comments • tanyakh on Make 60 by Using the Same Number Thrice • David Reynolds on Make 60 by Using the Same Number Thrice • David Reynolds on Make 60 by Using the Same Number Thrice • Andreas on Make 60 by Using the Same Number Thrice
{"url":"https://blog.tanyakhovanova.com/2017/03/winning-nim-against-a-player-who-plays-randomly/","timestamp":"2024-11-10T13:49:56Z","content_type":"text/html","content_length":"60800","record_id":"<urn:uuid:e042efe3-bcc7-4bbc-88f6-c6dbfcf2e759>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00799.warc.gz"}
Neural Network Architecture for Efficient Deep Hedging - Preferred Networks Research & Development Neural Network Architecture for Efficient Deep Hedging This post is contributed by Shota Imaki, who was an intern and a part-time engineer at PFN. Japanese version is available here. I am Shota Imaki, a Ph.D. student majoring in theoretical physics. During my internship, we studied Deep Hedging, which had captured my interest for a long time, and I am writing this to summarize this work. Deep Hedging is a grand-breaking framework to hedge financial derivatives using deep learning. It has, however, been notorious for the difficulty of training. We proposed a “no-transaction band network” to achieve a 20x speedup of training. Deep Hedging can now learn to hedge and price exotic derivatives in seconds. To our great honor, we receive the Incentive Award of the Japanese Society of Artificial Intelligence (JSAI) for this research. We released minimal implementation of the work as well as a PyTorch-based library for Deep Hedging, PFHedge. Research Objective: Hedging Automation Our research aims to automate hedging associated with securities companies’ dealing in derivatives. Let us first describe what derivatives are, how hedging works, and how Deep Hedging automates it. Derivatives are securities derived from ordinary securities such as equities, bonds, and currencies. An equity derivative, for example, settles a payoff that depends on the trajectory of the equity price. Examples include the following options: • European call option: It pays off \(\max(S – K, 0)\) where \(S\) is the final price in a predetermined time horizon and \( K\) is a strike price. • Lookback call option: It pays off \(\max(M – K, 0)\) where \(M\) is the maximum price in a predetermined time horizon and \(K\) is a strike price. Derivatives enable pliant insurance and investment. Entities may use derivatives to insure against their uncertain cash flow or balance; Investors may flexibly make their position with derivatives. Derivatives are indispensable instruments for modern risk management and advanced investments. That is why derivatives have thrived for a long to develop their trillion-dollar global market. Securities companies sell derivatives, say an equity derivative, to counterparties and usually hedge the resulting short position. The value of the equity derivative fluctuates with some “sensitivity” to the underlying equity. Therefore, a securities company can offset the risk of the derivative by transacting the right amount of equity. That is how hedging works. In an idealized market without transaction cost or other frictions, one can precisely compute the optimal hedging strategy. It is optimal to perfectly offset the risk with incessant trades that cancel out the “sensitivity” calculated based on financial models. The real market, in contrast, has transaction costs and thereby makes hedging optimization much harder. Human traders need to manually adjust quantitative models based on their experiences to account for the incompleteness of the market. Hedging automation is thus a crucial task because manual hedging has essential drawbacks such as high labor costs and limited order volumes. Existing Work: Deep Hedging Deep Hedging (Buehler et al. 2018) is a deep learning-based framework to automate hedging. It represents a hedging strategy with a neural network and seeks the optimum by improving parameters therein. Deep Hedging expects to slash up to 80% of costs and attracts high hopes as a “game-changer” of the derivatives business. However, such optimization is easier said than done in practice. In quite a few cases, a neural network struggles to converge even after a considerable amount of training. It is a fatal problem for securities companies that have the mandate to undertake their customers’ orders quickly while accurately. That is why we proposed a novel neural network that facilitates fast training in Deep Hedging. Proposed network attains 20x speedup in comparison to the preceding neural network. Hedging Optimization Let us now formulate the hedging optimization. Note that the formulation below is abridged from that in the original paper. A securities company sells a derivative to its customer and hedges the associated risk by transacting the underlying asset. So transactions at time steps \(t = 0,\, \dots\, , T\) give rise to the following final profit. The overall payoff is determined by the profit from the derivative, transaction of the underlying asset, and transaction cost. \[P = -Z + \sum_{t = 0}^{T – 1} (\delta_t \Delta S_t – c |\Delta \delta_t| S_t)\] Here, \(Z\) is the terminal value of the derivative (which is a function of the stock price trajectory), \(\delta\) is a unit of stocks hold at each time step, \(S\) is the stock price, and \(c\) is the transaction cost rate. Notice that \(P\) is a random variable as the stock price is assumed so. The risk measure is the following loss function with \(u\) being a utility of the securities company. The optimal hedging strategy is the one minimizing this loss. \[\ell(\delta) = -\mathbf{E} [u(P)]\] In other words, the optimal hedge maximizes the expected utility. Because the utility is monotone and convex, one has to increase the mean of the profit while quenching deviation. Since frequent re-hedging suppresses risk but increases transaction cost, one should re-hedge in the apt interval. Deep Hedging The central idea of Deep Hedging is to represent a hedging strategy \(\delta\) by a neural network. The network proposed in the original paper inputs the market information and the current position. The output is the position at the next time step. The neural network can approximate the optimal hedge through training. That is, we may want to simulate the path of stock prices, let the neural network hedge against them, and then tweak parameters therein to reduce the loss. However, this optimization is easier said than done in practice. As shown in the learning history on top, it may not converge even after iterating 1,000 simulations. We conjectured that the difficulty is because the inputs depend on the current position. A neural network struggles until it observes many paths for which a neural network makes various outputs. Proposal: No-Transaction Band Network We proposed a “no-transaction band network”, a neural network that overcomes the difficulty of position-dependence. This architecture is as simple as follows. 1. A neural network inputs the market information and outputs a permissible band of the next position, \([b_{\text{l}}, b_{\text{u}}]\). 2. The next position is obtained by clamping the current position into the band. Here the \(\mathsf{clamp}\) function reads as follows. PyTorch implements it as \(\mathsf{clamp}\). \[\mathsf{clamp}(\delta_{t_i}, b_{\text{l}}, b_{\text{u}}) = \begin{cases} b_{\text{l}} & \text{if } \delta_{t_i} < b_{\text{l}} \\ \delta_{t_i} & \text{if } b_{\text{l}} \leq \delta_{t_i} \leq b_{\text{u}} \\ b_{\text{u}} & \text{if } \delta_{t_i} > b_{\text{u}} The no-transaction band network has two advantages. • Neural network’s inputs do not depend on the current position: This feature overcomes the difficulty of position-dependence to facilitate training. • Neural network encodes an efficient strategy: Strategy using a band is cost-effective because it never transacts inside the band. A Neural network encodes this wisdom as an “induction bias.” Let us remark on the second advantage before leaving this section. This strategy has long been studied as a “no-transaction band strategy” and proved to be optimal for European options and exponential utility. We theoretically proved that this strategy is optimal for a broader class of derivatives and utilities as well. This proof, along with the universal approximation theorem, guarantees that the no-transaction band network can represent the optimal hedging strategy. It is an indispensable guarantee for securities companies, which are mandated to offer the best price to their customers through the optimal hedge. Numerical Experiment We performed numerical simulations to demonstrate the efficiency of the no-transaction band. We compare the following hedging methods for European and lookback options. 1. Deep Hedging with the No-Transaction Band Network: The proposed method. 2. Deep Hedging with an Ordinary Feed-forward Network: The method proposed in the orignal paper of Deep Hedging. 3. Asymptotically Optimal Hedging Strategy: The optimal strategy for an infinitesimally small cost rate. It is the optimal strategy for an infinitesimally small cost rate. It has been found theoretically in a preceding work. As shown in the learning history below, the no-transaction band quickly learns to hedge. While an ordinary feed-forward network struggles even after 1,000 times of simulations, the no-transaction band reaches its minimum in around 50 times. Preferred Networks’ MN-2 supercomputer completes it in seconds, which we expect is sufficiently quick for practical applications. The figure below presents the expected utility attained by each method as a function of a cost rate. The no-transaction band achieves the highest utility for a wide range of costs. It is worth mentioning that the utility of the no-transaction band network bottoms at some value of cost. That is because the no-transaction band learns “not to hedge” when a transaction cost is too expensive. This advantage is distinctive from the other two. We computed derivative prices as well. Let us omit the pricing theory here and emphasize that a lower price is better for competitive brokerage businesses. The no-transaction band network attains prices up to several percent lower than the other methods. This discount is nothing but the value added to the security companies and their counterparties. Besides, tighter quotes provide liquidity to the derivative market. Let us remark why the no-transaction band network achieves lower prices than the “analytic” asymptotic solution. Our interpretation is that while the analytic solution approximates by truncating sub-leading orders for a cost rate, the no-transaction band takes account of the higher-order contribution. Also, the price computed by the no-transaction band is the most accurate since the definition of the price considered here is the minimum value among all possible hedging strategies. The no-transaction band quickly learns to hedge a lookback option as well. Surprisingly, the no-transaction band network achieves its optima as fast as the European option, even though the lookback option bears extra complications of path dependence and needs more inputs. This result suggests that our method would scale for more input features. We presented our study as “Neural Network Architecture for Efficient Deep Hedging” (in Japanese) and had the privilege to receive the Incentive Award of the Japanese Society of Artificial Intelligence. Also, we released a detailed paper, “No-Transaction Band Network: A Neural Network Architecture for Efficient Deep Hedging” (in English). We provide minimal implementation to experience the efficiency of the no-transaction-band network in a GitHub repository: pfnet-research/NoTransactionBandNetwork. We also released PFHedge, which is a PyTorch-based library for Deep Hedging. One can try out Deep Hedging with a code as simple as follows. We hope this library accelerates the research of Deep Hedging toward data-driven derivative business. from pfhedge.instruments import BrownianStock, EuropeanOption from pfhedge.nn import Hedger from pfhedge.nn import MultiLayerPerceptron derivative = EuropeanOption(BrownianStock(cost=1e-3)) hedger = Hedger(MultiLayerPerceptron(), inputs=["moneyness", "expiry_time", "prev_hedge"]) The no-transaction band network learns the optimal hedging strategy blazingly fast. It enables securities companies to meet their customers’ needs with quicker and tighter quotes while slashing operational costs. Also, a cost-efficient hedging strategy can handle high-volume orders and supply liquidity to the derivative market. While there are still practical challenges ahead of Deep Hedging, we are proud to have overcome one of the most challenging obstacles to lead technology-driven innovation in the financial industry. We proposed the no-transaction band network by extending traditional research in quantitative finance to overcome the inherent problems of deep learning. It was a valuable experience for me to learn that exceptional ideas are inspired by leveraging different perspectives. Collaborators’ generous mentoring and Preferred Networks’ abundant computing resources enabled these all achievements. Imos-san (Imajo-san) has proactively shared ingenious ideas to make the way out of challenging situations. Ito-san has shared a lot of expertise about quantitative finance, which was fresh for me. Minami-san has been so dependable when it comes to analytical approaches to the optimal control problem and vigorous and concise writing of the paper. Nakagawa-san at Nomura Asset Management has contributed both theoretical and practical issues in the financial market, which will be essential to look ahead of applications. Also, Preferred Networks’ supercomputer has been indispensable for research with a lot of trials. Let me conclude this article by expressing my great appreciation for Preferred Networks and Nomura Asset Management. • Shota Imaki, Kentaro Imajo, Katsuya Ito, Kentaro Minami and Kei Nakagawa, “No-Transaction Band Network: A Neural Network Architecture for Efficient Deep Hedging”. arXiv:2103.01775 [q-fin.CP], Available at SSRN: https://ssrn.com/abstract=3797564 • Shota Imaki, Kentaro Imajo, Katsuya Ito, Kentaro Minami, Kei Nakagawa, “Neural Network Architecture for Efficient Deep Hedging“, JSAI Special Interest Group of Financial Informatics 26th • Hans Bühler, Lukas Gonon, Josef Teichmann and Ben Wood, “Deep hedging”. Quantitative Finance, 2019, 19, 1271–1291. arXiv:1802.03042 [q-fin.CP].
{"url":"https://tech.preferred.jp/en/blog/neural-network-architecture-for-efficient-deep-hedging/","timestamp":"2024-11-10T03:34:43Z","content_type":"text/html","content_length":"48519","record_id":"<urn:uuid:97c5397e-2cf1-41b2-9442-e0d2631ab59f>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00606.warc.gz"}
Graphing an Inequality in Two Variables Learning Outcomes • Graph an inequality in two variables So how do you get from the algebraic form of an inequality, like [latex]y>3x+1[/latex], to a graph of that inequality? Plotting inequalities is fairly straightforward if you follow a couple steps. Graphing Inequalities To graph an inequality: • Graph the related boundary line. Replace the <, >, ≤ or ≥ sign in the inequality with = to find the equation of the boundary line. • Identify at least one ordered pair on either side of the boundary line and substitute those [latex](x,y)[/latex] values into the inequality. Shade the region that contains the ordered pairs that make the inequality a true statement. • If points on the boundary line are solutions, then use a solid line for drawing the boundary line. This will happen for ≤ or ≥ inequalities. • If points on the boundary line aren’t solutions, then use a dotted line for the boundary line. This will happen for < or > inequalities. Let’s graph the inequality [latex]x+4y\leq4[/latex]. To graph the boundary line, find at least two values that lie on the line [latex]x+4y=4[/latex]. You can use the x– and y-intercepts for this equation by substituting 0 in for x first and finding the value of y; then substitute 0 in for y and find x. [latex]x[/latex] [latex]y[/latex] [latex]0[/latex] [latex]1[/latex] [latex]4[/latex] [latex]0[/latex] Plot the points [latex](0,1)[/latex] and [latex](4,0)[/latex], and draw a line through these two points for the boundary line. The line is solid because ≤ means “less than or equal to,” so all ordered pairs along the line are included in the solution set. The next step is to find the region that contains the solutions. Is it above or below the boundary line? To identify the region where the inequality holds true, you can test a couple of ordered pairs, one on each side of the boundary line. If you substitute [latex](−1,3)[/latex] into [latex]x+4y\leq4[/latex]: This is a false statement, since [latex]11[/latex] is not less than or equal to [latex]4[/latex]. On the other hand, if you substitute [latex](2,0)[/latex] into [latex]x+4y\leq4[/latex]: This is true! The region that includes [latex](2,0)[/latex] should be shaded, as this is the region of solutions. And there you have it—the graph of the set of solutions for [latex]x+4y\leq4[/latex]. Graphing Linear Inequalities in Two Variables Graph the inequality [latex]2y>4x–6[/latex]. Show Solution A quick note about the problem above—notice that you can use the points [latex](0,−3)[/latex] and [latex](2,1)[/latex] to graph the boundary line, but that these points are not included in the region of solutions, since the region does not include the boundary line! Try It Make sure you click Try Another Version of This Question so that you can get lots of practice graphing each type of inequality! Below is a video about how to graph inequalities with two variables when the equation is in what is known as slope-intercept form. When inequalities are graphed on a coordinate plane, the solutions are located in a region of the coordinate plane, which is represented as a shaded area on the plane. The boundary line for the inequality is drawn as a solid line if the points on the line itself do satisfy the inequality, as in the cases of ≤ and ≥. It is drawn as a dashed line if the points on the line do not satisfy the inequality, as in the cases of < and >. You can tell which region to shade by testing some points in the inequality. Using a coordinate plane is especially helpful for visualizing the region of solutions for inequalities with two variables.
{"url":"https://courses.lumenlearning.com/wm-developmentalemporium/chapter/read-or-watch-graph-an-inequality-in-two-variables/","timestamp":"2024-11-05T10:13:21Z","content_type":"text/html","content_length":"55542","record_id":"<urn:uuid:fcfffc9c-6499-4d8b-bcbe-5ac6a9e20cf2>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00190.warc.gz"}
2,487 research outputs found Lattice simulations of light nuclei necessarily take place in finite volumes, thus affecting their infrared properties. These effects can be addressed in a model-independent manner using Effective Field Theories. We study the model case of three identical bosons (mass m) with resonant two-body interactions in a cubic box with periodic boundary conditions, which can also be generalized to the three-nucleon system in a straightforward manner. Our results allow for the removal of finite volume effects from lattice results as well as the determination of infinite volume scattering parameters from the volume dependence of the spectrum. We study the volume dependence of several states below the break-up threshold, spanning one order of magnitude in the binding energy in the infinite volume, for box side lengths L between the two-body scattering length a and L = 0.25a. For example, a state with a three-body energy of -3/(ma^2) in the infinite volume has been shifted to -10/(ma^2) at L = a. Special emphasis is put on the consequences of the breakdown of spherical symmetry and several ways to perturbatively treat the ensuing partial wave admixtures. We find their contributions to be on the sub-percent level compared to the strong volume dependence of the S-wave component. For shallow bound states, we find a transition to boson-diboson scattering behavior when decreasing the size of the finite volume.Comment: 21 pages, 4 figures, 2 table We report experiments on spatial switching dynamics and steady state structures of passive nonlinear semiconductor resonators of large Fresnel number. Extended patterns and switching front dynamics are observed and investigated. Evidence of localization of structures is given.Comment: 5 pages with 9 figure Bloch equations give a quantum description of the coupling between an atom and a driving electric force. In this article, we address the asymptotics of these equations for high frequency electric fields, in a weakly coupled regime. We prove the convergence towards rate equations (i.e. linear Boltzmann equations, describing the transitions between energy levels of the atom). We give an explicit form for the transition rates. This has already been performed in [BFCD03] in the case when the energy levels are fixed, and for different classes of electric fields: quasi or almost periodic, KBM, or with continuous spectrum. Here, we extend the study to the case when energy levels are possibly almost degenerate. However, we need to restrict to quasiperiodic forcings. The techniques used stem from manipulations on the density matrix and the averaging theory for ordinary differential equations. Possibly perturbed small divisor estimates play a key role in the analysis. In the case of a finite number of energy levels, we also precisely analyze the initial time-layer in the rate aquation, as well as the long-time convergence towards equilibrium. We give hints and counterexamples in the infinite dimensional case In this contribution we show that a suitably defined nonequilibrium entropy of an N-body isolated system is not a constant of the motion in general and its variation is bounded, the bounds determined by the thermodynamic entropy, i.e., the equilibrium entropy. We define the nonequilibrium entropy as a convex functional of the set of n-particle reduced distribution functions (n=0,......., N) generalizing the Gibbs fine-grained entropy formula. Additionally, as a consequence of our microscopic analysis we find that this nonequilibrium entropy behaves as a free entropic oscillator. In the approach to the equilibrium regime we find relaxation equations of the Fokker-Planck type, particularly for the one-particle distribution function The spatiotemporal dynamics of singly resonant optical parametric oscillators with external seeding displays hexagonal, roll, and honeycomb patterns, optical turbulence, rogue waves, and cavity solitons. We derive appropriate mean-field equations with a sinc2 nonlinearity and demonstrate that off-resonance seeding is necessary and responsible for the formation of complex spatial structures via self-organization. We compare this model with those derived close to the threshold of signal generation and find that back-conversion of signal and idler photons is responsible for multiple regions of spatiotemporal self-organization when increasing the power of the pump field Data from the German miners' cohort study were analysed to investigate whether radon in ambient air causes cancers other than lung cancer. The cohort includes 58 987 men who were employed for at least 6 months from 1946 to 1989 at the former Wismut uranium mining company in Eastern Germany. A total of 20 684 deaths were observed in the follow-up period from 1960 to 2003. The death rates for 24 individual cancer sites were compared with the age and calendar year-specific national death rates. Internal Poisson regression was used to estimate the excess relative risk (ERR) per unit of cumulative exposure to radon in working level months (WLM). The number of deaths observed (O) for extrapulmonary cancers combined was close to that expected (E) from national rates (n=3340, O/E=1.02; 95% confidence interval (CI): 0.98–1.05). Statistically significant increases in mortality were recorded for cancers of the stomach (O/E=1.15; 95% CI: 1.06–1.25) and liver (O/E=1.26; 95% CI: 1.07–1.48), whereas significant decreases were found for cancers of the tongue, mouth, salivary gland and pharynx combined (O/E=0.80; 95% CI: 0.65–0.97) and those of the bladder (O/E=0.82; 95% CI: 0.70–0.95). A statistically significant relationship with cumulative radon exposure was observed for all extrapulmonary cancers (ERR/WLM=0.014%; 95% CI: 0.006–0.023%). Most sites showed positive exposure–response relationships, but these were insignificant or became insignificant after adjustment for potential confounders such as arsenic or dust exposure. The present data provide some evidence of increased risk of extrapulmonary cancers associated with radon, but chance and confounding cannot be ruled out Using a silicon vertex detector, we measure the charged particle pseudorapidity distribution over the range 1.5 to 5.5 using data collected from PbarP collisions at root s = 630 GeV. With a data sample of 3 million events, we deduce a result with an overall normalization uncertainty of 5%, and typical bin to bin errors of a few percent. We compare our result to the measurement of UA5, and the distribution generated by the Lund Monte Carlo with default settings. This is only the second measurement at this level of precision, and only the second measurement for pseudorapidity greater than 3.Comment: 9 pages, 5 figures, LaTeX format. For ps file see http://hep1.physics.wayne.edu/harr/harr.html Submitted to Physics Letters In this paper we try to construct noncommutative Yang-Mills theory for generic Poisson manifolds. It turns out that the noncommutative differential calculus defined in an old work is exactly what we need. Using this calculus, we generalize results about the Seiberg-Witten map, the Dirac-Born-Infeld action, the matrix model and the open string quantization for constant B field to non-constant background with H=0.Comment: 21 pages, Latex file, references added, minor modificatio We analyse the vector bundle moduli arising from generic heterotic compactifications from the point of view of quiver representations. Phenomena such as stability walls, crossing between chambers of supersymmetry, splitting of non-Abelian bundles and dynamic generation of D-terms are succinctly encoded into finite quivers. By studying the Poincar\'e polynomial of the quiver moduli space using the Reineke formula, we can learn about such useful concepts as Donaldson-Thomas invariants, instanton transitions and supersymmetry breaking.Comment: 38 pages, 5 figures, 1 tabl Mirror Symmetry, Picard-Fuchs equations and instanton corrected Yukawa couplings are discussed within the framework of toric geometry. It allows to establish mirror symmetry of Calabi-Yau spaces for which the mirror manifold had been unavailable in previous constructions. Mirror maps and Yukawa couplings are explicitly given for several examples with two and three moduli.Comment: 59 pages. Some changes in the references, a few minor points have been clarifie
{"url":"https://core.ac.uk/search/?q=author%3A(Kreuzer%20L.%20B.)","timestamp":"2024-11-08T20:49:17Z","content_type":"text/html","content_length":"233520","record_id":"<urn:uuid:01e11d99-9a22-4197-95dd-6239f0fb704d>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00039.warc.gz"}
Present Value Of Money (Meaning + Examples) - WhiteboardCrypto If you won the lottery, what would you do? I’m not talking about buying jet skis or a new mansion; I mean would you take an immediate lump sum or choose to take a series of payments over an extended period of time? For most of us, this is just a fanciful daydream, but how you go about the process could have implications for real-life investment decisions. Let’s go back to the daydream. How do you go about deciding between the lump sum or the long-term series of payments? What factors go into your decision-making process? Sure, for some of you, it will be a choice of personal preference. Maybe you have some immediate ideas for spending that money or perhaps you realize having a stream of income can help with budgeting and make it easier to stretch that money over the rest of your life. After all, you’ve heard of MC Hammer and all the lottery winners and professional athletes who’ve gone bankrupt, and you’d like to avoid a similar fate by steering clear of the temptation to What if you’re just interested in the total amount of money? You have faith in your self-discipline and budgeting skills but you have no immediate need for a large sum of money—you just want to take whichever option will give you the most cash. Can’t you just add up the total of the series of payments and compare that to the immediate payment? No—if they offer you $20,000,000 today or $1,000,000 a year for 20 years, you might get the same amount but you’d objectively be better off taking the $20,000,000 today. Why is this? The reason is a concept referred to as the present value of money that says a dollar today is worth more than a dollar in the future. In this case, the immediate $20,000,000 is worth more than the same amount being paid in the future. Simply knowing this concept makes the decision in our hypothetical lottery easy since the amounts are the same. However, we can also use this principle to decide whether to take a lesser amount today or a greater amount to be paid at some future date. In this article, we will examine the principle of the present value of money, how to calculate the present value of money, and the benefits and limitations of these analyses. What is the Present Value of Money? The present value of money is the current worth of a future sum of money or streams of income given at a specified rate of return. In other words, it allows you to know how much the guarantee of future money is worth to you at this moment. In the lottery example, you know the $20,000,000 lump sum is worth $20,000,000—you can use the idea of the present value of money and the formula we’ll introduce shortly to calculate how much that $20,000,000 spread over 20 years is worth today. This allows an easy comparison of the two. Why Does Present Value Exist? Before getting into the math, let’s examine why money today is worth more than the same amount paid in the future. There are three reasons for this—opportunity cost, inflation, and uncertainty. The opportunity cost is the lost opportunity to invest the money and earn a rate of return. If you had the money today, you could invest and earn interest. That $20,000,000 lump sum could be put into an account that would earn compound interest or a U.S. Treasury bond that would pay interest. At the end of twenty years, you would have both the $20,000,000 principal and the interest it earned. By accepting smaller payments spread over time, you would be giving up the opportunity to immediately invest it and earn interest. This is referred to as the opportunity cost. Secondly, there is inflation. You’ve all heard the trope about how everything used to cost a nickel, right? What can you get for a nickel today? The same principle applies to large sums of money. That $1,000,000 you’ll receive in twenty years as the final payment in your series of payments will not buy as much as it would if you had it in your pocket (or bank account) today. Finally, there is uncertainty. The money in your bank account is, well, money in the bank. The series of payments is a promise to pay. A lot can happen in the course of a year, and a lot can also happen in the course of twenty years. You could die before ever receiving the money, or the entity running the lottery could go bankrupt. In either scenario, you’ll never receive the full amount. Taking the money right now guarantees that the payment actually comes to fruition. How do we know how much those future payments are worth in present terms? We use this formula: PV = FV/(1+r)^n PV = present value FV = future value r = rate of return n = number of periods So, to calculate the present value of the $1,000,000 lottery payment in a year, you’d simply plug in the numbers as such: PV = $1,000,000/(1+r)^1 and do the math. To find the total value of the series of payments, you’d simply do this for each payment and add the total. The calculation would look like this: PV = [$1,000,000/(1+r)^1] + [$1,000,000/(1+r)^2] … + [$1,000,000/(1+r)^20] If you don’t like doing the math yourself, the internet is once again here to help. There are a number of present-value calculators online where all you have to do is enter the numbers and let the calculator do the work. Determining the Discount Rate You may have noticed something missing from our calculations; specifically, what is r, or the rate of return? The rate of return that is applied to a present value calculation is known as the discount rate. It measures the opportunity cost of the rate of return foregone by agreeing to accept the money in the In other words, it’s the rate of return you could have gotten by taking the money today and investing it rather than waiting until a future date. Determining the rate of return is highly subjective since, until the money has been invested, you don’t know what that rate of return will be. For example, stocks can yield higher returns than bonds, but there is also an increased risk of underperformance or even losing money. This increased risk is often compensated by a higher rate of return, but what should you use in your calculation? Again, there is some subjectivity, as, depending on your options, your estimates of the rate of return you can earn will vary. However, a good rule of thumb often followed in these calculations is to use a risk-free rate of return, that is, the interest rate paid by an instrument such as a U.S. Treasury bond with an almost nonexistent risk of default. This is often known as a hurdle rate, since any riskier investment must pay a greater rate to justify the risk, and is often used in these calculations since it forms a good baseline of what can be achieved through investments. Additionally, if you are mostly concerned with inflation, the inflation rate can be used. This will tell you how much the payment you’ll receive in the future is worth in terms of today’s money. This will account for inflation, but not the opportunity cost. Benefits of Present Value Calculations Present value is an important consideration in a number of financial calculations such as pension obligations or bond yields. On a personal level, learning how to use present value calculations can help decide whether to accept offers such as cash rebates, 0% financing on a car, or pay points on a mortgage. It is also critical in investment decisions. Using present value can provide valuable insight into whether or not to make a certain investment, or in choosing one investment over another. Companies and individuals use it to determine whether an investment’s future value and rate of return are enough to make it worth pursuing. This ability to compare future dollar amounts to present dollar amounts makes the present value of money a key comparison tool during the investment process, and can also help clarify decisions such as taking a lesser sum now as opposed to a larger sum paid out in the future or over a period of time. This can help determine the value of potential future bonuses or in assessing the value of long-term contracts. Finally, understanding present value can shed light on the economic impact of the changing value of money during periods of high inflation. However, there are limitations to the usefulness of present value calculations. First, they involve assumptions about the discount rate, as we’ve already discussed. An overly optimistic or pessimistic assumption about the rate of return you can hope to earn can lead to poor It also allows those with a bias towards or against one investment or the other to get the results they want by changing their assumptions about the discount rate to justify their biases. Even with the best of intentions, the rates of returns are still estimates of future earnings. Since they haven’t happened yet, exact numbers are impossible. As with all investments, they aren’t guaranteed, and inflation can erode the spending value of these earnings. Despite these limitations, present value calculations are an important tool to use in comparing future values to their present value. To learn about more, related topics, check out our articles on the time value of money and the future value of money. As always, we hope you enjoyed this article, that you will be able to use what you’ve learned in your own financial decisions, and that you will subscribe to join us for more articles on other interesting financial topics. Author: Whiteboard Crypto WhiteboardCrypto is the #1 online resource for crypto education that explains topics of the cryptocurrency world using analogies, stories, and examples so that anyone can easily understand them. Growing to over 870,000 Youtube subscribers, the content has been shared around the world, played in public conferences and universities, and even in Congress.
{"url":"https://whiteboardcrypto.com/present-value-of-money-meaning-examples/","timestamp":"2024-11-12T15:38:03Z","content_type":"text/html","content_length":"90950","record_id":"<urn:uuid:60d06b8c-fee1-408a-bcd6-d9f6fd9b4ccd>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00183.warc.gz"}
General IPMs A general, density independent, deterministic IPM We’ll start with the least complicated of the general IPMs and build progressively from there. If you’ve already read through the introduction vignette, then most of this will look pretty familiar. If not, it is probably best to at least skim the first few sections before proceeding here. Key differences between simple and general IPMs. As noted above, we are now working with models that may have multiple continuous state variables and/or discrete states to describe the demography of our species. Thus, we need to add some more information to the model’s definition in order to fully capture these additional demographic details. Relative to how we define simple models in ipmr, there are now two new bits: 1. Each kernel that has an integration requires that d_statevariable get appended to the kernel formula. This is equivalent to the “dz” in \(\int_L^U K(z',z) n(z,t) \mathrm{dz}\). ipmr automatically generates this variable internally, so there is no need to define it in the data_list or define_domains(), we can just add it any of the formula(s) that we want to. We skipped this step in the simple IPMs because it gets appended automatically. Unfortunately, it is less easy to infer the correct state variable to d_z with for general IPMs where there may be many continuous and/or discrete states. 2. The implementation arguments list can now have different values in the state_start and state_end slots. This is demonstrated in a separate chunk below. Mathematical overview of the example This example will use an IPM for Ligustrum obtusifolium, a tree species that is now invasive in North America. Data were collected outside of St. Louis, Missouri, and the full model is described in Levin et al. 2019. The IPM consists of a single discrete stage (seed bank, abbreviated b) and a single continuous state (height, abbreviated ht/\(z,z'\)). The census timing was such that all seeds must enter the seed bank. They can either germinate in the next year, or they can die (so stay_discrete = 0). Thus, the full model takes the following form: 1. \(n(z', t + 1) = \int_L^U P(z', z) n(z,t)d\mathrm{z} + b(t) * leave\_discrete(z')\) 2. \(b(t + 1) = \int_L^Ugo\_discrete(z) n(z,t)d\mathrm{z}+ stay\_discrete\) 3. \(P(z',z) = s(z) * G(z',z)\) 4. \(go\_discrete(z) = f_s(z) * f_r(z) * g_i\) 5. \(leave\_discrete(z') = e_p * f_d(z')\) 6. \(stay\_discrete = 0\) \(f_G\) and \(f_{r_d}\) denote Normal probability density functions. The vital rates functions and example code to fit them are: 1. survival (s): A logistic regression with a squared term □ Example code: glm(surv ~ ht_1 + I(ht_1)^2, data = my_surv_data, family = binomial()) □ Mathematical form: \(Logit(s(z)) = \alpha_s + \beta_{s,1} * z + \beta_{s,2} * z^2\) 2. growth (g): A linear regression 3. flowering probability (r_r): A logistic regression □ example code: glm(repro ~ ht_1, data = my_repro_data, family = binomial()) □ Mathematical form: \(Logit(r_r(z)) = \alpha_{r_r} + \beta_{r_r} * z\) 4. seed production (r_s): A Poisson regression □ example code: glm(seeds ~ ht_1, data = my_seed_data, family = poisson()) □ Mathematical form: \(Log(r_s(z)) = \alpha_{r_s} + \beta_{r_s} * z\) 5. Recruit size distribution (r_d): A normal distribution with mean r_d_mu and standard deviation r_d_sd. □ example code: r_d_mu <- mean(my_recr_data$ht_2, na.rm = TRUE) and r_d_sd <- sd(my_rer_data$ht_2, na.rm = TRUE). □ Mathematical form: \(r_d(z') = f_{r_d}(\mu_{r_d}, \sigma_{r_d})\) 6. germination (g_i) and establishment (e_p) are constants. The code below assumes we have our data in long format (each seed get its own row) and that successful germination/establishment is coded as 1s, failures are 0s, and seeds we don’t know the fate of for whatever reason are NAs. □ example code: g_i <- mean(my_germ_data, na.rm = TRUE) and e_p <- mean(my_est_data, na.rm = TRUE) □ Mathematical form: \(g_i = 0.5067, e_p = 0.15\) Model code Below is the code to implement this model. First, we define our our parameters in a list. # Set up the initial population conditions and parameters data_list <- list( g_int = 5.781, g_slope = 0.988, g_sd = 20.55699, s_int = -0.352, s_slope = 0.122, s_slope_2 = -0.000213, r_r_int = -11.46, r_r_slope = 0.0835, r_s_int = 2.6204, r_s_slope = 0.01256, r_d_mu = 5.6655, r_d_sd = 2.0734, e_p = 0.15, g_i = 0.5067 Next, we set up two functions to pass into the model. These perform the inverse logit transformations for the probability of flowering model (r_r/\(r_r(z)\)) and survival model (s/\(s(z)\)). # We'll set up some helper functions. The survival function # in this model is a quadratic function, so we use an additional inverse logit function # that can handle the quadratic term. inv_logit <- function(int, slope, sv) { 1/(1 + exp(-(int + slope * sv))) inv_logit_2 <- function(int, slope, slope_2, sv) { 1/(1 + exp(-(int + slope * sv + slope_2 * sv ^ 2))) Now, we’re ready to begin making the IPM kernels. We change the sim_gen argument of init_ipm() to "general". general_ipm <- init_ipm(sim_gen = "general", di_dd = "di", det_stoch = "det") %>% name = "P", # We add d_ht to formula to make sure integration is handled correctly. # This variable is generated internally by make_ipm(), so we don't need # to do anything else. formula = s * g * d_ht, # The family argument tells ipmr what kind of transition this kernel describes. # it can be "CC" for continuous -> continuous, "DC" for discrete -> continuous # "CD" for continuous -> discrete, or "DD" for discrete -> discrete. family = "CC", # The rest of the arguments are exactly the same as in the simple models g = dnorm(ht_2, g_mu, g_sd), g_mu = g_int + g_slope * ht_1, s = inv_logit_2(s_int, s_slope, s_slope_2, ht_1), data_list = data_list, states = list(c('ht')), uses_par_sets = FALSE, evict_cor = TRUE, evict_fun = truncated_distributions('norm', ) %>% name = "go_discrete", formula = r_r * r_s * d_ht, # Note that now, family = "CD" because it denotes a continuous -> discrete transition family = 'CD', r_r = inv_logit(r_r_int, r_r_slope, ht_1), r_s = exp(r_s_int + r_s_slope * ht_1), data_list = data_list, # Note that here, we add "b" to our list in states, because this kernel # "creates" seeds entering the seedbank states = list(c('ht', "b")), uses_par_sets = FALSE ) %>% name = 'stay_discrete', # In this case, seeds in the seed bank either germinate or die, but they # do not remain for multiple time steps. This can be adjusted as needed. formula = 0, # Note that now, family = "DD" because it denotes a discrete -> discrete transition family = "DD", # The only state variable this operates on is "b", so we can leave "ht" # out of the states list states = list(c('b')), evict_cor = FALSE ) %>% # Here, the family changes to "DC" because it is the discrete -> continuous # transition name = 'leave_discrete', formula = e_p * g_i * r_d, r_d = dnorm(ht_2, r_d_mu, r_d_sd), family = 'DC', data_list = data_list, # Again, we need to add "b" to the states list states = list(c('ht', "b")), uses_par_sets = FALSE, evict_cor = TRUE, evict_fun = truncated_distributions('norm', We’ve now defined all of the kernels, next are the implementation details. These also differ somewhat from simple IPMs. The key difference in the implementation arguments list lies in the state_start and state_end of each kernel, and is related to the family argument of each kernel. Kernels that begin with one state and end in a different state (e.g. moving from seed bank to a plant) will have different entries in the state_start and state_end slots. It is very important to get these correct, as ipmr uses this information to generate the model iteration procedure automatically (i.e. code corresponding to Equations 1-2). general_ipm <- general_ipm %>% P = list(int_rule = "midpoint", state_start = "ht", state_end = "ht"), go_discrete = list(int_rule = "midpoint", state_start = "ht", state_end = "b"), stay_discrete = list(int_rule = "midpoint", state_start = "b", state_end = "b"), leave_discrete = list(int_rule = "midpoint", state_start = "b", state_end = "ht") An alternative to the list above is to use make_impl_args_list(). The chunk above and the chunk below generate equivalent proto_ipm objects. general_ipm <- general_ipm %>% kernel_names = c("P", "go_discrete", "stay_discrete", "leave_discrete"), int_rule = c(rep("midpoint", 4)), state_start = c('ht', "ht", "b", "b"), state_end = c('ht', "b", "b", 'ht') The rest is the same as the simple IPM. We can use our pre-defined functions in make_ipm, and we can use pre-defined variables in define_domains. We’ll create variables for the upper (U) and lower (L) size bounds for the population, and the number of meshpoints for integration (n). We also define initial population vectors for both b and ht. # The lower and upper bounds for the continuous state variable and the number # of meshpoints for the midpoint rule integration. We'll also create the initial # population vector from a random uniform distribution L <- 1.02 U <- 624 n <- 500 init_pop_vec <- runif(500) init_seed_bank <- 20 general_ipm <- general_ipm %>% # We can pass the variables we created above into define_domains ht = c(L, U, n) ) %>% # We can also pass them into define_pop_state pop_vectors = list( n_ht = init_pop_vec, n_b = init_seed_bank ) %>% make_ipm(iterations = 100, usr_funs = list(inv_logit = inv_logit, inv_logit_2 = inv_logit_2)) # lambda is a generic function to compute per-capita growth rates. It has a number # of different options depending on the type of model # If we are worried about whether or not the model converged to stable # dynamics, we can use the exported utility is_conv_to_asymptotic. The default # tolerance for convergence is 1e-10, but can be changed with the 'tol' argument. is_conv_to_asymptotic(general_ipm, tol = 1e-10) w <- right_ev(general_ipm) v <- left_ev(general_ipm) Our model is now built! We can explore the asymptotic population growth rate with the lambda() function, which will work on any object made my make_ipm(). The same is true for is_conv_to_asymptotic (), which is a helper to function to figure out if we’ve set the number of iterations high enough to actually reach asymptotic population dynamics. left_ev and right_ev work for general deterministic IPMs as well. Stochastic versions are in the works, but are not yet implemented. Further analysis Say we wanted to calculate the mean and variance of life span conditional on initial state \(z_0\). This requires us to generate the IPM’s fundamental operator, \(N\), which is computed as follows: \ (N = (I - P)^{-1}\). The following chunk is a function to compute average lifespan as a function of initial size. Note that we omit the fecundity from these calculations entirely, as our model does not assume that reproduction imposes a cost on survival. The formula for mean lifespan is given as \(\bar{\eta}(z_0) = eN\) (where \(e\) is a constant function such that \(e(z)\equiv1\)), and the formula for its variance is \(\sigma^2_\eta(z_0) = e(2N^2-N) - (eN)^2\). Since the \(e\) function has the effect of summing columns, we’ll replace that with the R function colSums. Our second function, sigma_eta, will make use of %^% from ipmr, which multiples matrices by themselves, rather than the point-wise power provided by ^. More details on the math underlying these calculations are provided in Ellner, Childs, & Rees (2016), chapter 3. make_N <- function(ipm) { P <- ipm$sub_kernels$P I <- diag(nrow(P)) N <- solve(I - P) eta_bar <- function(ipm) { N <- make_N(ipm) out <- colSums(N) sigma_eta <- function(ipm) { N <- make_N(ipm) out <- colSums(2 * (N %^% 2L) - N) - colSums(N) ^ 2 mean_l <- eta_bar(general_ipm) var_l <- sigma_eta(general_ipm) mesh_ps <- int_mesh(general_ipm)$ht_1 %>% par(mfrow = c(1,2)) plot(mesh_ps, mean_l, type = "l", xlab = expression( "Initial size z"[0])) plot(mesh_ps, var_l, type = "l", xlab = expression( "Initial size z"[0])) From prior knowledge, we happen to know that even in the best conditions, Ligustrum obtusifolium individuals usually only live 25-50 years. It appears that our model (wildly) overestimates their average lifespan, and we’d want to think about how to re-parameterize it to more accurately capture mortality events. Our survival model would seem the most likely candidate for re-examination, particularly for young and mid-sized trees. That is a problem for another day though, and so the next section will investigate how to implement general IPMs in discretely varying environments. General models for discretely varying environments Discretely varying parameters can be used to construct general IPMs with little additional effort. These are typically the result of fitting a set of fixed effects vital rate models that include both continuous predictors for size/state and categorical variables like treatment or maturation state. They can also result from mixed effects models, for example, working with conditional modes for a random intercept corresponding to year or site. ipmr refers to these as parameter sets, and abbreviates them par_sets. For example, a random intercept for year may be denoted \(\alpha_{yr}\), where \(_{yr}\) provides an index for the different values that \(\alpha\) can take on. If you’ve already read the Intro to ipmr article and the example above, then there aren’t any new concepts to introduce here. Below is an example showing how all of these come together. Mathematical overview of the example We’ll use a variation of the model above, simulating some random year-specific intercepts for growth and seed production. Parameters, functions, and kernels that are now time varying have a subscript \(x_{yr}\) appended to them. 1. \(n(z', t + 1) = \int_L^U P_{yr}(z', z) n(z,t)d\mathrm{z} + b(t) * leave\_discrete(z')\) 2. \(b(t + 1) = \int_L^Ugo\_discrete_{yr}(z) n(z,t)d\mathrm{z}+ stay\_discrete\) 3. \(P_{yr}(z',z) = s(z) * G_{yr}(z',z)\) 4. \(go\_discrete_{yr}(z) = r_{s,yr}(z) * r_r(z) * g_i\) 5. \(leave\_discrete(z') = e_p * r_d(z')\) 6. \(stay\_discrete = 0\) The vital rate models are as follows: 1. survival (s): A logistic regression □ Example code: glm(surv ~ ht_1, data = my_surv_data, family = binomial()) □ Mathematical form: \(Logit(s(z)) = \alpha_s + \beta_{s,1} * z + \beta_{s,2} * z^2\) 2. growth (g): A linear mixed effects regression. \(f_G\) denotes a normal probability density function. 3. flowering probability (r_r): A logistic regression □ example code: glm(repro ~ ht_1, data = my_repro_data, family = binomial()) □ Mathematical form: \(Logit(r_r(z)) = \alpha_{r_r} + \beta_{r_r} * z\) 4. seed production (r_s): A Poisson mixed effects regression □ example code: glmer(seeds ~ ht_1 + (1 | year), data = my_seed_data, family = poisson()) □ Mathematical form: \(Log(r_{s,yr}(z)) = \alpha_{r_s} + \alpha_{r_s, yr} + \beta_{r_s} * z\) 5. Recruit size distribution (r_d): A normal distribution (\(f_{r_d}\)) with mean r_d_mu and standard deviation r_d_sd. □ example code: r_d_mu <- mean(my_recr_data$ht_2, na.rm = TRUE) and r_d_sd <- sd(my_recr_data$ht_2, na.rm = TRUE). □ Mathematical form: \(r_d(z') = f_{r_d}(z', \mu_{r_d}, \sigma_{r_d})\) 6. germination (g_i) and establishment (e_p) are constants. The code below assumes we have our data in long format (each seed get its own row) and that successful germinations/establishments are coded as 1s, failures are 0s, and seeds we don’t know the fate of for whatever reason are NAs. □ example code: g_i <- mean(my_germ_data, na.rm = TRUE) and e_p <- mean(my_est_data, na.rm = TRUE) □ Mathematical form: \(g_i = 0.5067, e_p = 0.15\) Model parameterization In the next chunk, we’ll set up the data list with all of our constants and make sure the names are what we want them to be. This example generates intercepts for 5 years of data. This can be altered according to your own needs. Then, we’ll define a couple functions to make the vital rate expressions easier to write. A key aspect here is setting the names on the time-varying parameters in the data_list. We are using the form "g_int_year", and substituting the values that "year" can take in for the actual index. These don’t need to be numbers, they can be words, letters, or some combination of the two. For this example, the years will be represented with the values 1:5. If we wanted to denote the actual years of the censuses, we could switch 1:5 to, for example, 2002:2006. # Set up the initial population conditions and parameters # Here, we are simulating random intercepts for growth # and seed production, converting them to a list, # and adding them into the list of constants. Equivalent code # to produce the output for the output from lmer/glmer # is in the comments next to each line all_g_int <- as.list(rnorm(5, mean = 5.781, sd = 0.9)) # as.list(unlist(ranef(my_growth_model))) all_r_s_int <- as.list(rnorm(5, mean = 2.6204, sd = 0.3)) # as.list(unlist(ranef(my_seed_model))) names(all_g_int) <- paste("g_int_", 1:5, sep = "") names(all_r_s_int) <- paste("r_s_int_", 1:5, sep = "") constant_list <- list( g_slope = 0.988, g_sd = 20.55699, s_int = -0.352, s_slope = 0.122, s_slope_2 = -0.000213, r_r_int = -11.46, r_r_slope = 0.0835, r_s_slope = 0.01256, r_d_mu = 5.6655, r_d_sd = 2.0734, e_p = 0.15, g_i = 0.5067 all_params <- c(constant_list, all_g_int, all_r_s_int) # The lower and upper bounds for the continuous state variable and the number # of meshpoints for the midpoint rule integration. L <- 1.02 U <- 624 n <- 500 init_pop_vec <- runif(500) init_seed_bank <- 20 # add some helper functions. The survival function # in this model is a quadratic function, so we use an additional inverse logit function # that can handle the quadratic term. inv_logit <- function(int, slope, sv) { 1/(1 + exp(-(int + slope * sv))) inv_logit_2 <- function(int, slope, slope_2, sv) { 1/(1 + exp(-(int + slope * sv + slope_2 * sv ^ 2))) Next, we will set up our sub-kernels. We’ll append the _year suffix to the kernel name, the vital rate expressions and parameter values that have time-varying components (g_year, f_s_year, g_int_year, f_s_int_year), and the call to truncated_distributions so that the growth kernel is correctly specified. We’ll also pass a list(year = 1:5)) into the par_set_indices argument of define_kernel(). When building the model, make_ipm() automatically expands these expressions to create 5 different expressions containing the various levels of year. general_stoch_kern_ipm <- init_ipm(sim_gen = "general", di_dd = "di", det_stoch = "stoch", kern_param = "kern") %>% # The kernel name gets indexed by _year to denote that there # are multiple possible kernels we can build with our parameter set. # The _year gets substituted by the values in "par_set_indices" in the # output, so in this example we will have P_1, P_2, P_3, P_4, and P_5 name = "P_year", # We also add _year to "g" to signify that it is going to vary across kernels. formula = s * g_year * d_ht, family = "CC", # Here, we add the suffixes again, ensuring they are expanded and replaced # during model building by the parameter names g_year = dnorm(ht_2, g_mu_year, g_sd), g_mu_year = g_int_year + g_slope * ht_1, s = inv_logit_2(s_int, s_slope, s_slope_2, ht_1), data_list = all_params, states = list(c('ht')), # We set uses_par_sets to TRUE, signalling that we want to expand these # expressions across all levels of par_set_indices. uses_par_sets = TRUE, par_set_indices = list(year = 1:5), evict_cor = TRUE, # we also add the suffix to `target` here, because the value modified by # truncated_distributions is time-varying. evict_fun = truncated_distributions('norm', target = 'g_year') ) %>% # again, we append the index to the kernel name, vital rate expressions, # and in the model formula. name = "go_discrete_year", formula = r_r * r_s_year * g_i * d_ht, family = 'CD', r_r = inv_logit(r_r_int, r_r_slope, ht_1), # Again, we modify the left and right hand side of this expression to # show that there is a time-varying component r_s_year = exp(r_s_int_year + r_s_slope * ht_1), data_list = all_params, states = list(c('ht', "b")), uses_par_sets = TRUE, par_set_indices = list(year = 1:5) ) %>% # This kernel has no time-varying parameters, and so is not indexed. name = 'stay_discrete', # In this case, seeds in the seed bank either germinate or die, but they # do not remain for multipe time steps. This can be adjusted as needed. formula = 0, # Note that now, family = "DD" becuase it denotes a discrete -> discrete transition family = "DD", states = list(c('b')), # This kernel has no time-varying parameters, so we don't need to designate # it as such. uses_par_sets = FALSE, evict_cor = FALSE ) %>% # This kernel also doesn't get a index, because there are no varying parameters. name = 'leave_discrete', formula = e_p * r_d, r_d = dnorm(ht_2, r_d_mu, r_d_sd), family = 'DC', data_list = all_params, states = list(c('ht', "b")), uses_par_sets = FALSE, evict_cor = TRUE, evict_fun = truncated_distributions('norm', ) %>% # We add suffixes to the kernel names here to make sure they match the names # we specified above. kernel_names = c("P_year", "go_discrete_year", "stay_discrete", "leave_discrete"), int_rule = c(rep("midpoint", 4)), state_start = c('ht', "ht", "b", "b"), state_end = c('ht', "b", "b", 'ht') ) %>% ht = c(L, U, n) ) %>% n_ht = init_pop_vec, n_b = init_seed_bank ) %>% iterations = 100, # We can specify a sequence of kernels to select for the simulation. # This helps others to reproduce what we did, # and lets us keep track of the consequences of different selection # sequences for population dynamics. kernel_seq = sample(1:5, size = 100, replace = TRUE), usr_funs = list(inv_logit = inv_logit, inv_logit_2 = inv_logit_2) 100 isn’t too many iterations, but hopefully this demonstrates how to set up and implement such a model. There are a number of slots in the output that may be useful for further analyses. 1. env_seq: This contains a character vector which shows the sequence in which levels of the time-varying components were chosen during model iteration. We could use this to reproduce the model outputs later. 2. pop_state$lambda: This shows the single timestep per-capita growth rates for the simulation. Additionally, there are functions to compute the stochastic population growth rate (\(\lambda_s\), computed as mean(log(pop_state$lambda)), burn in can be controlled with the burn_in parameter), and the mean sub-kernels. General models with continuously varying environments We can also use continuously varying parameters to construct general IPMs. These are good tools for exploring the consequences of environmental variation on demographic rates. ipmr handles these using define_env_state(), which can take both functions and data and generate draws from distributions during each iteration of the model. Mathematical overview This example includes survival and growth models that make use of environmental covariates. The model can be written as: 1. \(n(z', t+1) = \int_L^U K(z',z, \theta)n(z,t)dz + B(t) * r_s * r_d * f_{c_d}(z')\) 2. \(B(t + 1) = r_e * r_s * \int_L^U c_r(z) * c_s(z)n(z,t)dz + B(t) * r_s * r_r\) 3. \(K(z',z,\theta) = P(z',z,\theta) + F(z',z)\) 4. \(P(z',z,\theta) = s(z, \theta) * G(z',z,\theta)\) 5. \(F(z',z) = c_r(z) * c_s(z) * c_d(z') * (1 - r_e)\) \(\theta\) is a vector of time-varying environmental parameters. For this example, we’ll use a Gaussian distribution for temperature and a Gamma distribution for precipitation: 6. \(\theta_t \sim Norm(\mu = 8.9, \sigma = 1.2)\) 7. \(\theta_p \sim Gamma(k = 1000, \beta = 2)\) Next, we’ll write out the actual vital rate functions in the model: 8. survival (s/\(s(z,\theta)\)): a logistic regression. □ Example model formula: glm(survival ~ size_1 + temp + precip, data = my_surv_data, family = binomial()) □ Mathematical form: \(Logit(s(z,\theta)) = \alpha_s + \beta_s^z * z + \beta_s^t * temp + \beta_s^p * precip\) 9. growth (g, \(G(z',z,\theta)\)): a linear regression with a Normal error distribution (denoted \(f_G\)) 10. flower probability (c_r/\(c_r(z)\)): A logistic regression. □ Example model formula: glm(repro ~ size_1, data = my_repro_data, family = binomial()) □ Mathematical form: \(Logit(c_r(z)) = \alpha_{c} + \beta_{c_r} * z\) 11. seed production (c_s/\(c_s(z)\): a logistic regression. □ Example model formula: glm(flower_n ~ size_1, data = my_flower_data, family = poisson()) □ Mathematical form: \(Log(c_s(z)) = \alpha_{c_s} + \beta_{c_s} * z\) 12. recruit sizes (c_d/\(c_d(z')\)): A Normal distribution (denoted \(f_{c_d}\)) □ Example code: mean (c_d_mu) mean(my_recruit_data$size_2, na.rm = TRUE) and standard deviation (c_d_sd) sd(my_recruit_data$size_2, na.rm = TRUE) □ Mathematical form: \(c_d(z') = f_{c_d}(z', \mu_{f_d}, \sigma_{f_d})\) 13. Constants: Discrete stage survival (r_s), discrete stage entrance probability (r_e), discrete stage departure probability conditional on survival (r_d), and probability of remaining in discrete stage (r_r). Model parameterization First, we’ll specify the constant parameters. We keep all values related to demography in the data_list. # Define the fixed parameters in a list constant_params <- list( s_int = -5, s_slope = 2.2, s_precip = 0.0002, s_temp = -0.003, g_int = 0.2, g_slope = 1.01, g_sd = 1.2, g_temp = -0.002, g_precip = 0.004, c_r_int = 0.3, c_r_slope = 0.03, c_s_int = 0.4, c_s_slope = 0.01, c_d_mu = 1.1, c_d_sd = 0.1, r_e = 0.3, r_d = 0.3, r_r = 0.2, r_s = 0.2 In addition to creating the standard data_list, we need to create a function to sample the environmental covariates, and a list of parameters that generate the environmental covariates. The function needs to return a named list. The names in the returned list can be referenced in vital rate expressions, kernel formulas, etc. as if we had specified them in the data_list. make_ipm() only runs the functions in define_env_state() once per model iteration. This ensures that parameters from joint distributions can be used consistently across kernels without losing user-any specified correlations. # Now, we create a set of environmental covariates. In this example, we use # a normal distribution for temperature and a Gamma for precipitation. env_params <- list( temp_mu = 8.9, temp_sd = 1.2, precip_shape = 1000, precip_rate = 2 # We define a wrapper function that samples from these distributions sample_env <- function(env_params) { # We generate one value for each covariate per iteration, and return it # as a named list. temp_now <- rnorm(1, precip_now <- rgamma(1, shape = env_params$precip_shape, rate = env_params$precip_rate) # The vital rate expressions can now use the names "temp" and "precip" # as if they were in the data_list. out <- list(temp = temp_now, precip = precip_now) # Again, we can define our own functions and pass them into calls to make_ipm. This # isn't strictly necessary, but can make the model code more readable/less error prone. inv_logit <- function(lin_term) { 1/(1 + exp(-lin_term)) Model specification We are now ready to begin implementing the model. This next chunk should look familiar, with the caveat that we have now add the terms temp and precip to vital rates in the P kernel. general_stoch_param_model <- init_ipm(sim_gen = "general", di_dd = "di", det_stoch = "stoch", kern_param = "param") %>% name = "P_stoch", family = "CC", # As in the examples above, we have to add the d_surf_area # to ensure the integration of the functions is done. formula = s * g * d_surf_area, # We can reference continuously varying parameters by name # in the vital rate expressions just as before, even though # they are passed in define_env_state() as opposed to the kernel's # data_list g_mu = g_int + g_slope * surf_area_1 + g_temp * temp + g_precip * precip, s_lin_p = s_int + s_slope * surf_area_1 + s_temp * temp + s_precip * precip, s = inv_logit(s_lin_p), g = dnorm(surf_area_2, g_mu, g_sd), data_list = constant_params, states = list(c("surf_area")), uses_par_sets = FALSE, evict_cor = TRUE, evict_fun = truncated_distributions("norm", "g") ) %>% name = "F", family = "CC", formula = c_r * c_s * c_d * (1 - r_e) * d_surf_area, c_r_lin_p = c_r_int + c_r_slope * surf_area_1, c_r = inv_logit(c_r_lin_p), c_s = exp(c_s_int + c_s_slope * surf_area_1), c_d = dnorm(surf_area_2, c_d_mu, c_d_sd), data_list = constant_params, states = list(c("surf_area")), uses_par_sets = FALSE, evict_cor = TRUE, evict_fun = truncated_distributions("norm", "c_d") ) %>% # Name can be anything, but it helps to make sure they're descriptive name = "go_discrete", # Family is now "CD" because it is a continuous -> discrete transition family = "CD", formula = r_e * r_s * c_r * c_s * d_surf_area, c_r_lin_p = c_r_int + c_r_slope * surf_area_1, c_r = inv_logit(c_r_lin_p), c_s = exp(c_s_int + c_s_slope * surf_area_1), data_list = constant_params, states = list(c("surf_area", "sb")), uses_par_sets = FALSE, # There is not eviction to correct here, so we can set this to false evict_cor = FALSE ) %>% name = "stay_discrete", family = "DD", formula = r_s * r_r, data_list = constant_params, states = list("sb"), uses_par_sets = FALSE, evict_cor = FALSE ) %>% name = "leave_discrete", family = "DC", formula = r_d * r_s * c_d, c_d = dnorm(surf_area_2, c_d_mu, c_d_sd), data_list = constant_params, states = list(c("surf_area", "sb")), uses_par_sets = FALSE, evict_cor = TRUE, evict_fun = truncated_distributions("norm", "c_d") ) %>% kernel_names = c("P_stoch", int_rule = rep("midpoint", 5), state_start = c("surf_area", state_end = c("surf_area", ) %>% surf_area = c(0, 10, 100) Now, we need to define_env_state(). This consists of two parts - specifying expressions that generate environmental covariates, and supplying the information needed to evaluate those expressions. In this example, we want to use the env_params in sample_env (i.e. sample_env(env_params)). define_env_state uses the data_list argument to supply the env_params, and the call to sample_env goes into the ... portion of the function. Here, we assign the result to env_covs. The name env_covs isn’t important for specifying vital rate expressions and you could call it whatever you want, but it must be named something in order to work properly. The only thing that needs to match are the names in the list that sample_env returns, and the names used in the vital rate expressions/kernels. Note that you can pass the sample_env function in either the data_list of define_env_state(), or the usr_funs argument of make_ipm(). # In the first version, sample_env is provided in the data_list of # define_env_state. general_stoch_param_ipm <- define_env_state( proto_ipm = general_stoch_param_model, env_covs = sample_env(env_params), data_list = list(env_params = env_params, sample_env = sample_env) ) %>% n_surf_area = runif(100), n_sb = rpois(1, 20) ) %>% make_ipm(usr_funs = list(inv_logit = inv_logit), iterate = TRUE, iterations = 100) # in the second version, sample_env is provided in the usr_funs list of # make_ipm(). These two versions are equivalent. general_stoch_param_ipm <- define_env_state( proto_ipm = general_stoch_param_model, env_covs = sample_env(env_params), data_list = list(env_params = env_params) ) %>% n_surf_area = runif(100), n_sb = rpois(1, 20) ) %>% make_ipm(usr_funs = list(inv_logit = inv_logit, sample_env = sample_env), iterate = TRUE, iterations = 100, return_sub_kernels = TRUE) Stochastic parameter-resampled models also return an env_seq, but this time it will be a data.frame of parameter draws rather than a character vector. We can also compute mean sub-kernels and \(\ lambda_s\) as we did in the kernel-resampled models. This time, each mean kernel is computed as the average of each sub-kernel over the course of the simulation (i.e. mean of all P kernels). Some sub-kernels in our example are not time-varying - they will be numerically equivalent to the sub-kernels stored in the IPM object. A note on memory management Longer running stochastic parameter-resampled models can take up a lot of space in memory when all of the sub-kernels are saved from each iteration. For example, running the model above for 10,000 iterations would result in 20,000 \(100 \times 100\) matrices, 20,000 \(1 \times 100\)/\(100 \times 1\) matrices, and 10,000 \(1 \times 1\) matrices in the sub_kernels slot of the IPM object (~16.4 GB of RAM). This will likely result in crashes as smaller machines run out of available RAM. Therefore, make_ipm() contains an argument return_sub_kernels for these types of models that allows you to switch off that behavior and conserve available RAM. By default, this is set to FALSE. If you need sub-kernels for downstream analyses, set this option to TRUE and make sure you have a computer with sufficient RAM (64-128 GB range is likely required to store all information for longer running models). These warnings also apply to all density dependent model classes and the same return_sub_kernels argument can be used for those as well. Code to construct mega-kernels Sometimes, we may want to work with our sub-kernels arranged into a single block kernel. This isn’t required for any of the code in ipmr, except for the plot method for general_di_det IPMs. Other use cases may arise for analyses not included in this package though, so below is a brief overview of how to generate those. make_iter_kernel() takes an IPM object and a vector of symbols (for interactive use) or a character version of the expression (for programming) showing where each sub-kernel should go. It works in ROW MAJOR order. This example will use the model from the general deterministic example at the top of the article. First, re-run the model to create the IPM object (if you haven’t already). data_list <- list( g_int = 5.781, g_slope = 0.988, g_sd = 20.55699, s_int = -0.352, s_slope = 0.122, s_slope_2 = -0.000213, r_r_int = -11.46, r_r_slope = 0.0835, r_s_int = 2.6204, r_s_slope = 0.01256, r_d_mu = 5.6655, r_d_sd = 2.0734, e_p = 0.15, g_i = 0.5067 L <- 1.02 U <- 624 n <- 500 init_pop_vec <- runif(500) init_seed_bank <- 20 # Initialize the state list and add some helper functions. The survival function # in this model is a quadratic function. inv_logit <- function(int, slope, sv) { 1/(1 + exp(-(int + slope * sv))) inv_logit_2 <- function(int, slope, slope_2, sv) { 1/(1 + exp(-(int + slope * sv + slope_2 * sv ^ 2))) general_ipm <- init_ipm(sim_gen = "general", di_dd = "di", det_stoch = "det") %>% name = "P", formula = s * g * d_ht, family = "CC", g = dnorm(ht_2, g_mu, g_sd), g_mu = g_int + g_slope * ht_1, s = inv_logit_2(s_int, s_slope, s_slope_2, ht_1), data_list = data_list, states = list(c('ht')), uses_par_sets = FALSE, evict_cor = TRUE, evict_fun = truncated_distributions('norm', ) %>% name = "go_discrete", formula = r_r * r_s * g_i, family = 'CD', r_r = inv_logit(r_r_int, r_r_slope, ht_1), r_s = exp(r_s_int + r_s_slope * ht_1), data_list = data_list, states = list(c('ht', "b")), uses_par_sets = FALSE ) %>% name = 'stay_discrete', formula = 0, family = "DD", states = list(c('ht', "b")), evict_cor = FALSE ) %>% name = 'leave_discrete', formula = e_p * r_d, r_d = dnorm(ht_2, r_d_mu, r_d_sd), family = 'DC', data_list = data_list, states = list(c('ht', "b")), uses_par_sets = FALSE, evict_cor = TRUE, evict_fun = truncated_distributions('norm', ) %>% kernel_names = c("P", "go_discrete", "stay_discrete", "leave_discrete"), int_rule = c(rep("midpoint", 4)), state_start = c('ht', "ht", "b", "b"), state_end = c('ht', "b", "b", 'ht') ) %>% ht = c(L, U, n) ) %>% pop_vectors = list( n_ht = init_pop_vec, n_b = init_seed_bank ) %>% make_ipm(iterations = 100, usr_funs = list(inv_logit = inv_logit, inv_logit_2 = inv_logit_2)) Now, we specify which kernel belongs where in row major order, using a call to c(). mega_mat <- make_iter_kernel(ipm = general_ipm, mega_mat = c( stay_discrete, go_discrete, leave_discrete, P # These values should be almost identical, so this should ~0 Re(eigen(mega_mat[[1]])$values[1]) - lambda(general_ipm) Say we wanted to program with this function. Passing bare expression is difficult programatically, and how to do that is not really within the scope of this vignette (though if you’re interested in learning how, this is a good start). make_iter_kernel() also accepts text strings in the same format as above. # Get the names of each sub_kernel sub_k_nms <- names(general_ipm$sub_kernels) mega_mat_text <- c(sub_k_nms[3], sub_k_nms[2], sub_k_nms[4], sub_k_nms[1]) mega_mat_2 <- make_iter_kernel(general_ipm, mega_mat = mega_mat_text) # Should be TRUE identical(mega_mat, mega_mat_2) make_iter_kernel() can also handle cases where you need blocks of 0s or identity matrices. These are specified using 0 for 0s, and I for identity matrices. make_iter_kernel() automatically works out the correct dimensions internally, so you don’t need to worry about specifying those. Below is an example that inserts 0s and identity matrices on the off-diagonals with the P kernel duplicated along the diagonal. Finally, make_iter_kernel supports ipmr’s parameter set index syntax as well, enabling us to generate a list of mega-kernels for each combination of parameter set values. We’ll re-run the "general_di_stoch_kern" example from above to demonstrate this. all_g_int <- as.list(rnorm(5, mean = 5.781, sd = 0.9)) all_f_s_int <- as.list(rnorm(5, mean = 2.6204, sd = 0.3)) names(all_g_int) <- paste("g_int_", 1:5, sep = "") names(all_f_s_int) <- paste("f_s_int_", 1:5, sep = "") constant_list <- list( g_slope = 0.988, g_sd = 20.55699, s_int = -0.352, s_slope = 0.122, s_slope_2 = -0.000213, f_r_int = -11.46, f_r_slope = 0.0835, f_s_slope = 0.01256, f_d_mu = 5.6655, f_d_sd = 2.0734, e_p = 0.15, g_i = 0.5067 all_params <- c(constant_list, all_g_int, all_f_s_int) L <- 1.02 U <- 624 n <- 500 init_pop_vec <- runif(500) init_seed_bank <- 20 inv_logit <- function(int, slope, sv) { 1/(1 + exp(-(int + slope * sv))) inv_logit_2 <- function(int, slope, slope_2, sv) { 1/(1 + exp(-(int + slope * sv + slope_2 * sv ^ 2))) general_stoch_kern_ipm <- init_ipm(sim_gen = "general", di_dd = "di", det_stoch = "stoch", kern_param = "kern") %>% name = "P_year", formula = s * g_year * d_ht, family = "CC", g_year = dnorm(ht_2, g_mu_year, g_sd), g_mu_year = g_int_year + g_slope * ht_1, s = inv_logit_2(s_int, s_slope, s_slope_2, ht_1), data_list = all_params, states = list(c('ht')), uses_par_sets = TRUE, par_set_indices = list(year = 1:5), evict_cor = TRUE, evict_fun = truncated_distributions('norm', ) %>% name = "go_discrete_year", formula = f_r * f_s_year * g_i * d_ht, family = 'CD', f_r = inv_logit(f_r_int, f_r_slope, ht_1), f_s_year = exp(f_s_int_year + f_s_slope * ht_1), data_list = all_params, states = list(c('ht', "b")), uses_par_sets = TRUE, par_set_indices = list(year = 1:5) ) %>% name = 'stay_discrete', formula = 0, family = "DD", states = list(c('b')), uses_par_sets = FALSE, evict_cor = FALSE ) %>% name = 'leave_discrete', formula = e_p * f_d * d_ht, f_d = dnorm(ht_2, f_d_mu, f_d_sd), family = 'DC', data_list = all_params, states = list(c('ht', "b")), uses_par_sets = FALSE, evict_cor = TRUE, evict_fun = truncated_distributions('norm', ) %>% kernel_names = c("P_year", "go_discrete_year", "stay_discrete", "leave_discrete"), int_rule = c(rep("midpoint", 4)), state_start = c('ht', "ht", "b", "b"), state_end = c('ht', "b", "b", 'ht') ) %>% ht = c(L, U, n) ) %>% n_ht = init_pop_vec, n_b = init_seed_bank ) %>% iterations = 10, kernel_seq = sample(1:5, size = 10, replace = TRUE), usr_funs = list(inv_logit = inv_logit, inv_logit_2 = inv_logit_2) Next, we call make_iter_kernel() using the kernel names like so: block_list <- make_iter_kernel(general_stoch_kern_ipm, mega_mat = c(stay_discrete, go_discrete_year, leave_discrete, P_year)) make_iter_kernel() also works for simple models, but assumes that all sub-kernels are combined additively (i.e. \(K(z',z) = P(z',z) + F('z,z)\)). It can handle parameter set index syntax as well, but does not require the mega_mat argument, and can just be called with an IPM object.
{"url":"http://rsync.jp.gentoo.org/pub/CRAN/web/packages/ipmr/vignettes/general-ipms.html","timestamp":"2024-11-14T18:23:23Z","content_type":"text/html","content_length":"181970","record_id":"<urn:uuid:6b58a075-50c4-4ab6-b254-62faa3244fd6>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00567.warc.gz"}
5Th Grade Math Worksheets Printable 5Th Grade Math Worksheets Printable - Advanced math whizzes can access fifth grade math worksheets that introduce the basics of algebra, as well as how to calculate the base and volume of geometric shapes. Web your fifth graders will be challenged with these free math worksheets. Adding fractions, area, algebra, rounding, volume and capacity, statistics, order of operations, positive & negative integers, ratio, simplify fractions. Our range of worksheets cover all topics, such as addition, multiplication subtraction and division, fractions (including equivalent fractions), order of operations, coordinate planes and much more! Worksheets include one and two variable expressions, simplifying expressions and solving equations. Printable worksheets shared to google classroom. Web search printable 5th grade math worksheets. Web master 5th grade math with our range of pdf worksheets. There is also a free 5th grade common core math practice test hope you enjoy it! Web printable math worksheets for 5th grade. Free 5th Grade Math Worksheets Activity Shelter Adding decimals on a number line | tenths Printable worksheets shared to google classroom. Web this is a comprehensive collection of free printable math worksheets for grade 5, organized by topics such as addition, subtraction, algebraic thinking, place value, multiplication, division, prime factorization, decimals, fractions, measurement, coordinate grid, and geometry. Adding fractions, area, algebra, rounding, volume and capacity, statistics, order. Printable 5Th Grade Math Worksheets With Answer Key Printable Worksheets Our range of worksheets cover all topics, such as addition, multiplication subtraction and division, fractions (including equivalent fractions), order of operations, coordinate planes and much more! Advanced math whizzes can access fifth grade math worksheets that introduce the basics of algebra, as well as how to calculate the base and volume of geometric shapes. Fifth graders will cover a FREE 5th Grade Math Worksheets Web 5th grade math worksheets: Adding decimals on a number line | tenths Web this is a comprehensive collection of free printable math worksheets for grade 5, organized by topics such as addition, subtraction, algebraic thinking, place value, multiplication, division, prime factorization, decimals, fractions, measurement, coordinate grid, and geometry. Our range of worksheets cover all topics, such as addition, multiplication. 5th Grade Math Worksheets Multiplication And Division Times Tables Web this is a comprehensive collection of free printable math worksheets for grade 5, organized by topics such as addition, subtraction, algebraic thinking, place value, multiplication, division, prime factorization, decimals, fractions, measurement, coordinate grid, and geometry. Advanced math whizzes can access fifth grade math worksheets that introduce the basics of algebra, as well as how to calculate the base and. 5Th Grade Free Printable Math Worksheets Math Problems 5th Grade Printable worksheets shared to google classroom. Adding decimals on a number line | tenths Includes a mix of word problems, fractions, and math puzzles to use in the classroom or at home. Web this is a comprehensive collection of free printable math worksheets for grade 5, organized by topics such as addition, subtraction, algebraic thinking, place value, multiplication, division, prime. Printable 5th Grade Fractions Practice Worksheet Fifth graders will cover a wide range of math topics as they solidify their arithmetic skills. Web your fifth graders will be challenged with these free math worksheets. No matter where your child is on the math spectrum, she will find our fifth grade math worksheets helpful and challenging. Adding decimals on a number line | tenths Web this is. Fifth Grade Math Practice Worksheet Free Printable Math Worksheets Web this is a comprehensive collection of free printable math worksheets for grade 5, organized by topics such as addition, subtraction, algebraic thinking, place value, multiplication, division, prime factorization, decimals, fractions, measurement, coordinate grid, and geometry. Web printable math worksheets for 5th grade. There is also a free 5th grade common core math practice test hope you enjoy it! Web. 106 best images about Fifth Grade Printables! on Pinterest Math Worksheets include one and two variable expressions, simplifying expressions and solving equations. Adding fractions, area, algebra, rounding, volume and capacity, statistics, order of operations, positive & negative integers, ratio, simplify fractions. Web master 5th grade math with our range of pdf worksheets. Multiplication, division, place value, rounding, fractions, decimals , factoring, geometry, measurement & word problems. Web your fifth graders. Fifth Grade Math Worksheets pdf free downloads EduMonitor There is also a free 5th grade common core math practice test hope you enjoy it! Web master 5th grade math with our range of pdf worksheets. Fifth graders will cover a wide range of math topics as they solidify their arithmetic skills. Web this is a comprehensive collection of free printable math worksheets for grade 5, organized by topics. Math Worksheets for Fifth Grade Adding Decimals Web printable math worksheets for 5th grade. Web master 5th grade math with our range of pdf worksheets. Web search printable 5th grade math worksheets. Worksheets include one and two variable expressions, simplifying expressions and solving equations. There is also a free 5th grade common core math practice test hope you enjoy it! Worksheets include one and two variable expressions, simplifying expressions and solving equations. Web printable math worksheets for 5th grade. Printable worksheets shared to google classroom. Advanced math whizzes can access fifth grade math worksheets that introduce the basics of algebra, as well as how to calculate the base and volume of geometric shapes. Our range of worksheets cover all topics, such as addition, multiplication subtraction and division, fractions (including equivalent fractions), order of operations, coordinate planes and much more! No matter where your child is on the math spectrum, she will find our fifth grade math worksheets helpful and challenging. Fifth graders will cover a wide range of math topics as they solidify their arithmetic skills. There is also a free 5th grade common core math practice test hope you enjoy it! Adding decimals on a number line | tenths Web search printable 5th grade math worksheets. Web your fifth graders will be challenged with these free math worksheets. Web 5th grade math worksheets: Multiplication, division, place value, rounding, fractions, decimals , factoring, geometry, measurement & word problems. Adding fractions, area, algebra, rounding, volume and capacity, statistics, order of operations, positive & negative integers, ratio, simplify fractions. Includes a mix of word problems, fractions, and math puzzles to use in the classroom or at home. Web this is a comprehensive collection of free printable math worksheets for grade 5, organized by topics such as addition, subtraction, algebraic thinking, place value, multiplication, division, prime factorization, decimals, fractions, measurement, coordinate grid, and geometry. The math worksheets on this page cover many of the core topics in 5th grade math, but confidence in all of the basic operations is essential to success both in 5th grade and beyond. Web master 5th grade math with our range of pdf worksheets. No Matter Where Your Child Is On The Math Spectrum, She Will Find Our Fifth Grade Math Worksheets Helpful And Challenging. There is also a free 5th grade common core math practice test hope you enjoy it! Multiplication, division, place value, rounding, fractions, decimals , factoring, geometry, measurement & word problems. Worksheets include one and two variable expressions, simplifying expressions and solving equations. Adding decimals on a number line | tenths Fifth Graders Will Cover A Wide Range Of Math Topics As They Solidify Their Arithmetic Skills. Web 5th grade math worksheets: Web search printable 5th grade math worksheets. Web master 5th grade math with our range of pdf worksheets. Web this is a comprehensive collection of free printable math worksheets for grade 5, organized by topics such as addition, subtraction, algebraic thinking, place value, multiplication, division, prime factorization, decimals, fractions, measurement, coordinate grid, and geometry. Includes A Mix Of Word Problems, Fractions, And Math Puzzles To Use In The Classroom Or At Home. The math worksheets on this page cover many of the core topics in 5th grade math, but confidence in all of the basic operations is essential to success both in 5th grade and beyond. Web printable math worksheets for 5th grade. Web your fifth graders will be challenged with these free math worksheets. Advanced math whizzes can access fifth grade math worksheets that introduce the basics of algebra, as well as how to calculate the base and volume of geometric shapes. Adding Fractions, Area, Algebra, Rounding, Volume And Capacity, Statistics, Order Of Operations, Positive & Negative Integers, Ratio, Simplify Fractions. Printable worksheets shared to google classroom. Our range of worksheets cover all topics, such as addition, multiplication subtraction and division, fractions (including equivalent fractions), order of operations, coordinate planes and much more! Related Post:
{"url":"https://dl-uk.apowersoft.com/en/5th-grade-math-worksheets-printable.html","timestamp":"2024-11-14T14:27:00Z","content_type":"text/html","content_length":"32803","record_id":"<urn:uuid:554129ac-0422-48ae-8bd4-54f2c531e1b1>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00582.warc.gz"}
Bitwise And Calculator Here is a bitwise and calculator, for performing an and between the bits of two numbers (once converted to 32-bit binary). In a bitwise and, a binary digit will only be set to 1 if both numbers have a 1 in that spot, otherwise it'll set to 0. Bitwise And Calculator Using the Bitwise And Calculator To use the bitwise and calculator, enter two numbers to and in the "Number One" and "Number Two" fields in the tool. Once happy with your inputs, click the "Calculate Bitwise And" button. The result of the bitwise and will show up in the "Anded Number" field, converted back to integer: Result of a bitwise and of 5 and 4 Bitwise And Example Behind the scenes, the tool is converting both of your numbers to 32-bit binary numbers, then going digit by digit and anding the two numbers together. Let's do an example together, matching the screenshot (anding the numbers 4 and 5). 101\&100=100\ (4) As you can see in the above block, the tool converted the numbers 5 and 4 to the binary numbers 101 and 100, respectively. Moving in either direction, you can see the 4s digit was the only place where both numbers had a 1 – so the final result was 100. 100 is the same as 4, so the tool converts it back to a 4 and gives you the anded number. Contrast that with the same input to the bitwise or calculator, where 5 or 4 gives you 5, or the bitwise xor which gives you 1. Other Binary Calculator Try our other binary math calculators:
{"url":"https://dqydj.com/bitwise-and-calculator/","timestamp":"2024-11-10T08:50:53Z","content_type":"text/html","content_length":"76484","record_id":"<urn:uuid:3faabde1-e102-4dbd-b692-fdea034ee9c0>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00677.warc.gz"}
dD Convex Hulls and Delaunay Triangulations Reference Manual Susan Hert and Michael Seel A subset is convex if for any two points $p$ and $q$ in the set the line segment with endpoints $p$ and $q$ is contained in $S$. The convex hull of a set $S$ is the smallest convex set containing $S$ . The convex hull of a set of points $P$ is a convex polytope with vertices in $P$. A point in $P$ is an extreme point (with respect to $P$) if it is a vertex of the convex hull of $P$. CGAL provides functions for computing convex hulls in two, three and arbitrary dimensions as well as functions for testing if a given set of points in is strongly convex or not. This chapter describes the class available for arbitrary dimensions and its companion class for computing the nearest and furthest side Delaunay triangulation. 9.4 Classified Reference Pages CGAL::Delaunay_d< R, Lifted_R > 9.5 Alphabetical List of Reference Pages
{"url":"https://doc.cgal.org/Manual/3.3/doc_html/cgal_manual/Convex_hull_d_ref/Chapter_intro.html","timestamp":"2024-11-01T19:18:20Z","content_type":"text/html","content_length":"5088","record_id":"<urn:uuid:685c821f-3c3d-4a72-aa1c-22e429b88fb2>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00530.warc.gz"}
Tectonic-Forces.org - A.3 Estimation of the magnitude of the circumferential forces driving tecto A.3 Estimation of the magnitude of the circumferential forces driving tectonic movements The mathematical analysis is based on the concept of the outer rim being allowed to slide relative to the main rotating body (Figs 17b, 18a, & 19 ). In order to determine the forces postulated as being responsible for tectonic movement the model used is one in which the thin crust can slide relative to the solid body at the crust /mantle interface. By way of illustration Fig 18a shows that if an unbalanced disc with an outer annular ring containing fluid is rotated about its principal axis, the liquid will move to the ‘lighter’ side. Fig. 18b shows an analogous situation with the sliding continental plates. If we consider the crust as being able to move relative to the mantle, albeit it over a long geological time span, then a simple force diagram (Fig 17b & 19) can be constructed by making the following assumptions: (a) the crust is a thin shell that is able to slide relative to the mantle, (b) the forces owing to eccentricity are superimposed on the stress caused by the general rotation and gravity, and (c) the stress that is of interest for the purposes of tectonic movement is the differential stress owing to this eccentricity. By approaching the problem in terms of a thin shell moving relative to the mantle, it is possible to consider what increments of the tensile force are responsible for putting the Pacific Basin under compression and the African Plate under tension. The Rift Valley, in Africa, would be a case in point. The calculations which follow are based on the consideration of the eccentrically induced loads on the thin crust. In calculating the effects of the circumferential tensile forces (F) at the surface of the earth due to the centre of mass being offset from the principal axis of rotation, the term ‘radius of eccentricity’ (E) is introduced to denote the magnitude of the offset. The magnitude of the derived circumferential stress (F) will be dependent on the distance between the geometric centre and the centre of mass, i.e. (E) the ‘radius of eccentricity’. In a limiting case, if the ‘radius of eccentricity’ is zero, the rotating body will be balanced, and the centripetal forces will be zero. Consider a thin shell cut across the Earth’s diameter at the Mid-Atlantic ridge (Fig 19 below). The force tending to cause this half of the shell to part is the ‘vertical’ component of the centripetal forces generated by the eccentricity. This is similar in concept to that in thin shell circular vessels subjected to an internal pressure.^7 Figure C in figure 19 shows this concept of ‘vertical force’. As the semi-circle is symmetrical there are two sides resisting the parting force. Thus, only one side needs to be considered for integration of the ‘vertical’ forces from 0 to π/2. Fig C in Fig 19 shows the force and vector diagrams used to determine the magnitude of the circumferential stress in the direction of the maximum effective radius. For ease of understanding the force diagram is superimposed on the major geological features on the equatorial belt. Fig 18A & 18B Models used for the calculation of the differential circumferential stress forces required to move the crust to the mantie. The movement to the lighter side is independent of the hard rotation M = Mass per unit length of crust (kg) R = Radius of Earth. (m) E = Radius of eccentricity. (m) ω = Angular velocity. (rad s^-1)^ θ = Angle. (rad) δe = Effective eccentricity at angle θ F = Total force at point X (cf. Fig.11) (N) F[1 ]= Radial force due to eccentricity at θ 2.8 x10^6 kg 6.4x10^6 metres 1x 10^3 metres 7.27x10^-5 rads.sec^-1 Then from the 'force vector diagram' at surface at an angle θ: Vertical component of F[1 ]: δf = F[1]Sinθ[ ] Effective eccentricity at angle θ: δe = Ε sinθ And Mass of segment: Rδθ = M R δθ. F[1] = M. Rδθ.ω^2. Ε sinθ = M. R. ω^2.Ε. Sinθ. δθ The vertical force component: δf = F[1]. Sinθ = M. R.ω^2. Ε. Sinθ. Sinθ. δθ = M.R.ω2. Ε. Sin^2θ. δθ (1) the total vertical force F = Σ[0]^π/2 M.R.ω^2E.Sin^2θ. δθ^ = M.R.ω^2.E (½.θ -¼Sin2θ) ^π/2 - (½.θ- ¼ Sin2θ) ^0 = M.R.ω^2.E (π/4-¼.0) - (½.0-¼.0) = M.R.ω^2.Eπ/4. (2) The derivation of the equation of the total force at the maximum effective radius allows for the determination of the circumferential tensile stress on the crust. The approach given above considers the forces developed as a direct function of the radius of eccentricity. If we take into eq. 2 the crust to be 1000 meters thick with an average density of 2.8x10^3 kgm-^3 then for a 1metre x 1-metre-wide strip, The mass per unit area of crust (m) is = 1000 x 1 x1 x 2.8x10^3 = 2.8x10^6 kg: The radius of the Earth (r) = 6400 km The angular velocity of the Earth^55 at the equator (ω) = 7.27x10^-5 rads.sec^-1 The radius of eccentricity at the Core (E) = 1 km. Hence substituting into equation 2 we have F= 2.8 x 10^6 x 6.4 x 10^6x (7.27x10^-5)^2 x 10^3 x π /4 = 6.64 x10^7 N. Since, the magnitude of the circumferential stress is Force/Area this becomes 6.64x10^7 / 1 x10^3: and hence the circumferential tensile stress is = 6.64x10^-2 Nmm^-2, 0.644 Bar or c. 9.7 lbs.in^-2 It is also possible to look at the addition of the vertical component of E to the radius of the Earth to determine the expression of the forces in the direction of the maximum effective radius. Fig. 35 is used for this analysis. Fig 36 shows the relationship between the Radius of Eccentricity and the circumferential stresses. Fig 37 shows the relationship between F, E and µ. As above: the mass of the segment R δθ = M R δθ the radial force F[1 ]= Mass.R.ω^2 the radial force F[1 ]= (M R δθ). R. ω^2 = M ω^2 R^2 δθ. δf = M ω^2 R^2sinθδθ. With reference to Fig. 35: R = R0 + Esinθ. thus, δf = M ω^2 (R[0]+Esinθ)^2 sinθ δθ which approximates to δf = M ω^2 (R[0]^2+2E R[0] sinθ) sinθ δθ = M ω^2 (R[0]^2+2E R[0] sinθ) sinθ δθ. Thus, the increase of δf = δf - M ω^2 Ro^2 sinθ δθ = M ω^2 (R[0]^2+2E R[0] sinθ) sinθ δθ - M ω^2 Ro^2 sinθ δθ = M ω^2 sinθ δθ (R[0]^2+2E R[0] sinθ- Ro^2) = M ω^2 sinθ δθ2E R[0] sinθ = MR[0] ω^2 2 Esin^2θ δθ (Eq. 3) This equation has the same form as (Eq. 1) above. As E is small in comparison to R, and R[0] and R have essentially the same values, the factor 2 that appears in (Eq. 3) does not invalidate (Eq. 1). Hence the derivation of (Eq. 1) from the force diagram (Fig 19 above and 20 below) is considered valid for determining (Eq. 2) by integrating between 0 and π/2. Fig 36: Relationship between the radius of eccentricity & the circumferential stresses Fig 37: Relationship between the force (Newtons) needed to move a 1km x 1m x 1m element of crust and the coefficient of friction at the crust/mantle interface
{"url":"https://www.tectonic-forces.org/index/a-3-estimation-of-the-magnitude-of-the-circumferential-forces-driving-tecto","timestamp":"2024-11-14T08:59:50Z","content_type":"text/html","content_length":"132375","record_id":"<urn:uuid:deee51e6-5879-4fe3-b91f-b7aee69336b7>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00170.warc.gz"}
GATE Exam Paper Production and Industrial Engineering (PI) 2023 | ENTRANCE INDIA PI: Production and Industrial Engineering General Aptitude Q.1 – Q.5 Carry ONE mark each. 1. “You are delaying the completion of the task. Send _______ contributions at the earliest.” (A) you are (B) your (C) you’re (D) yore 2. References : ______ : : Guidelines : Implement (By word meaning) (A) Sight (B) Site (C) Cite (D) Plagiarise 3. In the given figure, PQRS is a parallelogram with PS = 7 cm, PT = 4 cm and PV = 5 cm. What is the length of RS in cm? (The diagram is representative.) (A) 20/7 (B) 28/5 (C) 9/2 (D) 35/4 4. In 2022, June Huh was awarded the Fields medal, which is the highest prize in Mathematics. When he was younger, he was also a poet. He did not win any medals in the International Mathematics Olympiads. He dropped out of college. Based only on the above information, which one of the following statements can be logically inferred with certainty? (A) Every Fields medalist has won a medal in an International Mathematics Olympiad. (B) Everyone who has dropped out of college has won the Fields medal. (C) All Fields medalists are part-time poets. (D) Some Fields medalists have dropped out of college. 5. A line of symmetry is defined as a line that divides a figure into two parts in a way such that each part is a mirror image of the other part about that line. The given figure consists of 16 unit squares arranged as shown. In addition to the three black squares, what is the minimum number of squares that must be coloured black, such that both PQ and MN form lines of symmetry? (The figure is representative) (A) 3 (B) 4 (C) 5 (D) 6 Q.6 – Q.10 Carry TWO marks Each 6. Human beings are one among many creatures that inhabit an imagined world. In this imagined world, some creatures are cruel. If in this imagined world, it is given that the statement “Some human beings are not cruel creatures” is FALSE, then which of the following set of statement(s) can be logically inferred with certainty? (i) All human beings are cruel creatures. (ii) Some human beings are cruel creatures. (iii) Some creatures that are cruel are human beings. (iv) No human beings are cruel creatures. (A) only (i) (B) only (iii) and (iv) (C) only (i) and (ii) (D) (i), (ii) and (iii) 7. To construct a wall, sand and cement are mixed in the ratio of 3:1. The cost of sand and that of cement are in the ratio of 1:2. If the total cost of sand and cement to construct the wall is 1000 rupees, then what is the cost (in rupees) of cement used? (A) 400 (B) 600 (C) 800 (D) 200 8. The World Bank has declared that it does not plan to offer new financing to Sri Lanka, which is battling its worst economic crisis in decades, until the country has an adequate macroeconomic policy framework in place. In a statement, the World Bank said Sri Lanka needed to adopt structural reforms that focus on economic stabilisation and tackle the root causes of its crisis. The latter has starved it of foreign exchange and led to shortages of food, fuel, and medicines. The bank is repurposing resources under existing loans to help alleviate shortages of essential items such as medicine, cooking gas, fertiliser, meals for children, and cash for vulnerable households. Based only on the above passage, which one of the following statements can be inferred with certainty? (A) According to the World Bank, the root cause of Sri Lanka’s economic crisis is that it does not have enough foreign exchange. (B) The World Bank has stated that it will advise the Sri Lankan government about how to tackle the root causes of its economic crisis. (C) According to the World Bank, Sri Lanka does not yet have an adequate macroeconomic policy framework. (D) The World Bank has stated that it will provide Sri Lanka with additional funds for essentials such as food, fuel, and medicines. 9. The coefficient of x^4 in the polynomial (x − 1)^3 (x − 2)^3 is equal to _______. (A) 33 (B) −3 (C) 30 (D) 21 10. Which one of the following shapes can be used to tile (completely cover by repeating) a flat plane, extending to infinity in all directions, without leaving any empty spaces in between them? The copies of the shape used to tile are identical and are not allowed to overlap. (A) circle (B) regular octagon (C) regular pentagon (D) rhombus PI: Production and Industrial Engineering Q.11 – Q.35 Carry ONE mark Each 11. Given matrices B is skew-symmetric matrix of A. B[13] is (A) −3 (B) −2 (C) 2 (D) 3 12. The non-linear differential equation from the following options is 13. The power series expansion of a function is given as for 0 < x ≤ 1. The values of constants b and c, respectively, are (A) −1/2 and 1/3 (B) 1/2 and −1/3 (C) −1 and 1/2 (D) 1 and −1/2 14. Three unbiased coins are tossed. Provided that at least two outcomes are tails, the probability of having all three outcomes as tails is (A) 1/8 (B) 1/4 (C) 1/3 (D) 1/2 15. Two plane parallel surfaces exchange heat by thermal radiation. A radiation shield is placed in between at equal distance from the two surfaces to reduce heat transfer. All surfaces are black with infinite length and width. The ratio of heat transfer rate between surfaces with and without radiation shield is (A) 1/2 (B) 1/4 (C) 1/6 (D) 1/8 16. As per the ANSI marking system, a grinding wheel with alumina as abrasive is designated as 51 A 36 K 5 V 23 Here, K indicates that (A) abrasive used in the wheel is aluminum oxide (B) hardness of the wheel is medium (C) bonding material of the wheel is shellac (D) structure of the wheel is dense 17. The combination of Directrix and Generatrix in a machining operation is shown in figure. The surface produced is (A) cylindrical (B) planar (C) helical (D) parabolic 18. In NC machine, the function of interpolator is to (A) compute and maintain the tool feed rate (B) compute and maintain the velocity of the slide (C) generate warning signal based on the error (D) generate reference signals prescribing the shape of the produced part 19. Vacuum in the machining zone is an essential requirement for (A) Electric Discharge Machining (B) Chemical Machining (C) Electro Chemical Machining (D) Electron Beam Machining 20. The qualitative method of forecasting amongst the given options is (A) Linear Regression (B) Weighted Moving Average (C) Delphi (D) Exponential Smoothing 21. Transformation matrix to translate a point P from (10, 15) to (15, 25) is 22. A copper rod of 200 mm diameter and 400 mm length is extruded to the final diameter of 100 mm. The extrusion ratio is (A) 1 (B) 2 (C) 3 (D) 8 23. A symbol for surface texture parameters is shown in figure. The difference between maximum and minimum values of surface roughness (R[a]) is (A) 0.499 μm (B) 0.508 μm (C) 0.762 μm (D) 1.524 μm 24. A thin cylinder has length L, diameter d, and thickness t. It is made of a material with modulus of elasticity E and Poisson’s ratio μ. When the cylinder is subjected to an internal pressure P, the change in length is 25. Creep of mild steel at elevated temperature involves (A) elastic deformation under constant load (B) elastic deformation under dynamic load (C) plastic deformation under constant load (D) plastic deformation under dynamic load 26. Number of minimum control points required to generate a quadratic B-Spline curve is (A) 2 (B) 4 (C) 8 (D) 16 27. The Euler’s method is used to solve The step size is 0.1. The approximate value of y(0, 1) is _____ (round off to 2 decimal places). 28. A solid circular disk of 0.025 m thickness is used as flywheel. The density of the disk material is 7800 kg/m^3 and the mass moment of inertia of the disk about its center is 4.36 kg-m^2. The radius, in m, of the disk is _____ (round off to 2 decimal places). 29. The standard time for completing a job on a machine is 10 minutes. Number of machines available is 5, each machine is available for 300 hours/month, and average machine utilization is 80 %. The maximum number of jobs that can be produced in a month is _____ (in integer). 30. Travel details of two persons P and Q travelling from city X to city Y are given as The positive difference in value of travel between the two modes is _____ (in integer). 31. A wooden cubical block of side 0.1 m has specific gravity (SG) of 0.75. It is held submerged in a pool of oil and water by a massless rigid wire as shown in figure. The density of water is 1000 kg/m^3 and acceleration due to gravity is 9.8 m/s^2. The tension, in N, in the wire is _____ (round off to 2 decimal places). 32. Under steady state conditions, superheated steam enters the turbine with enthalpy, h[1] = 3200 kJ/kg and wet steam leaves the turbine at pressure p[2 ]= 0.1 bar. The heat loss is 100 kJ/kg and work output is 1000 kJ/kg. Kinetic and potential energies for inflow and outflow are neglected. At pressure 0.1 bar, the enthalpy of saturated liquid is 200 kJ/kg and the enthalpy of vaporization is 2400 kJ/kg. The dryness fraction of the steam at the exit of the turbine is _____ (round off to 2 decimal places). 33. The total number of nonconformities is 420 from 30 samples. The size of each sample is 100. The lower control limit for the control chart for number of nonconformities is _____ (round off to 2 decimal places). 34. Two metal sheets are joined using resistance spot welding. A welding current of 4500 A is applied for 0.2 s. The effective contact resistance at the sheet interface is 400 × 10^−6 Ω. The thermal efficiency of the welding process is 50 %. The amount of heat, in J, used for producing a spot weld is _____ (in integer). 35. A metal rod of diameter 14 mm is subjected to a tensile test. After the test, its cross-sectional diameter at the fractured end is 12 mm. The ductility, in %, is _____ (round off to 2 decimal Q.36 – Q.65 Carry TWO marks Each 36. Given, z(x, y) = e^x – 2y, where x(t) = e^t and y(t) = e^−^t. All the variables are real. The total differential dz/dt is (A) −z(x + 2y) (B) −z(x – 2y) (C) z(x + 2y) (D) z(x – 2y) 37. Two cards are drawn one after the other from a regular deck of 52 playing cards without replacement. The probability that the drawn cards are of different suits is (A) 39/51 (B) 13/52 (C) 2/52 (D) 2/51 38. Match the machine elements with their functions. (A) P – 3, Q – 2, R – 1 (B) P – 3, Q – 1, R – 2 (C) P – 2, Q – 1, R – 3 (D) P – 1, Q – 3, R – 2 39. A massless beam is fixed at one end and supported on a roller at other end. A point force P is applied at the midpoint of the beam as shown in figure. The reaction at the roller support is (A) 5P/16 (B) 2P/3 (C) 4P/9 (D) 9P/25 40. Six jobs (1, 2, 3, 4, 5, 6) undergo drilling, followed by reaming operation. The time required for each operation is given as The sequence of processing the jobs, using the Johnson’s rule, is (A) 4 – 1 – 6 – 3 – 5 – 2 (B) 4 – 6 – 1 – 5 – 3 – 2 (C) 2 – 1 – 6 – 3 – 5 – 4 (D) 2 – 1 – 3 – 6 – 5 – 4 41. Match the engineering materials at room temperature with the given crystal structures. (A) P – 3, Q – 4, R – 1, S – 2 (B) P – 2, Q – 1, R – 4, S – 3 (C) P – 2, Q – 4, R – 1, S – 3 (D) P – 3, Q – 1, R – 4, S – 2 42. Match the recording techniques used in method study with the most appropriate application areas. (A) P – 3, Q – 4, R – 2, S – 1 (B) P – 3, Q – 1, R – 2, S – 4 (C) P – 2, Q – 4, R – 3, S – 1 (D) P – 2, Q – 1, R – 3, S – 4 43. Match the products to be manufactured with the given metal working processes. (A) P – 3, Q – 4, R – 1, S – 2 (B) P – 2, Q – 4, R – 1, S – 3 (C) P – 4, Q – 3, R – 2, S – 1 (D) P – 4, Q – 3, R – 1, S – 2 44. The dual of a LPP is Minimize w = 4w[1] + 6w[2] + 5w[3] – w[4] subject to, and w[i] ≥ 0 for i = 1, 2, 3, 4 The objective function of the primal is (A) Maximize z = −3x[1] + 2x[2] (B) Maximize z = x[1] + x[3] (C) Maximize z = x[3] – x[4] (D) Maximize z = 3x[1] – 2x[2] 45. There are four locations (P, Q, R, S) and four factors to be considered for setting up a facility. The scores (on a scale of 0 to 10, with 10 being the maximum) for the given locations and the weight assigned to each factor are given as The best location for setting up the facility is (A) P (B) Q (C) R (D) S 46. As per the Fe-C phase diagram, the microstructure of plain carbon steel with 0.4 wt.% carbon at room temperature contains (A) proeutectoid ferrite and pearlite (B) proeutectoid cementite and pearlite (C) ferrite and austenite (D) austenite and cementite 47. The most appropriate process for manufacturing of plastic chair is (A) injection molding (B) extrusion (C) calendering (D) blow molding 48. The following equation is solved using Newton-Raphson method x^5 – 15 = 0 with initial value x[0] = 1.0. The value of first approximation x[1] is ________ (round off to 2 decimal places). 49. For the matrix 50. The work sampling study, with 100 observations, revealed 25 % idle time of a worker. The number of observations required for ±10 % accuracy and 95.45 % confidence level is _____ (in integer). 51. The information of two products P and Q is given as The value of 52. A system shown in figure has seven components with reliabilities R[A] = 0.96, R[B] = 0.92, R[C] = 0.94, R[D] = 0.89, R[E] = 0.95, R[F] = 0.88, and R[G] = 0.90. The reliability of the system is _____ (round off to 2 decimal places). 53. Details of activities of a project are given as The time required, in days, to complete the project along the critical path is _____ (in integer). 54. A system has 10 essential components. Each component has an exponential time-to-failure distribution with constant failure rate of 0.04 per 4000 hours. The mean-time-to-failure, in hours, of the system is _____ (in integer). 55. A CNC water jet cutting machine is used to cut a straight slot between the points (2, 1) and (10, 10) on the XY plane (dimensions are in mm). If the feed rate is 1.5 mm/s, the time, in s, required to machine the slot following the shortest path, is _____ (round off to 2 decimal places). 56. In an orthogonal cutting with a tool of rake angle 0°, the value of the cutting force is two times of the thrust force. The coefficient of friction is _____ (round off to 1 decimal place). 57. The solidification of a cubical casting of side 100 mm takes place with volumetric solidification shrinkage and solid contraction of 10 % each. The shape of the casting is retained on cooling to room temperature. The side of the cubical cast, in mm, at room temperature is _____ (round off to 2 decimal places). 58. A straight turning operation is carried out at the feed rate of 100 mm/min using a single point cutting tool with signature 8 – 8 – 5 – 5 – 7 – 25 – 0 (ASA). The spindle speed is 1600 rpm. The roughness, in μm, of the machined surface in terms of peak-to-valley height is _____ (round off to 2 decimal places). 59. An arc welding operation is performed at 25 V and 200 A at welding speed of 2 mm/s. The heat used for melting is 80 % of the total heat generated. The unit melting energy of the metal to be joined is 10 J/mm^3. The volume of the weld metal produced per unit time, in mm3/s, is _____ (in integer). 60. Water flows through a pipe of diameter 0.02 m. The Reynolds number of the flow is 1000. The pipe is heated from outside with a uniform heat flux. The flow and heat transfer in the pipe are steady and fully developed. The thermal conductivity of water is 0.66 W/(m-K). The convective heat transfer coefficient, in W/(m^2-K), is _____ (round off to 2 decimal places). 61. In an ideal air-standard Brayton cycle, air enters the compressor at 100 kPa and 300 K. Thermal efficiency of the cycle is 50 %. The heat added to air is 1000 kJ/kg. Air has constant specific heat c[p] =1.0 kJ/(kg-K) and γ =1.4. Air temperature, in K, at the turbine inlet is ______ (round off to 2 decimal places). 62. A key of width and height of 6 mm each is used to fix a gear on a shaft of 20 mm diameter. The shaft is used to transmit 10 kW power at 600 rpm to the gear. Permissible shear stress in the key is 80 N/mm^2, while compressive stress in the key is neglected. The minimum length of the key, in mm, is _____ (round off to 2 decimal places). 63. A cylindrical casting has 10 cm diameter and a mass of 12.56 kg. The material density is 7.85 × 10^−3 kg/cm^3. The value of exponent ‘n’ is 2 and solidification time is 12 min. The Chvorinov’s constant, in min/cm^2, is ______ (round off to 2 decimal places). 64. A pair of spur gears is designed to transmit 20 kW power at a pitch line velocity of 10 m/s. Diameter of the driving gear is 0.5 m. The tangential force, in N, between the driver and the driven gear is _____ (in integer). 65. Two products, P and Q , are sold in the ratio of 10:1. The fixed cost is Rs. 1,40,000. The selling price of P is Rs. 10/unit and Q is Rs. 40/unit. The variable costs of P and Q are Rs. 5/unit and Rs. 20/unit, respectively. The break-even point in terms of revenue, in Rs., is _____ (in integer).
{"url":"https://entranceindia.com/tag/gate-exam-paper-production-and-industrial-engineering-pi-2023/","timestamp":"2024-11-07T08:00:57Z","content_type":"text/html","content_length":"165435","record_id":"<urn:uuid:25f9faa4-a4d3-4311-a84c-7858afd96002>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00125.warc.gz"}
Arithmetic | Smart Start Montessori School | Toronto top of page Combination of Quantities & Symbols Arithmetic is the science of computing using positive real numbers, more specifically, the process of addition, subtraction, multiplication, and division. Little children are naturally attracted to the science of numbers. Mathematics, like language, is the product of the human intellect and the nature of a human being. Mathematics arises from the human mind as it comes into contact with the world and as it contemplates the universe and the factors of time and space. It undergirds the effort of the human to understand the world in which he lives. All humans exhibit this mathematical propensity, even little children. It can therefore be said that humankind has a mathematical mind. By age four, the child is ready for the language of mathematics. A series of preparations have been made. The child has established internal order, they have developed precise movement, established the work habit, and is able to follow and complete a work cycle. The child has the ability to concentrate and has learned how to follow a process and use symbols. The mathematical material gives the child his own mathematical experience and to arrive at individual work. There are some teacher-directed activities but these are followed by individual activities. The Exercises in arithmetic are grouped and there is some sequential work and some parallel work. The first group is Numbers through Ten. The experiences in this group are sequential. When the child has a full understanding of numbers through ten, the second group, The Decimal System, can be introduced. The focus here is on the hierarchy of the decimal system and how the system functions. It also starts the child on the Exercises of simple computations, which are the operations of arithmetic. The third group will be started when the decimal system is well underway. This third group, Counting beyond Ten, includes the teens, the tens, and linear and skip counting. The fourth group is the memorization of the arithmetic tables. This work can begin while the later work of the decimal system and the counting beyond ten exercises are continued. The fifth group is the passage to abstraction. The Exercises in this group require the child to understand the process of each form of arithmetic and to know the tables of each operation. There is again an overlap. The children who know the process and tables for addition can begin to do the addition for this group. They may still be working on learning the tables for the other operations and these will not be taken up until they have readiness. Numbers to Ten • Number Rods • Sandpaper Numerals • Number Rods and Number Cards • Spindle Boxes • Cards and Counters • Memory Game of Numbers Decimal System • the Introduction to the Golden Beads • Golden Beads - counting Through the Hierarchies • Counting Golden Beads • Introduction to the Large Number Cards • Large Number Cards - Counting Through the Hierarchies • Identifying Large Number Cards • Formation of Large Number Cards with Beads • Combination of Golden Beads and Large Number Cards (Bird's Eye View) Teens and Tens - Teens • Formation of Quantities 11-19 with ten bars and short bead stair • Formation of Symbols 10 - 90 with teen boards • Combination of Quantities and Symbols to form 11-19 Teens and Tens - Tens • Formation of Quantities 10-99 with ten bars • Formation of Symbols 10 - 90 with ten boards • Combination of Quantities and Symbols 10-90 with ten bars and boards • Formation of 11 - 99 with ten bars, unit beads, and ten boards Teens and Tens - Counting • Linear Counting • Skip Counting Decimal System Continued • Changing Exercise • Addition with the Golden Beads • Multiplication with the Golden Beads • Subtraction with the Golden Beads • Division with the Golden Beads • The Stamp Game - Introductory Exercise • The Stamp Game - Addition • The Stamp Game - Multiplication • The Stamp Game - Subtraction • The Stamp Game - Division Exploration and Memorization of Tables • Addition Snake Game • Addition Strip Board • Addition Charts 3,4,5, and 6 (blank) • Negative Snake Game • Negative Strip Board • Subtraction Chart 2 and 3 (blank) • Multiplication Bead Bar Layout • Multiplication Board • Multiplication chars 3, 4, and 5 (blank) • Unit Division Board • Division Charts 1 and 2 Passage to Abstraction • The Dot Game • The Small Bead Frame bottom of page
{"url":"https://www.smartmontessori.com/arithmetic","timestamp":"2024-11-11T16:42:05Z","content_type":"text/html","content_length":"640872","record_id":"<urn:uuid:db1e5212-913b-4900-9959-759d68a54326>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00555.warc.gz"}
Radiosystem AB dummy load > Polytech.nu Radiosystem AB dummy load A long time ago I bought a Radio System AB (Ericsson) 50 Ohms dummy load with N-connector for testing purposes. It’s a dummy load from the former NMT450 mobile phone network. It’s likely these dummy load were fitted to the front of the RS460 cavity band pass filter. The footprint, paint color and structure and four holes of the dummy load match perfectly to the unpainted square on the front of the RS460 filter boxes. Possibly the dummy load were used to dissipate sideband energy… The nice thing about these dummy loads it equipped with a BNC test point. The internal chip resistor has a test point which is connected to the BNC socket. This is for example convenient for frequency measurements. The RF power is dissipated into the chip resistor and a relative low power test signal can be used to feed a frequency counter. I guess the chip resistor can handle at least 100 Watts of RF power when fitted to a heatsink. The dummy load is used for seven years now. And since I bought a Rigol DSA815-TG spectrum analyzer recently, this is a nice opportunity to determine the power reduction at the test point. I normalized the test setup and measured the loss. I would expect the signal reduction is linear, but I was wrong. The signal reduction of the test point is logarithmic. The lower the frequency, the more the signal reduction at the test point and vice versa. Luckily I updated the firmware of the Rigol DSA815-TG spectrum analyzer and has now the option to show the frequency scale logarithmic instead of linear. Therefore the bent signal reduction line on the screen appears quite linear! Using the CSV export function I exported the trace information to Microsoft Excel. After fiddling with the graph function I added a “trend” line that fits very nice onto the trace of the measurement. There’s a convenient option to show the formula of the trend line. Since the trend line fits very well it’s safe to say that the formula fits therefore also the signal reduction. The determined formula is this: [dB] = (7,527 × f [MHz])-72,44. If a frequency in MHz is multiplied with 7,527 and the result is reduced with 72,44, the answer is the signal reduction in dB relative to the input. For convenience I calculated the signal reduction for the regular ham bands. The results are shown below. The dummy load is used for approximately 460 MHz and should work fine up to 3 GHz. I cannot verify this since my equipment can measure “only” op to 1,5 GHz. Assuming the formula fits until 10 GHz, the numbers should fit. But it’s possible the dummy load will behave different at “extreme” frequencies (above 3 GHz). So beware not to blow up the input of your measurement devices. Normally a spectrum analyzer is very sensitive for overload at the input. Be sure to calculate/verify the test point signal strength at certain frequencies and input power. For example the Rigol DSA815-TG can handle +20dBm at max. If a HF transmitter (1,8…28 MHz) is tested, the least signal reduction at the test point is -47,4 dB. The maximum signal at the input port if the dummy load is +67,4 dBm. The maximum input power is therefore 5.495 Watts. Since the dummy load can handle approximately 100 Watts, it’s safe to say that your DSA815 won’t be damaged testing an 100 Watts HF transmitter. At 435 MHz the signal reduction is 26,7 dB. The maximum input is +20 dBm, therefore the maximum power input is +20 dBm + 26,7 dB = +46,7 dBm = 46,8 W! There are several 70 cm transceivers and amplifiers delivering more than 50 Watts, so beware that this could (or will?!) damage the DSA815 applying more than 46,8 Watts to the dummy load input. 1,8 MHz = -68,0 dB 3,5 MHz = -63,0 dB 5,3 MHz = -59,9 dB 7 MHz = -57,8 dB 10 MHz = -55,1 dB 14 MHz = -53,6 dB 18 MHz = -50,7 dB 21 MHz = -49,5 dB 25 MHz = -48,2 dB 28 MHz = -47,4 dB 50 MHz = -43,0 dB 145 MHz = -35,0 dB 435 MHz = -26,7 dB 1,3 GHz = -18,47 2,3 GHz = -14,2 dB* 3,4 GHz = -11,2 dB * 5,8 GHz = -7,2 dB * 10,0 GHz = -3,1 dB * * These numbers are based on the formula and not verified in a real world measurement.
{"url":"https://polytech.nu/index.php?artikel=216","timestamp":"2024-11-04T20:54:17Z","content_type":"text/html","content_length":"11703","record_id":"<urn:uuid:c5d2c696-9ff9-42e1-9890-d2f35000ff3e>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00267.warc.gz"}
higher heating valu in dry basis for coal The heating value (or energy value or calorific value) of a substance, usually a fuel or food (see food energy), is the amount of heat released during the combustion of a specified amount of it.. The calorific value is the total energy released as heat when a substance undergoes complete combustion with oxygen under standard chemical reaction is typically a hydrocarbon or other ... WhatsApp: +86 18838072829 For comparison the lower heating values calculated from default value (HHV) from B415 are also shown in Table 1. Table 1 Heating Values 1 ASTM D 586507a: Standard Test Method for Gross Calorific Value of Coal and Coke 2 ASTM E 71187: Standard Test Method for Gross Calorific Value of RefuseDerived Fuel by the Bomb Calorimeter WhatsApp: +86 18838072829 Is the heating value on a higher (gross) or lower (net) heating value basis? ... Note that if chlorine is included with the trace element analysis, it is typically on a wholecoal dry basis. WhatsApp: +86 18838072829 Operating data from a coal inspection system in China were obtained. The range of the higher heating value (HHV, dry ashfree basis) of the data was from to KJ/g; 124 coal samples were obtained. The coal proximate analysis and ultimate analysis are based on ASTM D 3172 and ASTM D 3176, respectively. WhatsApp: +86 18838072829 x HHV: Higher heating value of a fuel in Btu/lb, also known as gross heating value. x LHV: Lower heating value of a fuel. Equal to HHV latent heat of vaporization of water formed from hydrogen in the fuel or moisture in the fuel. Also known as net heating value. x XS air: Excess air percent. This is the amount of air WhatsApp: +86 18838072829 Moistureand ashfree basis Calculate the higher heating value and the lower heating value on asreceived basis. Problem 2 An RDF has a hydrogen and moisture content of 7% and 20% by mass respectively, and a higher heating value of 18000 kJ/kg. Calculate its lower heating value on a dry basis. WhatsApp: +86 18838072829 Given the following ultimate analysis on the dry basis of a coal sample, 80% C, % H2, % O2, % N2, % S, and % ash, and that the heats of combustion of carbon, hydrogen, and sulfur are 33 700, 141 875, and 9300 kJ/kg, respectively, determine the higher heating value of the coal in kJ/kg. Answer: 26476 kJ/kg WhatsApp: +86 18838072829 1 ar = as received basis 2 daf = dry ash free basis 3 gar = gross as received . Cofiring Biomass Coal Page..4 ... value in which the latent heat of the moisture in the coal is subtracted from the higher heating value. When the coal is burnt, water in the coal will be evaporated, and additional water that is ... WhatsApp: +86 18838072829 The heating value, also called calorific value or heat of combustion, defines the energy content of a biomass fuel and is one of the most important characteristic parameters for design calculations and numerical simulations of thermal systems no matter how biomass is used, direct combustion [1] or cofiring with other fuels ( coals) [1], [2 ... WhatsApp: +86 18838072829 Higher heating value (HHV) and composition of biomass, coal and other solid fuels, are important properties which define the energy content and determine the clean and efficient use of these fuels. There exists a variety of correlations for predicting HHV from ultimate analysis of fuels. WhatsApp: +86 18838072829 Higher or lower heating value base efficiency. The HHV value, ... where the exergy value results in MJ/kg for dry coal in ashfree basis. In addition, the chemical exergy of high fixed carbon containing coals varies between 7 and MJ/kg dry ashfree basis. The specific chemical exergy of coal with wet basis can be calculated with ... WhatsApp: +86 18838072829 A unified correlation for computation of higher heating value (HHV) from elemental analysis of fuels is proposed in this paper. This correlation has been derived using 225 data points and ... WhatsApp: +86 18838072829 Higher Heating Value (HHV) is defined as heat released when burning a gram of biomass in a calorimeter bomb. HHV and proximate analysis can be carried out to standardized rules. The content of fixedcarbon is calculated on dry basis as a percentage according to the difference from among 100, ash and volatile matter (see Fig. 1). WhatsApp: +86 18838072829 NationalBureauofStandards Ubrar PEC8J366 NBS Referencebooknottobe 1 library, TECHNICALNOTE 299 CalculationoftheHeatingValue ofaSampleofHighPurityMethane forUseasaReferenceMaterial 4 Q Z / «***"'°°. o 1* T Q USDEPARTMENTOFCOMMERCE NationalBureauofStandards WhatsApp: +86 18838072829 The higher heating value (HHV) is the total amount of heat energy that is available in the fuel, including the energy contained in the water vapor in the exhaust gases. The lower heating value (LHV) does not include the energy embodied in the water vapor. ... In the case of drybasis calculations, the moisture content is equal to the mass of ... WhatsApp: +86 18838072829 After undergoing the process, MSWs of various sizes and forms became slump materials that were easily dryable to a powdery product with a 10% moisture content and an average heating value of 20 MJ/kg (dry basis), which is equal to that of lowgrade subbituminous coal. Because the MSW used in the experiments contained a significant amount of ... WhatsApp: +86 18838072829 The energy content of biomass is therefore mostly dependent on net calorific value on dry basis and moisture. High ash content decreases net calorific value on dry basis. ... (LHV), higher heating value (HHV), and gross heating value (GHV). ... Take the lower heating value of coal to be 26,535 kJ/kg. Assume combustion takes place at 2200K with ... WhatsApp: +86 18838072829 coals. 1 Generally, bituminous coals have heating values of 10,500 to 14,000 British thermal units per pound (Btu/lb) on a wet, mineralmatterfree basis. 2 As mined, the heating values of typical bituminous coals range from 10,720 to 14,730 Btu/lb. 3 The heating values of subbituminous coals range WhatsApp: +86 18838072829 The heating value (HV), also known as the higher heating value (HHV), determines how much energy a fuel contains in a specific process. The maximum possible heat potential of a solid fuel is indicated by the HHV, which is typically expressed as a unit of energy per mass, frequently on a dry weight basis. WhatsApp: +86 18838072829 The higher heating values of various wood species on a dry basis vary by < 15%. The higher heating values of softwoods are 2022MJkg1 and of hardwoods 19 21 MJ kg1 In earlier workj l, formulae were developed for estimating the higher heating values of fuels from different lignocellu losic materials, using their ultimate analysis data. WhatsApp: +86 18838072829 (wt% dry basis) HHV (MJ/kg, Measured ) (daf) HHV (MJ/kg, Calculated) Difference Corn stover ... Azvedo JLT. Estimating the higher heating value of biomass fuels from basic analysis data. Biomass Bioenergy 2005;28:499507 [11] Demirbas A. Combustion properties and calculation of higher heating values of diesel WhatsApp: +86 18838072829 The set of data used has values of FC, VM, and ASH ranges from to %, %, and .%, respectively, by dry weight basis. The actual higher heating values of biomass materials considered for the analysis varies from to MJ/kg. As discussed in Section 2, the constant terms are determined for all 20 proposed ... WhatsApp: +86 18838072829 Among these characteristics, the higher heating value is fundamental for allocating the feedstock for specific uses. ... and fixed carbon (FC) content on dry basis are the inputs in the present study. The output is the data regarding the HHV of biomass. ... Chelgani S. C. Estimation of gross calorific value based on coal analysis using ... WhatsApp: +86 18838072829 Also referred to as energy or calorific value, heat value is a measure of a fuel's energy density, and is expressed in energy (joules) per specified amount ( kilograms). Heat value. Hydrogen (H 2) 120142 MJ/kg. Methane (CH 4) 5055 MJ/kg. Methanol (CH 3 OH) MJ/kg. Dimethyl ether DME (CH 3 OCH 3) WhatsApp: +86 18838072829 The calculated and experimental heating values (dry ash free basis) strongly depend on the ash content of the coal. The difference in the binding energy of the elements in coals having varying degrees of coalification and petrographic composition plays a relatively weak role. WhatsApp: +86 18838072829 Table 2 E lemental composi tion on a dryashfree basis (dafb), proximate analysis on a dry basis (db), and higher heating valu e (HHV) of the feedstock. VM represen ts volatile matter an d FC ... WhatsApp: +86 18838072829 It indicates the amount of heat that is released when the coal is burned. The Calorific Value varies on the geographical age, formation, ranking and location of the coal mines. It is expressed as kJ/kg in the SI unit system. Power plant coals have a Calorific Value in the range of 9500 kJ/kg to 27000 kJ/ coal when mined contains moisture. WhatsApp: +86 18838072829 weight and extractivefree dry weight basis to find out relationship, if any, between ash and extractive content with the higher heating value. Moisture in biomass generally decreases its heating value. Ash and extractive content are ... developed for coal, such as the Dulong equation (Perry and Chilton, 1973), typically WhatsApp: +86 18838072829 The net calorific value (or lower heat value), being the energy available after complete combustion 1 Wet basis is commonly used for biomass, not dry basis which is used more for wood products. varies with moisture content of the wood biomass (Figure 1). 5 M oisture content (S ource: R efs 2 On a unit weight basis one tonne of wood of a lower WhatsApp: +86 18838072829 Gross Net Calorific Values Gross CVor 'higher heating value' (HHV) is the CV under laboratory conditions. Net CVor 'lower heating value' (LHV) is the useful calorific value in boiler plant. The difference is essentially the latent heat of the water vapour produced. Conversions Units From kcal/kg to MJ/kg multiply kcal/kg by WhatsApp: +86 18838072829
{"url":"https://www.mineralyne.fr/Jul_26/higher-heating-valu-in-dry-basis-for-coal.html","timestamp":"2024-11-12T22:43:18Z","content_type":"application/xhtml+xml","content_length":"28574","record_id":"<urn:uuid:11deebc9-69bf-4b8d-ae20-0a30df1fd1e9>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00852.warc.gz"}
Excellent power In the tale, there was a great wizard at SUSTech called SUSTechDaFaShi with excellent power. With his great power, DFS led a scourge army with \(n\) ultimate soldiers. Each soldier has 2 attributes, hp and attack. What's more, DFS can cast at most \(p\) times of spell1 to make one soldier double its hp, and at most \(q\) times of spell2 to make one soldier's attack equal to its hp. DFS wants to know the maximum sum of the attack of all his soldiers after casting two kinds of spells.
{"url":"https://acm.sustech.edu.cn/onlinejudge/problem.php?id=1221","timestamp":"2024-11-14T08:05:46Z","content_type":"text/html","content_length":"8786","record_id":"<urn:uuid:5b0a9497-0f4d-474c-b3f2-a1c3536d3cc5>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00059.warc.gz"}
Computer Science Academic Year 2019/2020 - 1° Year - Curriculum Elaborazione Dati e Applicazioni and Curriculum Sistemi e Applicazioni Teaching Staff Credit Value: Taught classes: 72 hours 24 hours Term / Semester: 1° and 2° Learning Objectives • ALGEBRA LINEARE E GEOMETRIA □ Knowledge and understanding: The aim of the course is to give the basics of linear algebra and analytic geometry that are useful to interpret and describe problems in computer science. □ Applying knowledge and understanding: the student will acquire the skills necessary to deal with typical issues of discrete mathematics, solving classical problems where standard techniques are required. □ Making judgements: the student will be able to independently develop solutions to the main problems of the course by choosing the most convenient strategy based on the learning outcomes. □ Communication skills: the student will acquire the necessary communication skills by acquiring the specific language of linear algebra and geometry. □ Learning skills: The aim of the course is to provide the study method to the students, the forma mentis and the logical rigor that will be needed in order to solve autonomously new problems that may arise during a work activity. • STRUTTURE DISCRETE Knowledge and understanding: Studenta will acquire the basic notions of discrete mathematical structures that are at the basis of computer science, and that are used to interpret and describe the relative problems. Applying knowledge and understanding: Students will acquire the necessary skills to tackle and analyze, from a theoretical point of view, typical problems in computer science and, in particular, in the design of algorithms, and in solving problems in which the application of standard techniques is required. Making judgements: students will be able to independently develop solutions to the main problems covered in the course, by choosing the most convenient strategy based on the results learned. Communication skills: students will acquire the necessary communication skills and the specific language of discrete mathematics, and its use in computer science. Learning skills: the aim of the course is to provide students with the study method, the mindset and the logical rigor that will be necessary for them to be able to face and solve new problems that may arise during their work as computer scientists. Course Structure • ALGEBRA LINEARE E GEOMETRIA Traditional (teacher up front) lessons. • STRUTTURE DISCRETE The course will be taught in the classroom for a total of 48 hours. Detailed Course Content • ALGEBRA LINEARE E GEOMETRIA 1. Calculation of matrix algebra and linear systems. Matrices. Matrix operations. Linear systems *. Calculating the inverse matrix. Determinant of a square matrix and its properties. Rank of a matrix. Cramer's theorem and Rouche-Capelli. 2. Vector spaces. Subspaces and transactions between them. Subspace sum. Linear independence and linear dependence. Bases and dimension of a vector space. Eigenvalues and eigenvectors. Characteristic polynomial. Research of the eigenvalues and eigenspaces associated with them. Similarity between matrices. Diagonalizable matrices. 1. Vector Calculus. Applied vectors. Decomposition theorem. Scalar product and cross product. Mixed product. Free vectors. 2. Linear geometry in the plane. Lines in the plane and their equations. Parallelism and squareness. Intersection between plane and lines. Homogeneous coordinates in the plane. Bundles of 3. Isometries. Translation, rotation around a point. Reflection. 4. Linear geometry in space. Planes and lines in space and their equations. Parallelism and squareness. Intersection between planes, between a plane and a straight line and between lines. homogeneous coordinates in space. Improper points and lines in space. Bundles of plans. 5. Conics and their associated matrices. Orthogonal Invariants. Irreducible and degenerate conics. Discriminant of a conic. Canonical reduction of a conic. Parabolas, Ellipses, Hyperbolas equations, center and axis. Circumferences, Tangents of conics. • STRUTTURE DISCRETE The course, for a total of 6 CFU, is divided into 4 parts of different sizes, as outlined below. Each of the parts ends with one or more case studies of particular importance. Part I: Sets and Relations (1 CFU): Preliminaries. Sets and operations between them. Venn diagrams, power set, Cartesian product, set partition. Set relationships. Reflexive, symmetrical, transitive relations. Equivalence Case Study: Families of closed sets and the Union-Closed conjecture Part II: Graphs and Trees (2.5 CFU): Basic definitions. Complete graphs. Complement of a graph. Bipartite graphs. Graph representations. Isomorphisms. Eulerian graphs and Hamiltonian graphs. The problem of the traveling salesman and the weighted graphs. is. Coloring of graphs and chromatic number. f. Definition of Tree and characterization. Binary trees and their properties. d. Planar graphs, Euler formula and characterization of flatness. CASE STUDIES: The problem of Crossing Number. Examples of combinatorial problems on computationally complex graphs and their characterization as an optimal permutation search. Part III: Combinatorics and Discrete Probability (1 CFU): Permutations, combinations, arrangements (simple and with repetition). Discrete probability. Definition of probability. Uniform probability and relative properties. Conditional probability. Stochastic independence. Case Study: The Monty Hall Paradox Part IV: Fundamentals of Number Theory and Demonstration Methods (1.5 CFU): Natural numbers, relative integers, rational. Divisibility and Prime Numbers. Unique factorization theorem of integers. Theorem of the rest. Direct and indirect demonstrations. Examples of classical theorems and numerical algorithms. Numerical, summation and production sequences. Principle of mathematical induction and proof of fundamental properties. Modular arithmetic. CASE STUDIES: The 3x + 1 problem and Goldbach's Conjecture. Textbook Information • ALGEBRA LINEARE E GEOMETRIA 1. Appunti in rete alla pagina web https://andreascapellato.wordpress.com/didattica-2/ 2. S. Giuffrida, A. Ragusa, Corso di Algebra Lineare, Il Cigno Galileo Galilei Roma. 3. G. Paxia, Lezioni di Geometria, Cooperativa Universitaria Libraria Catanese. 4. S. Greco, B. Matarazzo, S. Milici, Matematica Generale, G. Giapichelli Editore, 2016. • STRUTTURE DISCRETE No specific reference text. The teacher will provide students with the slides of the course and anything else necessary and sufficient to complete and deepen the topics discussed in class. All teaching material will be published on Studium.
{"url":"https://dmi.unict.it/courses/l-31/course-units/?cod=13546","timestamp":"2024-11-14T18:49:20Z","content_type":"text/html","content_length":"31053","record_id":"<urn:uuid:94d4ebf6-ddd3-4df0-80de-bfe9d44d1188>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00679.warc.gz"}
Lightning Network Statistics - Bitcoin Visuals Want to sponsor BitcoinVisuals.com? Get in touch Number of nodes with and without channels. Unique = channels connecting nodes directly for the first time. Duplicate = channels between nodes that are already connected. Cumulative bitcoin capacity across all channels. Channel capacity statistics. The maximum number of hops required to reach another node (among shortest paths). Density is a ratio of actual / potential channels. Transitivity is the ratio of potential triangles present. A value of 1 means every path of length 2 loops back into a triangle. Clustering coefficient is the ratio of interconnections between a node's peers. A value of 0 means the node is a hub, and none of its peers are connected. A value of 1 means the node forms a clique with its peers. A cut channel (aka cut edge, or bridge) is a channel between two nodes that connects different components of the network. This channel's removal would prevent other nodes from having a path. A cut node (aka cut vertex) is the same idea, except it's a crucial node instead of a channel. Channels involving a node. Capacity across all channels involving a node.
{"url":"https://bitcoinvisuals.com/lightning?ref=bitcoinfi.thesis.co","timestamp":"2024-11-06T12:28:38Z","content_type":"text/html","content_length":"47325","record_id":"<urn:uuid:3d883f7f-6a3d-406a-a8d4-fcff77523127>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00747.warc.gz"}
Ruled Rectangles¶ New in Version 5.7.0. Adapted from this notebook. In past versions of AMRClaw and GeoClaw, one could specify “refinement regions” as rectangles over which a minlevel and maxlevel are specified (perhaps over some time interval), as described in Allowing only rectangles made it very easy to check each cell for inclusion in a region, but is a severe restriction – often a number of rectangles must be used to follow a complicated coastline, for example, or else many points ended up being refined that did not require it (e.g. onshore points that never get wet in a GeoClaw simulation). We have introduced a new data structure called a “Ruled Rectangle” that is a special type of polygon for which it is also easy to check whether a given point is inside or outside, but that is much more flexible than a rectangle. It is a special case of a “ruled surface”, which is a surface in 3D that is bounded by two curves, each parameterized by \(s\) from \(s_1\) to \(s_2\), and with the surface defined as the union of all the line segments connecting one curve to the other for each value of \(s\) in \([s_1,s_2]\). A Ruled Rectangle is a special case in which each curve lies in the \ (x\)-\(y\) plane and either \(s=x\) for some range of \(x\) values or \(s=y\) for some range of \(y\) values. If \(s=x\), for example, then the line segments defining the surface are intervals \(y_{\ scriptstyle lower}(x) \leq y \leq y_{\scriptstyle upper}(x)\) for each \(x\) over some range \(x_1 \leq x \leq x_2\). It is easy to check if a given \((x_c,y_c)\) is in this region: it is if \(x_1 \ leq x_c \leq x_2\) and in addition \(y_{\scriptstyle lower}(x_c) \leq y_c \leq y_{\scriptstyle upper}(x_c)\). The class clawpack.amrclaw.region_tools.RuledRectangle supports a subset of ruled rectangles defined by a finite set of \(s\) values along a coordinate line, e.g. \(s[0] < s[1] < s[2] < \cdots < s[N] \) and for each \(s[k]\) two values lower[k] and upper[k]. If rr is an instance of this class then rr.s, rr.lower, and rr.upper contain these arrays. Whether s corresponds to x or y is determined by: • If rr.ixy in [1, ‘x’] then s gives a set of \(x\) values, • If rr.ixy in [2, ‘y’] then s gives a set of \(y\) values. The points specified can then be connected by line segments to define a Ruled Rectangle, and this is done if rr.method == 1 (piecewise linear). On the other hand, if rr.method == 0 (piecewise constant) then the values lower[k], upper[k] are used as the bounds for all s in the interval \(s[k] \leq s \leq s[k+1]\) (for \(k = 0,~1,~\ldots,~N-1\)). In this case the values lower[N], upper[N] are not used. This also defines a polygon, but one that consists of a set of stacked boxes. The advantage of the latter form is that it is slightly easier to check if a point is in the Ruled Rectangle since no linear interpolation is required along the edges. Also for some applications we want the Ruled Rectangle to exactly cover a contiguous set of finite volume grid cells, which has the shape of a set of stacked boxes. Some simple examples follow: Define a Ruled Rectangle by specifying a set of points: from clawpack.amrclaw import region_tools rr = region_tools.RuledRectangle() rr.ixy = 1 # so s refers to x, lower & upper are limits in y rr.s = array([1,2,4,6]) rr.lower = array([0,2,1,3]) rr.upper = array([4,5,3,6]) Setting rr.method to 1 or 0 gives a Ruled Rectangle in which the points specified above are connected by lines or used to define stacked boxes. Both are illustrated in the figure below. Note that we use the method rr.vertices() to return a list of all the vertices of the polygon defined by rr for plotting purposes. rr.method = 1 xv,yv = rr.vertices() title('With method==1') rr.method = 0 xv,yv = rr.vertices() title('With method==0') In the plots above the s values correspond to x = 1, 2, 4, 6, and the lower and upper arrays define ranges in y. If we set rr.ixy = 2 or ‘y’, then the s values will instead correspond to y = 1, 2, 4, 6 and the lower and upper will define ranges in x. This is illustrated in the plots below. rr.ixy = 2 # so s refers to y, lower & upper are limits in x rr.method = 1 xv,yv = rr.vertices() title('With method==1') rr.method = 0 xv,yv = rr.vertices() title('With method==0') Relation to convex polygons¶ Note that the polygons above are not convex, but clearly some Ruled Rectangles would be convex. Conversely, any convex polygon can be expressed as a Ruled Rectangle — simply order the vertices so that the \(x\) values are increasing, for example, and use these as the s values in a RuledRectangle with ixy=’x’. Then for each \(x\) there is a connected interval of \(y\) values that lie within the polygon (by convexity), so this defines the lower and upper values. (Or you could start by ordering vertices by increasing \(y\) values and similarly define a RuledRectangle with ixy=’y’.) So a RuledRectangle is a nice generalization of a convex polygon for which it is easy to check inclusion of an arbitrary point. Other attributes and methods¶ If the points s[k] are equally spaced then ds is the spacing between them. This makes it quicker to determine what two points an arbitrary value of \(s\) lies between when determining whether a large set of points are inside or outside the Ruled Rectangle, rather than having to search. The Ruled Rectangle defined above has unequally spaced points and the ds attribute is set to -1 in this case. Rather than specifying s, lower, and upper separately, you can specify an array slu with three columns in defining a RuledRectangle, and such an array is returned by the slu method: array([[1, 0, 4], [2, 2, 5], [4, 1, 3], [6, 3, 6]]) Here’s an example defining a RuledRectangle via slu: slu = vstack((linspace(0,14,8), zeros(8), [1,2,1,2,1,2,1,2])).T print('slu = \n', slu) rr = region_tools.RuledRectangle(slu=slu) rr.ixy = 2 rr.method = 1 xv,yv = rr.vertices() slu = [[ 0. 0. 1.] [ 2. 0. 2.] [ 4. 0. 1.] [ 6. 0. 2.] [ 8. 0. 1.] [10. 0. 2.] [12. 0. 1.] [14. 0. 2.]] rr.bounding_box() returns the smallest rectangle [x1,x2,y1,y2] containing the ruled rectangle: If X,Y are 2D numpy arrays defining (x,y) coordinates on a grid, then rr.mask_outside(X,Y) returns a mask array M of the same shape as X,Y that is True at points outside the Ruled Rectangle. x = linspace(0,3,31) y = linspace(0,16,81) X,Y = meshgrid(x,y) Z = X + Y # sample data values to plot M = rr.mask_outside(X,Y) Zm = ma.masked_array(Z,mask=M) read and write, and instantiating from a file¶ rr.write(fname) writes out the slu array and other attributes to file fname, and rr.read(fname) can be used to read in such a file. You can also specify fname when instantiating a new Ruled rr2 = region_tools.RuledRectangle('RRzigzag.data') array([[ 0., 0., 1.], [ 2., 0., 2.], [ 4., 0., 1.], [ 6., 0., 2.], [ 8., 0., 1.], [10., 0., 2.], [12., 0., 1.], [14., 0., 2.]]) Here’s what the file looks like: lines = open('RRzigzag.data').readlines() for line in lines: 2 ixy 1 method 2 ds 8 nrules 0.000000000 0.000000000 1.000000000 2.000000000 0.000000000 2.000000000 4.000000000 0.000000000 1.000000000 6.000000000 0.000000000 2.000000000 8.000000000 0.000000000 1.000000000 10.000000000 0.000000000 2.000000000 12.000000000 0.000000000 1.000000000 14.000000000 0.000000000 2.000000000 Note that this Ruled Rectangle has equally spaced points and so ds = 2 is the spacing. rr.make_kml() can be used to create a kml file that can be opened on Google Earth to show the polygon defined by rr. This assumes that x corresponds to longitude and y to latitude and is designed for GeoClaw applications. Several optional arguments can be specified: fname, name, color, width, verbose. A GeoClaw AMR flag region¶ The figure below shows a Ruled Rectangle designed to cover Admiralty Inlet, the water way between the Kitsap Peninsula and Whidbey Island connecting the Strait of Juan de Fuca to lower Puget Sound. For some tsunami modeling problems it is important to cover this region with a finer grid than is needed elsewhere. Image('figs/RuledRectangle_AdmiraltyInlet.png', width=400) The Ruled Rectangle shown above was defined by the code below: slu = \ array([[ 47.851, -122.75 , -122.300], [ 47.955, -122.75 , -122.300], [ 48. , -122.8 , -122.529], [ 48.036, -122.8 , -122.578], [ 48.12 , -122.9 , -122.577], [ 48.187, -122.9 , -122.623], [ 48.191, -122.9 , -122.684], [ 48.221, -122.9 , -122.755]]) rr_admiralty = region_tools.RuledRectangle(slu=slu) rr_admiralty.ixy = 'y' rr_admiralty.method = 1 rr_name = 'RuledRectangle_AdmiraltyInlet' rr_admiralty.write(rr_name + '.data') rr_admiralty.make_kml(fname=rr_name+'.kml', name=rr_name) The file RuledRectangle_AdmiraltyInlet.data can then be used as a “flag region” in the modified GeoClaw code, see FlagRegions.ipynb for more details. The file RuledRectangle_AdmiraltyInlet.kml can be opened in Google Earth to show the polygon, as captured in the figure above. A simple rectangle¶ A simple rectangle with extent [x1,x2,y1,y2] can be specified as a RuledRectangle via e.g. : rectangle = region_tools.RuledRectangle() rectangle.ixy = 'x' rectangle.s = [x1, x2] rectangle.lower = [y1, y1] rectangle.upper = [y2, y2] rectangle.method = 0 This can be done for you when instantiating a RuledRectangle using: rectangle = region_tools.RuledRectangle(rect=[x1,x2,y1,y2]) For example: rectangle = region_tools.RuledRectangle(rect=[1,3,5,6]) xv,yv = rectangle.vertices() Defining a Ruled Rectangle covering selected cells¶ The module function region_tools.ruledrectangle_covering_selected_points can be used to generate a Ruled Rectangle that covers a specified set of points as compactly as possible. This is useful for generating AMR refinement regions that cover a set of points where we want to enforce a fine grid without including too many other points. First generate some sample data: x_edge = linspace(-5,27,33) y_edge = linspace(-7,7,15) x_center = 0.5*(x_edge[:-1] + x_edge[1:]) y_center = 0.5*(y_edge[:-1] + y_edge[1:]) X,Y = meshgrid(x_center,y_center) pts_chosen = where(abs(X-Y**2) < 4., 1, 0) pts_chosen = where(logical_or(X>24, Y<-4), 0, pts_chosen) pcolorcells(x_edge,y_edge,pts_chosen, cmap=cm.Blues, edgecolor=[.8,.8,.8], linewidth=0.1) The array pts_chosen has been defined with the value 1 in the dark blue cells in the figure above, and 0 elsewhere. In this case the points can be covered by a Ruled Rectangle with ixy = ‘y’ most efficiently, giving a polygon that covers only the selected points. Note we use method = 0 to generate a Ruled Rectangle that covers all the grid cells: rr = region_tools.ruledrectangle_covering_selected_points(x_center, y_center, pts_chosen, ixy='y', method=0, pcolorcells(x_edge,y_edge,pts_chosen, cmap=cm.Blues, edgecolor=[.8,.8,.8], linewidth=0.1) xv,yv = rr.vertices() plot(xv,yv,'r',label='Ruled Rectangle') legend(loc='lower right') title('With ixy=2 and method=0'); Extending rectangles to cover grid cells RuledRectangle covers 80 grid points By contrast, if we use ixy = ‘x’, the minimal Ruled Rectangle covering the selected cells will also cover a number of cells that were not selected: rr = region_tools.ruledrectangle_covering_selected_points(x_center, y_center, pts_chosen, ixy='x', method=0, pcolorcells(x_edge,y_edge,pts_chosen, cmap=cm.Blues, edgecolor=[.8,.8,.8], linewidth=0.1) xv,yv = rr.vertices() plot(xv,yv,'r',label='Ruled Rectangle') legend(loc='lower right') title('With ixy=1 and method=0'); Extending rectangles to cover grid cells RuledRectangle covers 129 grid points Note that if we use method = 1 then the Ruled Rectangle covers the center of each cell but not the entire grid cell for cells near the boundary: rr = region_tools.ruledrectangle_covering_selected_points(x_center, y_center, pts_chosen, ixy='y', method=1, pcolorcells(x_edge,y_edge,pts_chosen, cmap=cm.Blues, edgecolor=[.8,.8,.8], linewidth=0.1) xv,yv = rr.vertices() plot(xv,yv,'r',label='Ruled Rectangle') legend(loc='lower right') title('With ixy=2 and method=1'); RuledRectangle covers 63 grid points With ixy=’x’ and method=1 the Ruled Rectangle degenerates in the upper right corner to a line segment that covers only the cell centers: rr = region_tools.ruledrectangle_covering_selected_points(x_center, y_center, pts_chosen, ixy='x', method=1, pcolorcells(x_edge,y_edge,pts_chosen, cmap=cm.Blues, edgecolor=[.8,.8,.8], linewidth=0.1) xv,yv = rr.vertices() plot(xv,yv,'r',label='Ruled Rectangle') legend(loc='lower right') title('With ixy=1 and method=1'); RuledRectangle covers 100 grid points Example covering the continental shelf¶ The figure below shows a Ruled Rectangle that roughly covers the continental shelf offshore of Vancouver Island and Washington state. The region may need to be refined to a higher level than the deeper ocean in order to capture shoaling tsunami waves interacting with the shelf topography. This region was defined by first selecting a set of points from etopo1 topography satisfying certain constraints on elevation using the marching front algorithm described in Marching Front algorithm, and then using the region_tools.ruledrectangle_covering_selected_points function to build a Ruled Rectangle covering these points.
{"url":"http://www.clawpack.org/v5.10.x/ruled_rectangles.html","timestamp":"2024-11-08T21:47:13Z","content_type":"text/html","content_length":"66254","record_id":"<urn:uuid:7719679a-174d-4c67-a5e6-86ba53525134>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00278.warc.gz"}
PyTutorial | Adding Each Element of Two Lists in Python Last modified: Oct 30, 2024 By Alexander Williams Adding Each Element of Two Lists in Python Adding elements of two lists in Python is common in data processing. Here’s how to efficiently add each element in one list to the corresponding element in another. Using zip() for Element-Wise Addition zip() allows us to pair elements from two lists by index, making it ideal for element-wise addition. Here’s a basic example: list1 = [1, 2, 3] list2 = [4, 5, 6] sum_list = [a + b for a, b in zip(list1, list2)] [5, 7, 9] In this example, each element in list1 is added to the corresponding element in list2. The result is a new list. Using List Comprehension List comprehension offers a concise way to handle element-wise addition when lists are of the same length. It combines looping and addition into one line. For more details, check out Loop Moves to Next Element in List in Python. Handling Lists of Different Lengths If the lists are of unequal lengths, using a padding method like filling missing values with 0 ensures all elements are accounted for: from itertools import zip_longest list1 = [1, 2, 3] list2 = [4, 5] sum_list = [a + b for a, b in zip_longest(list1, list2, fillvalue=0)] [5, 7, 3] In this case, the shorter list is padded with 0 using zip_longest(), allowing all elements to be added correctly. Using NumPy for Larger Lists NumPy is ideal for mathematical operations on large lists or arrays. With numpy.add(), you can add lists element-wise quickly: import numpy as np list1 = [1, 2, 3] list2 = [4, 5, 6] sum_list = np.add(list1, list2) [5 7 9] Using np.add() is both efficient and concise for handling large data arrays. To learn more, see NumPy’s documentation. Python offers flexible ways to add elements of two lists, from zip() for basic tasks to NumPy for high-performance operations. Choose the method that suits your data size and requirements best.
{"url":"https://pytutorial.com/adding-each-element-of-two-lists-in-python/","timestamp":"2024-11-03T12:37:11Z","content_type":"text/html","content_length":"35887","record_id":"<urn:uuid:90661485-6682-43b5-a587-4d70578f55f8>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00336.warc.gz"}
How to instantiate an object at GPS coordinate? Hi folks, I would like to instantiate an object at some given coordinate. For example, I have GPS coordinates of my house and I would like to place an object at that location, then i would like to know if I’m near the object (unity renders the object close to the camera) or far (renders the object away to the camera - or invisible). How can i do that? (I’m also using vuforia for AR Camera) Convert the GPS coordinates into X/Z values (you can set the Y value based on the get terrain height at X/Z). Basically 1° degree Latitude/Longitude at the equator is approximately 111 Kilometers (111,000 meters). The math to convert Degrees° Minutes’ Seconds" values to decimal meters is rather straight forward. Take a look at these links… The problem you’ll encounter is that GPS coordinates are spherical, that is, the farther North/South you get away from the equator the smaller the distances for X are (as they converge to a single point at the poles). The question you have to ask yourself is how BIG is the area you are working with? If a small area, then you really don’t have to factor this compression in. If you are doing something that covers the whole globe then you’ll have to do the math (remembering to factor in the diameter of your world). Once you have the X/Y/Z values calculated the instantiation call is rather simple. Please let me know if you need assistance with that (your initial post was a little vague on what you were really asking for help with). I’m also struggling with this. I managed already to get x and z coordinates from a gps coordinate, but i think they are wrong because the don’t regard the initial device rotation, or in other words, i think i’m getting the absolute value of the gps location, but if a place is like 10 meters away from my current place, i don’t know if it’s in the positive or in the negative side of the axis. @DiGiaCom-Tech can you help me with this?
{"url":"https://discussions.unity.com/t/how-to-instantiate-an-object-at-gps-coordinate/120298","timestamp":"2024-11-03T00:41:18Z","content_type":"text/html","content_length":"31182","record_id":"<urn:uuid:1d8ca334-7136-4cee-b1c3-2b8ff400c97c>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00877.warc.gz"}
TMA4255 Applied Statistics, spring 2014 [27/06/14] Dear students, as you know the exam result in TMA4255 are available at studweb. I'm very impressed with your exam papers in TMA4255. I have seen many very good solutions - and given out many good grades. But sadly there were also some Fs. The grade frequency for TMA4255 ended up to be: A: 7% B: 27% C: 29 % D: 27% E: 0% F: 10% In most courses we write a grading document, which is to be made available together with the grading: Gradings.pdf. Here you see in detail how the scores are given and the grade scale used. [30/05/14] Todays exam problem and tentative solutions are available here: english version of exam, bokmål version of exam, nynorsk version of exam and TENTATIVE SOLUTION. [25/05/14] The minutes from the (3.) meeting with the reference group is available at: Minutes from reference group meeting 27.03.14 and sluttrapport: Final rapport from reference group [19/05/14] The (third) meeting with the reference group will be on Thursday May 20 at 11.15 in room 734, 7th floor Sentral building II. If you want to join the reference group or the meeting just email the lecturer. If you have feedback on textbook, lectures, exercises, voting, services you would like, etc. contact the reference group or the lecturer. Minutes (referat) from the meeting will be posted. [13/05/2014] The guidance are cancelled today, 8.15-11 room 734. You are able to sit there and work, but the lecturer will not be there. You can email or come at the lecturer office if you have any [12/05/2014] Lecturer/TA available before the exam: May 12 at 9.15-11 and May 13 at 08.15-10 in room 734, 7th etg., sentralbygg 2, elevator next to Tapir Food store. This is a room where you may sit together and work - and get help. Or, email/stop by the lecturer or TA at any time if you have questions. [09/05/2014] DOE project grades: The grading is finished, and the results are found at RESULTS If you do not find your result you either must have entered the wrong candidate number, or I have not received your project - or I have made a mistake. Please contact me if think there is a mistake. The maximum score on the DOE project is 20. I'm utterly impressed with the creativity, good writing and good quality of the DOE projects. I feel that you have learned a lot from working with the [04/04/14] Week 18: last lectures with summing up etc. • Tuesday 13.15-15.00 in F2 (not K27) • Wednesday 09.15-11.00 in EL2 [03/04/14] Hand in (last day) of project is Friday April 10. If you can not make this deadline you need to inform me about it and make a plan for when you may hand-in the project. Remember to read the info at Compulsory project, on this Message page and also study the last slides 26-30 from lecture L11: L12.pdf . [03/04/14] Although there are no lecture next week (15), the TA will show up at the supervision hours (Tuesdays 15.15-17.00) and you may ask Erik (and Vaclav) about exercises or DOE project. Send me an email or call me if you have any questions regarding DOE project. [03/04/14] The minutes from the (2.) meeting with the reference group is available at: Minutes from reference group meeting 27.03.14 If you have comments or observations that have not been made, please contact the reference group or lecturer. [02/04/14] The last week of lecture with summing up and concluding remarks, exam preparation has been moved from week 17 to week 18. Last day of lecture may be Wednesday April 30. Let me know if there are any strong objections against this. Time and place will be posted. [02/04/14] Summing up note for chi-square tests (Goodness of fit, test of independence and test of homogeneity): what to do when expected value is less than 5:Notat: Chi-square tests, expected value less than 5 [02/04/14] Here is the workflow for the DOE that was in lectured 19.02.14: 1. Set up factorial design (respons, factors: full 2^k or repeat) 2. Randomize runs 3. Perform experiments, write response values into worksheet 4. Fit the full model. If you do not have replicates⇒suggest reduced model 5. Fit reduced model (or full if replicates):⇒look at RESIDUALS to assess model fit⇒if transformation of y is needed: lig y, √y, 1/y ⇒ refit 6. Assess significance 7. Interpret your results. Main and interaction effect plot. [25/03/14] The (second) meeting with the reference group will be on Thursday March 27 at 13.00 in room 738, 7th floor Sentral building II. If you want to join the reference group or the meeting just email the lecturer. If you have feedback on textbook, lectures, exercises, voting, services you would like, etc. contact the reference group or the lecturer. Minutes (referat) from the meeting will be posted. [20/03/14] Remember see and read wiki.math.ntnu page for the course for your DOE project. Under the "Message" tab: - Two messages [13/2/14], about residuals and the points for the DOE Under the "Compulsory project” tab: - Practical information - Structure may follow the main (relevant) points made in the 'Keywords" given. Etc. Under the "Lectures" tab: -Important to discuss: 1. Each experiments is a genuine run replicate, that is reflects the total variability of the experiment. (each trial should be a performed independently and constitue a full trial) 2. The run order is random (randomized) so that potential external factors are not confused (confounded) with experimental factors. Look at the Workflow of the DOE. The effect of orthogonality. [19/03/14] It is time for second meeting between the lecturer, TA and the reference group. This meeting will be this week (week 12) or next week (week 13). We need input on everything you might mean something about, including the text book, the lectures, the exercises, the voting, the www-pages, project, the information in general++. [6/03/14] New license key for Minitab: If you have installed Minitab on your own Windows computer, Minitab will before March 1 need a new license file. The license file is called 'minitab_17.0.lic' and can be downloaded from https://www.progdist.ntnu.no/ under Minitab and saved to your computer (the default place to save is 'c:\program files (x86)\minitab'). If Minitab at start-up does not find this new lisence file it will ask you where you saved the file, and then you just navigate to where you saved the file. If you experience problems with this please contact the Orakel Support Services: orakel@ntnu.no or telephone 91500. If you are running MINITAB by remote desktop to cauchy the new license is already installed, and will also be installed at Fraggle. [03/03/14] There will be no lecture tomorrow, Tuesday 04/03 (week 10). In addition, there will not be any supervision on the Thursday exercise. [26/02/14] Remember that in the interaction plots, if the lines are parallel this indicates no interaction effects. If they are not parallel, this indicates interaction effects. They do not necessarily need to cross for there to be an interaction effect, only not parallel. [26/02/14] We encourage that the report on the project is written in English (to learn how to write scientific papers). But can be handed in Norwegian since both lecturer and TA, who will do the gradings (0-20 points), are Norwegian. [25/02/14] For those of you that do project with 3 factors and need to do two repetitions, remember randomizations for all 16 experiments and you may also need to do blocking of the repetitions. Talk to me before doing experiments with repetition. [24/02/14] Fractional Factorial Designs will not be part of the curriculum/reading list of the course. So chapter 12 from Box, Hunter og Hunter, Statistics for experimenters are not part of the reading list of the course. The Note: Notat: Factorial experiments at two levels are still on the reading list for DOE. [24/02/14] You find examples of old projects here: Compulsory project. You are allowed to do the same experiments as the ones done previous years (the examples given). You will do the experiment (maybe choosing your own low and high levels of factors) and getting your own response values and results. [24/02/14] Although there are no exercise this week (week 9), the TA will show up at the supervision hours (Tuesdays 15.15-17.00 and Thursday 14.15-16.00) and you may ask Erik and Vaclav about their opinion on your plan for your DOE project. If you want to talk to me, just send me an email or show up at my office door - I'm surely in on Thursday at 10-11 (office hrs) and in week 9 at the lecture hrs (Tuesday 13.15-15.00 and Wednesday 9.15-11.00). [24/02/14] Now you have to the end of week 15 to finish your DOE project - deadline for hand-in is April 11 (week 15). If you can not make this deadline you need to inform me about it and make a plan for when you may hand-in the project. Remember to read the info at Compulsory project and also study the last slides 26-30 from lecture L11: L12.pdf [18/02/14] This week (8) we will go through the last part of DOE. First we finish the DOE-note, available from the lectures tab L10, Note. What is left in the note is from page 7 -15, variance estimation, replicated experiments and blocking. Then we turn to the last topic of DOE, which is how to perform fractions of a full experiment. Then we will use ch 12 from the famous book by Box, Hunter and Hunter, Statistics for experimenters. Due to copyright issues I have filed the pdf in Its learning (TMA4255/Handouts folder), so you need to go there - or email me at anna.holand@math.ntnu.no if you have troubles finding the file. [13/2/14] Plotting of residuals and normality assumption: * When solving linear regression models (using the least square method) we assume that the random errors (epsilon_i) are linear independent, Normally distributed with equal variance (linear, homogenous, normal and not correlated) * We can not observe this random error term, and we therefore use the residuals (e_i=y_i-y^_i) to test the normality assumption from the fitted regression. * We plot the residuals (often studentized residuals) to see if they are normally distributed and have equal variance (and linear). • We plot the qq plot for residuals and histogram of residuals to look for normality. • Plot residuals against predicted value of response y and regressors to look for equal variance. • Plot residuals against observation order to look for non random error (in time). * If we find that the residuals are not Normally distributed and have equal variance. This means that our model do not fit the data, so data do not fit the normal distribution assumption for the error term. You can also look at the Darling Anderson test to see how good your nornmal assumption is. * This may lead to for instance biased standard errors that may lead to making wrong conclusions in hypothesis testing. *Approach to handling data with non-normal random errors : - Direct transformation of data to make random errors approximately normal (transformation may often help both non-normality and non-constant variation of the random errors) - Box-Cox transformation of Y (find this in MINITAB): this transformation finds the best transformation to simultaneously stabilize the random error variance, make the distribution of the error term more normally distributed, and straighten the relationship between Y and the regressors (x`s). * You can then fit and validate the model in the transformed variables. * If you transform the response variable Y, you may want to transform the predicted values back into the original units using the inverse of the transformation applied to the response variable. [13/2/14] The compulsory project in TMA4255 is described in detail at the “Compulsory project” tab on the left. Maybe you already may think of what your experiment should be, since we now understand the terms “response”, “normally distributed” and “covariate”. A factor is a covariate that is discrete - and we will only consider factors with two possible values (e.g. male/female, high/low The project consists of designing, performing and analysing a socalled factorial experiment - which means that we do multiple linear regression with 3 or 4 covariates that are factors with two levels each. This is NOT an observational study - you should collect the observations yourself. As an example assume I want to study factors that affect the height of plant sprouts (“from seed to a plant”) 1) You need to perform an multiple regression experiment consisting of 16 trials - that is, n=16 observations. For the plan example: you need to plant 16 seeds. 2) The response that is measure should be continuous, so that the response itself or a transformation of the response in a multippel regression model can be seen to be normally distributed. It is also possible to assume that a response with at least 7 ordered categories can be seen as continuous. (If you have for example a taste panel, you have to have at least 7 ordered categories of how good the for example cake tasted, 1."Bad", 2."Not so good"…..7"Good". Project earlier year used independent rating scores (1-20) from (mean of) two persons were used as taste-result. These individuals tasted the cake "blind" so that the look of the cake should not interfere with the taste). For the plan experiment we assume that this is the height of each plant after 5 days of growing. 3) You choose 3 or 4 factors with two levels each that might influence your response (it is possible to choose more factors, but then you need to do a socalled fractional factorial design). For the plant experiment we may choose covariate (factor) A=two different types of seed (sunflower or broccoli seeds), B=watering (coffee or water), C=growth medium (cotton or soil). 4) If you choose 3 factors you need to perform all possible combinations of the 3 factors two times (2*2*2=8), if you choose 4 factors you need to perform all possible combinations only once (2*2*2*2 =16). If you choose more than 4 factors you need to study the “factional factorials” to find out which of the possible combinations you perform. For the plant experiment we then have 2 plants that will have the same combinations for factors A,B and C. 5) A very important aspect of performing the 16 trials is that the trials should be independent and performed in a randomized order. For the plant experiment this just means that we have 16 plant which are handled in the same manner. This is often the difficult part! For other types of experiments, like if you want to test yourself by measuring your pulse rate when running up a hill with different factors being with/without heavy backpack, with/without sports shoes, running backwards or forwards, the order that you perform the experiments will matter - and then you need to do this in random order. The ideal would to do one measurement each day, but that might be difficult, and then you instead need to do a few runs every day. Then there might be a day effect - which may be handled with more advance theory that we call blocking - that means, that you need to wait until after we have covered this topic. 6) Each trial should be a performed independently of the other 15, and constitue a full trial. I will try to explain this with a common mistake done. Assume you want to study factors that affect the taste of muffins. Then you really need to make 16 different muffins that are made from 16 doughs and baked in the oven one at a time. If you only make one dough and bake all muffins at the same time you have much less variability than the experiment in real life will have. If you for practical reasons need to handle more than one trial together this is called blocking and should be taken into account (you then first need to learn about blocking). I suggest you now look at the list of experiments that students have done previously - listed on the www-address above - and talk to me or the TA or the Student assistant before you start performing the trials. You may talk to us at the exercise supervision, or come to my office (office hrs Thursdays 10-11, or email me if you want to come at another time). [04/2/14] Due to sickness, the lecture (today) Tuesday 4/2 is cancelled. [03/2/14] There will be an extra teaching assistent on the regular Tuesdays exercises and an extra exercise day at Thursdays 14:15 -16:00 at H3 Datasal 411Rill (H3 Datasal 411Rill Høgskoleringen 3, http://www.ntnu.no/studieinformasjon/rom/?gr=1&exact=1&romnr=358411). The extra TA on Tuesdays and the Thursday exercise will be regulated after how many students that shows up at the exercises. [03/2/14] The room MA24 proved to be not so good for the lectures on Tuesdays 13.15-15-00. So we move back to the original room K27. The Tuesdays lectures will therefore be held at room K27. [30/1/14] The minutes from the meeting with the reference group is available at: Minutes from reference group meeting 30.01.14 If you have comments or observations that have not been made, please contact the reference group or lecturer. In the end of the minutes there are action point some of these are: To accommodate the different knowledge background, extra guidance will be given. This will be given on Wednesdays 8.30-9.00, where students before hand can send an email about some problems/questions and the lecturer will try to answer this on Wednesday mornings. Excercises: An extra TA will attend the regular exercise on Tuesdays (for some weeks), and in addition (if a datalab is available) an extra exercise supervision will be given on Thursdays 14.15-16.00. The extra supervision will depend also on how many who shows up at the exercises in the future. [27/1/14] The meeting with the reference group will be on Tuesday January 30 at 13.00 in room 1126, 11.etg. sentralbygg 2. If you want to join the reference group or the meeting just email the lecturer. If you have feedback on textbook, lectures, exercises, voting, services you would like, etc. contact the reference group or the lecturer. Minutes (referat) from the meeting will be posted. [24/1/14] An additional time/day for exercise supervision will be arranged. I will talk with you on the Tuesdays lecture to try to establish what time/day would be preferable for this extra supervision. For those who can not be on the Tueseday lecture please email me if you want to give an preferable time for this extra supervision. [24/1/14] If you have feedback on textbook, lectures, exercises, voting, services you would like, etc. contact the reference group or the lecturer. Minutes (referat) from the meeting will be posted. [24/1/14] It is time for a meeting between the lecturer, TA and the reference group. This meeting will be next week (week 5). We need input on everything you might mean something about, including the text book, the lectures, the exercises, the voting, the www-pages, the information in general++. Additional agenda will be: Extra guidance: it is desirable with a questions-session, say 8.30 to 9 Wednesdays where I will try to answer questions. Preferable email before the session with questions from the chapters in the textbook so far in the course. Or is it enough with office hours 10-11 Thursdays where you can come and ask questions or email me if you want to come at another time. [23/1/14] A note regarding finding critical values of the F-distribution in MINITAB with an example on a hypothesis test, testing if the variances of two different populations are equal or not, can be found here Finding critical values of the F-distribution in MINITAB. [23/1/14] The room K27 for the lectures on Tuesdays 13.15-15-00 have proven to be to small for us and I have found another lecture room that hopefully will be better. This is room MA 24 (Grønnbygget (romnr: 003), http://www.ntnu.no/kart/index.php?id=6062). The Tuesdays lectures from (including) week 5 will therefore be held at MA24. [8/1/14] A introduction to MINITAB (exercise 1) and R (exercise 2) will be given week 3 (and 4) Tuesdays 15.15-16.00 (17.00) in R 52. [6/1/14] First lecture is Tuesdays January 7 13.15-15 in K27 (Kjemi 1 building)
{"url":"https://wiki.math.ntnu.no/tma4255/2014v/beskjed","timestamp":"2024-11-06T01:48:50Z","content_type":"text/html","content_length":"35041","record_id":"<urn:uuid:8c261310-9e57-48dd-8ad8-97ca711bcab0>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00874.warc.gz"}
Mathematicians Prove 30-Year-Old André-Oort Conjecture | Quanta Magazine In a striking proof posted in September, three mathematicians have solved a 30-year-old problem called the André-Oort conjecture and advanced the centuries-long quest to understand the solutions of polynomial equations. The work draws on ideas that span nearly the breadth of the field. “The methods used to approach it cover, I would say, the whole of mathematics,” said Andrei Yafaev of University College London. The new paper begins with one of the most basic but provocative questions in mathematics: When do polynomial equations like x^3 + y^3 = z^3 have integer solutions (solutions in the positive and negative counting numbers)? In 1994, Andrew Wiles solved a version of this question, known as Fermat’s Last Theorem, in one of the great mathematical triumphs of the 20th century. In the quest to solve Fermat’s Last Theorem and problems like it, mathematicians have developed increasingly abstract theories that spark new questions and conjectures. Two such problems, stated in 1989 and 1995 by Yves André and Frans Oort, respectively, led to what’s now known as the André-Oort conjecture. Instead of asking about integer solutions to polynomial equations, the André-Oort conjecture is about solutions involving far more complicated geometric objects called Shimura varieties. Many mathematicians have worked on the problem in the last few decades. In 2014, Yafaev and Bruno Klingler proved it, but with a catch. Their result depended on the Riemann hypothesis being true — but that famously hard question remains unsolved. The new paper by Jonathan Pila of the University of Oxford, Ananth Shankar of the University of Wisconsin and Jacob Tsimerman of the University of Toronto resolves this gap with a definitive solution. It also further confirms the talent of Tsimerman, 33, who is widely regarded as one of the top mathematicians of his generation. “Jacob Tsimerman has this ability to understand everything,” said Yafaev. Various Varieties The André-Oort conjecture is about algebraic varieties, which at their most basic level are just the set (or graph) of all the solutions to one polynomial equation or a collection of them. A circle of radius 1 is a variety: The coordinates of its points are solutions to the polynomial x^2 + y^2 = 1. The line y = 0 is also a variety. And the intersection of these two — the points (1, 0) and (−1, 0) — is yet a third variety nested within the first two. The varieties at the heart of the André-Oort conjecture are an important type, called Shimura varieties. While there are a few different types of Shimura varieties, the simplest ones relate to critical mathematical objects called elliptic curves (equations like y^2 = x^3 + 1, or y^2 = x^3 + 3x + 2). The points on these Shimura varieties each encode a recipe for constructing an elliptic curve. But there are other, more complicated Shimura varieties whose structure is less straightforward. Pinning down information about them has been difficult. “You really know little about the structure of general-type Shimura varieties,” said Ruochuan Liu of Peking University. The André-Oort conjecture is a question about just that: What is the basic structure of Shimura varieties, which themselves underpin a lot of modern mathematics? Special Points Remember, varieties can live within varieties, the way the non-tangent intersection of a line and a circle creates a subvariety of two points. The André-Oort conjecture asks about varieties that live inside Shimura varieties. It does this by focusing on particular elements of Shimura varieties. On a Shimura variety, each point represents another variety, such as an elliptic curve. Some of those curves have more symmetry than others, and those that do are represented on the Shimura variety by what mathematicians call “special points.” The André-Oort conjecture is about how these special points are distributed. Imagine starting with a Shimura variety. Think of it as a three-dimensional shape. Next, etch a curve across its surface. This curve is a variety, though not necessarily a Shimura variety. But according to the André-Oort conjecture, if that curve is constantly running into special points, it must itself be a Shimura “It’s sort of a very clean geometric interpretation,” said Tsimerman. Diana Tyszko/University of Toronto Stated differently, the André-Oort conjecture makes predictions in the case where the etched curve is not a Shimura variety. Then, there’s a ceiling on the number of special points it can possibly run into. Mathematicians have been trying for years to verify the ceiling predicted by André and Oort. At the end of the 2000s, Jonathan Pila made significant progress toward establishing it when he introduced a new method for counting special points. Pila’s Progress To prove the André-Oort conjecture, Pila needed to get a rough idea of the number of special points on a variety. He did this by assigning points a quantity known as a “height.” Height measures how complicated a particular point, or value, is. Consider the numbers 10 and 10.000017. On the one hand, they’re very similar, but on the other, they’re obviously very different. “These are both rational numbers, and they are pretty close in size. But one of them is a lot more complex than the other,” said Shankar. One way to quantify this complexity is to convert these numbers into simplified fractions. The height of a number is the absolute value of the numerator or denominator of that fraction — whichever is larger. As a fraction, the number 10 is the same as $latex \frac{10}{1}$, so the height of 10 is 10. But the simplest way of rewriting 10.000017 as a fraction is as $latex \frac{10,000,017} {1,000,000}$. This makes its height around 10 million. There are other ways of measuring height as well (a fact that turned out to be a principal challenge for the authors of the new work). To prove the André-Oort conjecture, Pila needed to show that a non-Shimura variety living inside a Shimura variety doesn’t have a lot of special points. Height is a helpful tool for doing this. To see why, think about the rational numbers whose height is at most 2. Even though there are infinitely many rational numbers with an absolute value of 2 or less, only seven of them are simple enough to have a height that’s 2 or less: 0, 1, $latex \frac{1}{2}$, 2, or one of their negatives. In general, if you can prove that the heights on a set of rational numbers have a ceiling, you’ve proved that the set has a finite number of elements. In this way, height is quite different from absolute value. Pila took advantage of this difference by identifying each special point on a Shimura variety with a different real number. He then proved that those associated real numbers weren’t too complex — their heights couldn’t be too big. That meant there were finitely many real numbers associated to special points. Since each special point corresponded to a distinct real number, there could only be a finite number of special points too. Pila’s method cleverly avoided calculating heights on the Shimura variety itself. Instead, he studied the heights of real numbers and connected the real numbers to the Shimura variety. But this strategy only worked for very simple Shimura varieties. To prove the André-Oort conjecture on all Shimura varieties, he and others would need to come up with a way to measure heights directly. Universal Heights While Pila was making exciting new advances on the André-Oort conjecture, Tsimerman was still a graduate student at Princeton University. He had started working on the problem at the suggestion of his adviser, Peter Sarnak. Pila had also been a student of Sarnak’s, and when he returned to Princeton in 2009 to share his new findings, he and Tsimerman hit it off. “Him and I have been working on it ever since,” said Tsimerman. The biggest obstacle they faced was in braiding together the many different ways of measuring height. For example, sometimes mathematicians define the size of a number by looking at its prime factors instead of its absolute value. To answer the André-Oort conjecture, the authors first needed to translate each of these definitions of complexity to Shimura varieties. Pila and Tsimerman made partial progress in this direction. But advancing further required complicated mathematical ideas that they were less familiar with. In particular, they needed to find a way to combine all these different ways of measuring height into a single coherent number (which would ensure that they had accounted for all the ways in which points might differ from one another). Tsimerman knew that Shankar had experience with the kind of mathematics that they needed to accomplish this and invited him to join the collaboration in August 2020. The three authors worked on the problem for several months, with halting progress. “It sometimes seemed we were close; it sometimes seemed that there were fundamental obstacles that were hard to overcome,” said Shankar. They decided to take a step back last winter, thinking they would make better progress elsewhere. A couple of months later, Shankar introduced Pila and Tsimerman to work by Michael Groechenig of the University of Toronto and Hélène Esnault of the Free University of Berlin after seeing a talk by Groechenig. He suspected that their results — plus work by Gal Binyamini and others — could help to prove that all the different notions of height merge the way the three authors needed. That hunch turned out to be correct, once Esnault and Groechenig added to their previous work. Pila, Shankar and Tsimerman then used the expanded version to prove that the heights of special points never get too big, for any type of Shimura variety. With that, a full proof of the André-Oort conjecture was within reach. “In some sense the punchline to the paper was clear, like, a year and a half ago, but then to get it working required developing this kind of complicated piece of machinery that took a long time to get right,” said Tsimerman. Pila, Shankar and Tsimerman finally posted the paper this fall. They proved that any variety living inside a Shimura variety can’t have too many special points without being a Shimura variety itself. Though reading and verifying the paper carefully will take time, mathematicians are already reflecting on its impact. If the paper’s ideas can be applied more broadly, for example, they might extend a major result from the 1980s about a problem called the Mordell conjecture — a feat that would trigger an avalanche of new results in number theory. “This is a breakthrough, definitely a breakthrough,” said Liu.
{"url":"https://www.quantamagazine.org/mathematicians-prove-30-year-old-andre-oort-conjecture-20220203/?fbclid=IwAR15XPuaJARSG4nZtQSCXrY5UGYyw8qqnzQ1evieZePVisGPtngWiHcLtV4","timestamp":"2024-11-05T06:25:49Z","content_type":"text/html","content_length":"209013","record_id":"<urn:uuid:385beee3-a6da-4193-b8f0-9c60d235fb17>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00376.warc.gz"}
Data Encryption Standard Template:Infobox block cipher The Data Encryption Standard (DES) is a block cipher that uses shared secret encryption. It was selected by the National Bureau of Standards as an official Federal Information Processing Standard (FIPS) for the United States in 1976 and which has subsequently enjoyed widespread use internationally. It is based on a symmetric-key algorithm that uses a 56-bit key. The algorithm was initially controversial with classified design elements, a relatively short key length, and suspicions about a National Security Agency (NSA) backdoor. DES consequently came under intense academic scrutiny which motivated the modern understanding of block ciphers and their cryptanalysis. DES is now considered to be insecure for many applications. This is chiefly due to the 56-bit key size being too small; in January, 1999, distributed.net and the Electronic Frontier Foundation collaborated to publicly break a DES key in 22 hours and 15 minutes (see chronology). There are also some analytical results which demonstrate theoretical weaknesses in the cipher, although they are infeasible to mount in practice. The algorithm is believed to be practically secure in the form of Triple DES, although there are theoretical attacks. In recent years, the cipher has been superseded by the Advanced Encryption Standard (AES). Furthermore, DES has been withdrawn as a standard by the National Institute of Standards and Technology (formerly the National Bureau of Standards). In some documentation, a distinction is made between DES as a standard and DES the algorithm which is referred to as the DEA (the Data Encryption Algorithm). When spoken, "DES" is either spelled out as an abbreviation (Template:IPA-en), or pronounced as a one-syllable acronym (Template:IPA-en). History of DES[ ] The origins of DES go back to the early 1970s. In 1972, after concluding a study on the US government's computer security needs, the US standards body NBS (National Bureau of Standards) — now named NIST (National Institute of Standards and Technology) — identified a need for a government-wide standard for encrypting unclassified, sensitive information.^[1] Accordingly, on 15 May 1973, after consulting with the NSA, NBS solicited proposals for a cipher that would meet rigorous design criteria. None of the submissions, however, turned out to be suitable. A second request was issued on 27 August 1974. This time, IBM submitted a candidate which was deemed acceptable — a cipher developed during the period 1973–1974 based on an earlier algorithm, Horst Feistel's Lucifer cipher. The team at IBM involved in cipher design and analysis included Feistel, Walter Tuchman, Don Coppersmith, Alan Konheim, Carl Meyer, Mike Matyas, Roy Adler, Edna Grossman, Bill Notz, Lynn Smith, and Bryant NSA's involvement in the design[ ] On 17 March 1975, the proposed DES was published in the Federal Register. Public comments were requested, and in the following year two open workshops were held to discuss the proposed standard. There was some criticism from various parties, including from public-key cryptography pioneers Martin Hellman and Whitfield Diffie, citing a shortened key length and the mysterious "S-boxes" as evidence of improper interference from the NSA. The suspicion was that the algorithm had been covertly weakened by the intelligence agency so that they — but no-one else — could easily read encrypted messages.^[2] Alan Konheim (one of the designers of DES) commented, "We sent the S-boxes off to Washington. They came back and were all different."^[3] The United States Senate Select Committee on Intelligence reviewed the NSA's actions to determine whether there had been any improper involvement. In the unclassified summary of their findings, published in 1978, the Committee wrote: Template:Quote However, it also found that Template:Quote Another member of the DES team, Walter Tuchman, stated "We developed the DES algorithm entirely within IBM using IBMers. The NSA did not dictate a single wire!"^[4] In contrast, a declassified NSA book on cryptologic history states: Template:Quote and Template:Quote Some of the suspicions about hidden weaknesses in the S-boxes were allayed in 1990, with the independent discovery and open publication by Eli Biham and Adi Shamir of differential cryptanalysis, a general method for breaking block ciphers. The S-boxes of DES were much more resistant to the attack than if they had been chosen at random, strongly suggesting that IBM knew about the technique in the 1970s. This was indeed the case; in 1994, Don Coppersmith published some of the original design criteria for the S-boxes.^[5] According to Steven Levy, IBM Watson researchers discovered differential cryptanalytic attacks in 1974 and were asked by the NSA to keep the technique secret.^[6] Coppersmith explains IBM's secrecy decision by saying, "that was because [differential cryptanalysis] can be a very powerful tool, used against many schemes, and there was concern that such information in the public domain could adversely affect national security." Levy quotes Walter Tuchman: "[t]hey asked us to stamp all our documents confidential... We actually put a number on each one and locked them up in safes, because they were considered U.S. government classified. They said do it. So I did it".^[6] Bruce Schneier observed that "It took the academic community two decades to figure out that the NSA 'tweaks' actually improved the security of DES."^[7] The algorithm as a standard[ ] Despite the criticisms, DES was approved as a federal standard in November 1976, and published on 15 January 1977 as FIPS PUB 46, authorized for use on all unclassified data. It was subsequently reaffirmed as the standard in 1983, 1988 (revised as FIPS-46-1), 1993 (FIPS-46-2), and again in 1999 (FIPS-46-3), the latter prescribing "Triple DES" (see below). On 26 May 2002, DES was finally superseded by the Advanced Encryption Standard (AES), following a public competition. On 19 May 2005, FIPS 46-3 was officially withdrawn, but NIST has approved Triple DES through the year 2030 for sensitive government information.^[8] The algorithm is also specified in ANSI X3.92,^[9] NIST SP 800-67^[8] and ISO/IEC 18033-3^[10] (as a component of TDEA). Another theoretical attack, linear cryptanalysis, was published in 1994, but it was a brute force attack in 1998 that demonstrated that DES could be attacked very practically, and highlighted the need for a replacement algorithm. These and other methods of cryptanalysis are discussed in more detail later in the article. The introduction of DES is considered to have been a catalyst for the academic study of cryptography, particularly of methods to crack block ciphers. According to a NIST retrospective about DES, The DES can be said to have "jump started" the nonmilitary study and development of encryption algorithms. In the 1970s there were very few cryptographers, except for those in military or intelligence organizations, and little academic study of cryptography. There are now many active academic cryptologists, mathematics departments with strong programs in cryptography, and commercial information security companies and consultants. A generation of cryptanalysts has cut its teeth analyzing (that is trying to "crack") the DES algorithm. In the words of cryptographer Bruce Schneier,^[11] "DES did more to galvanize the field of cryptanalysis than anything else. Now there was an algorithm to study." An astonishing share of the open literature in cryptography in the 1970s and 1980s dealt with the DES, and the DES is the standard against which every symmetric key algorithm since has been compared.^[12] Chronology[ ] Date Year Event 15 May 1973 NBS publishes a first request for a standard encryption algorithm 27 August 1974 NBS publishes a second request for encryption algorithms 17 March 1975 DES is published in the Federal Register for comment August 1976 First workshop on DES September 1976 Second workshop, discussing mathematical foundation of DES November 1976 DES is approved as a standard 15 1977 DES is published as a FIPS standard FIPS PUB 46 1983 DES is reaffirmed for the first time 1986 Videocipher II, a TV satellite scrambling system based upon DES begins use by HBO 22 1988 DES is reaffirmed for the second time as FIPS 46-1, superseding FIPS PUB 46 July 1990 Biham and Shamir rediscover differential cryptanalysis, and apply it to a 15-round DES-like cryptosystem. 1992 Biham and Shamir report the first theoretical attack with less complexity than brute force: differential cryptanalysis. However, it requires an unrealistic 2^47 chosen plaintexts. 30 1993 DES is reaffirmed for the third time as FIPS 46-2 1994 The first experimental cryptanalysis of DES is performed using linear cryptanalysis (Matsui, 1994). June 1997 The DESCHALL Project breaks a message encrypted with DES for the first time in public. July 1998 The EFF's DES cracker (Deep Crack) breaks a DES key in 56 hours. January 1999 Together, Deep Crack and distributed.net break a DES key in 22 hours and 15 minutes. 25 1999 DES is reaffirmed for the fourth time as FIPS 46-3, which specifies the preferred use of Triple DES, with single DES permitted only in legacy systems. 26 2001 The Advanced Encryption Standard is published in FIPS 197 26 May 2002 The AES standard becomes effective 26 July 2004 The withdrawal of FIPS 46-3 (and a couple of related standards) is proposed in the Federal Register^[13] 19 May 2005 NIST withdraws FIPS 46-3 (see Federal Register vol 70, number 96) April 2006 The FPGA based parallel machine COPACOBANA of the Universities of Bochum and Kiel, Germany, breaks DES in 9 days at $10,000 hardware cost.^[14] Within a year software improvements reduced the average time to 6.4 days. Nov. 2008 The successor of COPACOBANA, the RIVYERA machine reduced the average time to less than one single day. Replacement algorithms[ ] Template:Unreferenced section Concerns about security and the relatively slow operation of DES in software motivated researchers to propose a variety of alternative block cipher designs, which started to appear in the late 1980s and early 1990s: examples include RC5, Blowfish, IDEA, NewDES, SAFER, CAST5 and FEAL. Most of these designs kept the 64-bit block size of DES, and could act as a "drop-in" replacement, although they typically used a 64-bit or 128-bit key. In the USSR the GOST 28147-89 algorithm was introduced, with a 64-bit block size and a 256-bit key, which was also used in Russia later. DES itself can be adapted and reused in a more secure scheme. Many former DES users now use Triple DES (TDES) which was described and analysed by one of DES's patentees (see FIPS Pub 46-3); it involves applying DES three times with two (2TDES) or three (3TDES) different keys. TDES is regarded as adequately secure, although it is quite slow. A less computationally expensive alternative is DES-X, which increases the key size by XORing extra key material before and after DES. GDES was a DES variant proposed as a way to speed up encryption, but it was shown to be susceptible to differential cryptanalysis. In 2001, after an international competition, NIST selected a new cipher, the Advanced Encryption Standard (AES), as a replacement. The algorithm which was selected as the AES was submitted by its designers under the name Rijndael. Other finalists in the NIST AES competition included RC6, Serpent, MARS and Twofish. Description[ ] Error: Image is invalid or non-existent. For brevity, the following description omits the exact transformations and permutations which specify the algorithm; for reference, the details can be found in DES supplementary material. DES is the archetypal block cipher — an algorithm that takes a fixed-length string of plaintext bits and transforms it through a series of complicated operations into another ciphertext bitstring of the same length. In the case of DES, the block size is 64 bits. DES also uses a key to customize the transformation, so that decryption can supposedly only be performed by those who know the particular key used to encrypt. The key ostensibly consists of 64 bits; however, only 56 of these are actually used by the algorithm. Eight bits are used solely for checking parity, and are thereafter discarded. Hence the effective key length is 56 bits, and it is usually quoted as such. Every 8th bit of the selected key is discarded, i.e. positions 8, 16, 24, 32, 40, 48, 56 are removed from the 64 bit key leaving behind only the 56 bit key. Like other block ciphers, DES by itself is not a secure means of encryption but must instead be used in a mode of operation. FIPS-81 specifies several modes for use with DES.^[15] Further comments on the usage of DES are contained in FIPS-74.^[16] Overall structure[ ] Template:Unreferenced section The algorithm's overall structure is shown in Figure 1: there are 16 identical stages of processing, termed rounds. There is also an initial and final permutation, termed IP and FP, which are inverses (IP "undoes" the action of FP, and vice versa). IP and FP have almost no cryptographic significance, but were apparently included in order to facilitate loading blocks in and out of mid-1970s hardware. Before the main rounds, the block is divided into two 32-bit halves and processed alternately; this criss-crossing is known as the Feistel scheme. The Feistel structure ensures that decryption and encryption are very similar processes — the only difference is that the subkeys are applied in the reverse order when decrypting. The rest of the algorithm is identical. This greatly simplifies implementation, particularly in hardware, as there is no need for separate encryption and decryption algorithms. The ⊕ symbol denotes the exclusive-OR (XOR) operation. The F-function scrambles half a block together with some of the key. The output from the F-function is then combined with the other half of the block, and the halves are swapped before the next round. After the final round, the halves are not swapped; this is a feature of the Feistel structure which makes encryption and decryption similar The Feistel (F) function[ ] The F-function, depicted in Figure 2, operates on half a block (32 bits) at a time and consists of four stages: Error: Image is invalid or non-existent. 1. Expansion — the 32-bit half-block is expanded to 48 bits using the expansion permutation, denoted E in the diagram, by duplicating half of the bits. The output consists of eight 6-bit(8*6=48bits) pieces, each containing a copy of 4 corresponding input bits, plus a copy of the immediately adjacent bit from each of the input pieces to either side. 2. Key mixing — the result is combined with a subkey using an XOR operation. Sixteen 48-bit subkeys — one for each round — are derived from the main key using the key schedule (described below). 3. Substitution — after mixing in the subkey, the block is divided into eight 6-bit pieces before processing by the S-boxes, or substitution boxes. Each of the eight S-boxes replaces its six input bits with four output bits according to a non-linear transformation, provided in the form of a lookup table. The S-boxes provide the core of the security of DES — without them, the cipher would be linear, and trivially breakable. 4. Permutation — finally, the 32 outputs from the S-boxes are rearranged according to a fixed permutation, the P-box. This is designed so that, after expansion, each S-box's output bits are spread across 6 different S boxes in the next round. The alternation of substitution from the S-boxes, and permutation of bits from the P-box and E-expansion provides so-called "confusion and diffusion" respectively, a concept identified by Claude Shannon in the 1940s as a necessary condition for a secure yet practical cipher. Key schedule[ ] Error: Image is invalid or non-existent. Figure 3 illustrates the key schedule for encryption — the algorithm which generates the subkeys. Initially, 56 bits of the key are selected from the initial 64 by Permuted Choice 1 (PC-1) — the remaining eight bits are either discarded or used as parity check bits. The 56 bits are then divided into two 28-bit halves; each half is thereafter treated separately. In successive rounds, both halves are rotated left by one or two bits (specified for each round), and then 48 subkey bits are selected by Permuted Choice 2 (PC-2) — 24 bits from the left half, and 24 from the right. The rotations (denoted by "<<<" in the diagram) mean that a different set of bits is used in each subkey; each bit is used in approximately 14 out of the 16 subkeys. The key schedule for decryption is similar — the subkeys are in reverse order compared to encryption. Apart from that change, the process is the same as for encryption. The same 28 bits are passed to all rotation boxes. Security and cryptanalysis[ ] Template:Unreferenced section Although more information has been published on the cryptanalysis of DES than any other block cipher, the most practical attack to date is still a brute force approach. Various minor cryptanalytic properties are known, and three theoretical attacks are possible which, while having a theoretical complexity less than a brute force attack, require an unrealistic number of known or chosen plaintexts to carry out, and are not a concern in practice. Brute force attack[ ] For any cipher, the most basic method of attack is brute force — trying every possible key in turn. The length of the key determines the number of possible keys, and hence the feasibility of this approach. For DES, questions were raised about the adequacy of its key size early on, even before it was adopted as a standard, and it was the small key size, rather than theoretical cryptanalysis, which dictated a need for a replacement algorithm. As a result of discussions involving external consultants including the NSA, the key size was reduced from 128 bits to 56 bits to fit on a single In academia, various proposals for a DES-cracking machine were advanced. In 1977, Diffie and Hellman proposed a machine costing an estimated US$20 million which could find a DES key in a single day. By 1993, Wiener had proposed a key-search machine costing US$1 million which would find a key within 7 hours. However, none of these early proposals were ever implemented—or, at least, no implementations were publicly acknowledged. The vulnerability of DES was practically demonstrated in the late 1990s. In 1997, RSA Security sponsored a series of contests, offering a $10,000 prize to the first team that broke a message encrypted with DES for the contest. That contest was won by the DESCHALL Project, led by Rocke Verser, Matt Curtin, and Justin Dolske, using idle cycles of thousands of computers across the Internet. The feasibility of cracking DES quickly was demonstrated in 1998 when a custom DES-cracker was built by the Electronic Frontier Foundation (EFF), a cyberspace civil rights group, at the cost of approximately US$250,000 (see EFF DES cracker). Their motivation was to show that DES was breakable in practice as well as in theory: "There are many people who will not believe a truth until they can see it with their own eyes. Showing them a physical machine that can crack DES in a few days is the only way to convince some people that they really cannot trust their security to DES." The machine brute-forced a key in a little more than 2 days search. The next confirmed DES cracker was the COPACOBANA machine built in 2006 by teams of the Universities of Bochum and Kiel, both in Germany. Unlike the EFF machine, COPACOBANA consists of commercially available, reconfigurable integrated circuits. 120 of these Field-programmable gate arrays (FPGAs) of type XILINX Spartan3-1000 run in parallel. They are grouped in 20 DIMM modules, each containing 6 FPGAs. The use of reconfigurable hardware makes the machine applicable to other code breaking tasks as well. One of the more interesting aspects of COPACOBANA is its cost factor. One machine can be built for approximately $10,000. The cost decrease by roughly a factor of 25 over the EFF machine is an impressive example for the continuous improvement of digital hardware. Adjusting for inflation over 8 years yields an even higher improvement of about 30x. Since 2007, SciEngines GmbH, a spin-off company of the two project partners of COPACOBANA has enhanced and developed successors of COPACOBANA. In 2008 their COPACOBANA RIVYERA reduced the time to break DES to less than one day, using 128 Spartan-3 5000's. Currently SciEngines RIVYERA holds the record in brute-force breaking DES utilizing 128 Spartan-3 5000 FPGAs.^[18] Attacks faster than brute-force[ ] There are three attacks known that can break the full sixteen rounds of DES with less complexity than a brute-force search: differential cryptanalysis (DC), linear cryptanalysis (LC), and Davies' attack. However, the attacks are theoretical and are unfeasible to mount in practiceTemplate:Citation needed; these types of attack are sometimes termed certificational weaknesses. • Differential cryptanalysis was rediscovered in the late 1980s by Eli Biham and Adi Shamir; it was known earlier to both IBM and the NSA and kept secret. To break the full 16 rounds, differential cryptanalysis requires 2^47 chosen plaintexts.Template:Citation needed DES was designed to be resistant to DC. • Linear cryptanalysis was discovered by Mitsuru Matsui, and needs 2^43 known plaintexts (Matsui, 1993); the method was implemented (Matsui, 1994), and was the first experimental cryptanalysis of DES to be reported. There is no evidence that DES was tailored to be resistant to this type of attack. A generalisation of LC — multiple linear cryptanalysis — was suggested in 1994 (Kaliski and Robshaw), and was further refined by Biryukov et al. (2004); their analysis suggests that multiple linear approximations could be used to reduce the data requirements of the attack by at least a factor of 4 (i.e. 2^41 instead of 2^43). A similar reduction in data complexity can be obtained in a chosen-plaintext variant of linear cryptanalysis (Knudsen and Mathiassen, 2000). Junod (2001) performed several experiments to determine the actual time complexity of linear cryptanalysis, and reported that it was somewhat faster than predicted, requiring time equivalent to 2^39–2^41 DES • Improved Davies' attack: while linear and differential cryptanalysis are general techniques and can be applied to a number of schemes, Davies' attack is a specialised technique for DES, first suggested by Donald Davies in the eighties, and improved by Biham and Biryukov (1997). The most powerful form of the attack requires 2^50 known plaintexts, has a computational complexity of 2^50, and has a 51% success rate. There have also been attacks proposed against reduced-round versions of the cipher, i.e. versions of DES with fewer than sixteen rounds. Such analysis gives an insight into how many rounds are needed for safety, and how much of a "security margin" the full version retains. Differential-linear cryptanalysis was proposed by Langford and Hellman in 1994, and combines differential and linear cryptanalysis into a single attack. An enhanced version of the attack can break 9-round DES with 2^15.8 known plaintexts and has a 2^29.2 time complexity (Biham et al., 2002). Minor cryptanalytic properties[ ] DES exhibits the complementation property, namely that ${\displaystyle E_K(P)=C \Leftrightarrow E_\overline{K}(\overline{P})=\overline{C}}$ where ${\displaystyle \overline{x}}$ is the bitwise complement of ${\displaystyle x.}$ ${\displaystyle E_K}$ denotes encryption with key ${\displaystyle K.}$ ${\displaystyle P}$ and ${\displaystyle C}$ denote plaintext and ciphertext blocks respectively. The complementation property means that the work for a brute force attack could be reduced by a factor of 2 (or a single bit) under a chosen-plaintext assumption. DES also has four so-called weak keys. Encryption (E) and decryption (D) under a weak key have the same effect (see involution): ${\displaystyle E_K(E_K(P)) = P}$ or equivalently, ${\displaystyle E_K = D_K}$ There are also six pairs of semi-weak keys. Encryption with one of the pair of semiweak keys, ${\displaystyle K_1}$, operates identically to decryption with the other, ${\displaystyle K_2}$: ${\displaystyle E_{K_1}(E_{K_2}(P)) = P}$ or equivalently, ${\displaystyle E_{K_2} = D_{K_1}.}$ It is easy enough to avoid the weak and semiweak keys in an implementation, either by testing for them explicitly, or simply by choosing keys randomly; the odds of picking a weak or semiweak key by chance are negligible. The keys are not really any weaker than any other keys anyway, as they do not give an attack any advantage. DES has also been proved not to be a group, or more precisely, the set ${\displaystyle \{E_K\}}$ (for all possible keys ${\displaystyle K}$) under functional composition is not a group, nor "close" to being a group (Campbell and Wiener, 1992). This was an open question for some time, and if it had been the case, it would have been possible to break DES, and multiple encryption modes such as Triple DES would not increase the security. It is known that the maximum cryptographic security of DES is limited to about 64 bits, even when independently choosing all round subkeys instead of deriving them from a key, which would otherwise permit a security of 768 bits. See also[ ] Notes[ ] References[ ] • Template:Cite journal (preprint) • Biham, Eli and Shamir, Adi, Differential Cryptanalysis of the Data Encryption Standard, Springer Verlag, 1993. ISBN 0-387-97930-1, ISBN 3-540-97930-1. • Biham, Eli and Alex Biryukov: An Improvement of Davies' Attack on DES. J. Cryptology 10(3): 195–206 (1997) • Biham, Eli, Orr Dunkelman, Nathan Keller: Enhancing Differential-Linear Cryptanalysis. ASIACRYPT 2002: pp254–266 • Biham, Eli: A Fast New DES Implementation in Software • Cracking DES: Secrets of Encryption Research, Wiretap Politics, and Chip Design, Electronic Frontier Foundation • Template:Cite journal (preprint). • Campbell, Keith W., Michael J. Wiener: DES is not a Group. CRYPTO 1992: pp512–520 • Coppersmith, Don. (1994). The data encryption standard (DES) and its strength against attacks. IBM Journal of Research and Development, 38(3), 243–250. • Diffie, Whitfield and Martin Hellman, "Exhaustive Cryptanalysis of the NBS Data Encryption Standard" IEEE Computer 10(6), June 1977, pp74–84 • Ehrsam et al., Product Block Cipher System for Data Security, Template:US patent, Filed February 24, 1975 • Gilmore, John, "Cracking DES: Secrets of Encryption Research, Wiretap Politics and Chip Design", 1998, O'Reilly, ISBN 1-56592-520-3. • Junod, Pascal. "On the Complexity of Matsui's Attack." Selected Areas in Cryptography, 2001, pp199–211. • Kaliski, Burton S., Matt Robshaw: Linear Cryptanalysis Using Multiple Approximations. CRYPTO 1994: pp26–39 • Knudsen, Lars, John Erik Mathiassen: A Chosen-Plaintext Linear Attack on DES. Fast Software Encryption - FSE 2000: pp262–272 • Langford, Susan K., Martin E. Hellman: Differential-Linear Cryptanalysis. CRYPTO 1994: 17–25 • Levy, Steven, Crypto: How the Code Rebels Beat the Government—Saving Privacy in the Digital Age, 2001, ISBN 0-14-024432-8. • Template:Cite journal (preprint) • Template:Cite journal • National Bureau of Standards, Data Encryption Standard, FIPS-Pub.46. National Bureau of Standards, U.S. Department of Commerce, Washington D.C., January 1977. External links[ ] Template:Link GA Template:Link FA Template:Link FA be-x-old:DES bg:DES ca:DES cs:Data Encryption Standard da:Data Encryption Standard de:Data Encryption Standard el:Data Encryption Standard es:Data Encryption Standard eo:DES eu:DES fa:رمزنگاری دیایاس (DES) fr:Data Encryption Standard ko:DES (암호) hr:Data Encryption Standard id:Data Encryption Standard it:Data Encryption Standard he:Data Encryption Standard ka:DES ალგორითმი lv:Data Encryption Standard lt:DES nl:Data Encryption Standard ja:DES (暗号) no:DES nn:DES pl:Data Encryption Standard pt:Data Encryption Standard ro:Data Encryption Standard ru:DES sk:Data Encryption Standard sl:DES sr:DES fi:DES sv:Data Encryption Standard tg:DES tr:DES uk:Data Encryption Standard vi:DES (mã hóa) zh:DES
{"url":"https://cryptography.fandom.com/wiki/Data_Encryption_Standard","timestamp":"2024-11-13T10:02:15Z","content_type":"text/html","content_length":"251888","record_id":"<urn:uuid:92817898-5d87-42d1-905c-67aa1d9daf2f>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00608.warc.gz"}
The figure shows p-v graph of an ideal one mole gas undergone t-Turito Are you sure you want to logout? The figure shows P-V graph of an ideal one mole gas undergone to cyclic process ABCA, then the process A. Isobaric B. Adiabatic C. Isochoric D. Isothermal The correct answer is: Isothermal Get an Expert Advice From Turito.
{"url":"https://www.turito.com/ask-a-doubt/physics-the-figure-shows-p-v-graph-of-an-ideal-one-mole-gas-undergone-to-cyclic-process-abca-then-the-process-b-rar-q2be826","timestamp":"2024-11-12T12:17:22Z","content_type":"application/xhtml+xml","content_length":"677368","record_id":"<urn:uuid:946c35b8-f1eb-4279-b4d4-6bea05f414a4>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00836.warc.gz"}
How Do You Calculate Matric Points How Do You Calculate Matric Points How Do You Calculate Matric Points If you’re a first-time student, a recent high school graduate, or getting ready for the next stage of your academic career but don’t know what to study or what possibilities are available to you, your APS can help. By checking your APS, you can determine whether you are eligible to enroll in the postsecondary institution of your choice. Students’ eligibility for admission to universities and colleges is determined by their Admission Point Score (APS), which is determined by their Matric grades. The marks are separated into groups and each % category has a fixed number of points. Once you know how many points you have in total, you may estimate your APS by accumulating the points you have for every subject. What Should My Admission Point Score (APS) Be? Your preferences for subjects and study locations will determine which APS you should receive. For each of the four passing levels, the following are the minimal APS requirements: • Bachelor’s Degree pass – minimum APS 23 • Diploma pass – minimum APS 19 • Higher Certificate pass – minimum APS 15 • NSC pass – minimum APS 14 How The APS Is Calculated? Here is what you need to know about APS calculations: • The actual marks in each subject that you get range from 0 to 100 % • Each mark is given a point score that ranges from 1 to 7, with 7 being the highest and 1 being the lowest. For example, if your mark in Mathematics is 45% the point score allocated to that mark is 3 • To get your total point score, you add the APS of six or five of your best subjects The table below is the APS calculator: Matric Subject Symbol / Mark Obtained in Matric exam APS (Admission Point Score) A (80 – 100%) 7 B (70 – 79%) 6 C (60 – 69%) 5 D (50 – 59%) 4 E (40 – 49%) 3 F (30 – 39%) 2 G (0 – 29%) 1 READ ALSO; What Is An APS Score Which Programs Count For APS? The subjects that count towards your APS are as follows: Accounting Agricultural Management Practices Agricultural Sciences Agricultural Technology Business Studies Civil Technology Computer Applications Technology Consumer Studies Dance Studies Dramatic Arts Economics Electrical Technology Engineering Graphics and Design Geography History Hospitality Studies Information Technology Life Sciences Mathematics Mathematical Literacy Mechanical Technology Music Physical Sciences Religion studies Tourism Visual Arts Understanding how to calculate your matric points is essential for planning your educational journey. By following these simple steps and being aware of the grading scale, you can accurately determine your eligibility for various academic programs. Remember, your matric points open doors to a world of opportunities, shaping your future and paving the way for a successful career. CLICK HERE for more information.
{"url":"https://tvets.co.za/how-do-you-calculate-matric-points/","timestamp":"2024-11-04T08:18:30Z","content_type":"text/html","content_length":"48550","record_id":"<urn:uuid:4b052121-d83c-41b9-82eb-bdd465831e9b>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00271.warc.gz"}
Symbolic transformation is used to simplify expressions with rational coefficients. Coefficients are important, and if they are fractions, denominators must be made the same. Adding fractions requires finding a common multiple. PEMDAS and parentheses help clarify expressions. Percentages are another way to express rational numbers. Symbolic transformation with rational coefficients We use symbolic transformation as the method by which we simplify a given expression. Coefficients of the algebraic terms are very important in symbolic transformations. When the coefficients are integers, the transformation process was very simple. Consider an expression; 6𝑥 - 4𝑥 Using distributive law, it was equivalent to = 𝑥(6-4) = 2𝑥 If the coefficients are rational, then the process is similar to the one above. If the rational coefficients are given in terms of fractions then we have to make the denominators of the fraction the same by multiplying. Based on the distributive law, we perform operations like addition and subtraction and in this case its not possible because in one of the parts 𝑥 is divided by 1/2 and in the other part it is divided by 1/3. For example, we have the expression, 𝑥/2 + 𝑥/3 We can interpret 𝑥/2 as 1/2 of 𝑥 which is the half part of 𝑥 Similarly, We can interpret 𝑥/3 as 1/3 of 𝑥 which is one- third part of 𝑥 Taking the two parts we get, How can we add them? It may seem straightforward but we do not know how much 𝑥 is to be certain thus we have no idea of the actual value of the two blocks. It would be much more helpful if the blocks were of the same size. For example, if it was 𝑥/2 + 𝑥/2 That is half of 𝑥 plus half of 𝑥 means two halves make one full. = 𝑥/2 + 𝑥/2 = 2(𝑥/2) = 𝑥 Now let’s get back to 𝑥/3 + 𝑥/2 We have to divide 𝑥 into equal parts to add them. What is the common multiple of 2 and 3 which we can get? There are many answers such as 6, 12, 18, etc. But the lowest one is 6. (Remember lowest common multiple?) So we have to divide 𝑥 into 6 parts. For doing that, we have to divide each one-third into half because to make 1/3 into 1/6 we have to divide by 2. Similarly, we have to divide each half into three parts because to transform 1/2 into 1/6 we have to divide by 3. What we see from the figure is that we have to add two 𝑥/6’s and three 𝑥/6’s. The answer to that is 2 x 𝑥/6 + 3 x 𝑥/6 We show that whole process numerically by (𝑥 x 3)/(2 x 3) + (𝑥 x 2)/(3 x 2) =3𝑥/6 + 2𝑥/6 It can also be said that when 𝑥/2 and 𝑥/3 are added it becomes the same as 5/6 times 𝑥. When one of the operating coefficients is in the form of a fraction and the other is an integer, we divide that interger coefficients into number of parts equal to the denominator of the fractional For example 𝑥 + 𝑥/6 means we need to add a full part of 𝑥 and one-sixth of 𝑥 To add the two we need to divide 𝑥 term into 6 parts. We see that we have six 𝑥/6’s and one 𝑥/6’s. The answer to that is seven 𝑥/6’s or 7𝑥/6 Often, the convention about the order of operations like PEMDAS helps to clarify the expression. Using parentheses to make your expression clear for your teacher and your friends is always a good idea. Parenthesis can also be used to change the order of operations. For example; 3 x 2 + 4 is 10. However, if you wanted to do the sum first, you could write 3 x (2+4). Percentage in expressions Percentages as we learned in previous classes are one way of expressing quantities that are not integers but rational numbers. writing 𝑥/4 in fraction means we divide 𝑥 into 4 parts and take one of them. In the case of percentage, we divide 𝑥 into 100 parts and take 25 of them. thus writing 25% of 𝑥. 25% of 𝑥 is same as 𝑥/4. For those of you who are wondering where did 25 come from, one-fourth of 100 or 100/4 is 25. We can convert any fraction into a percentage by simply multiplying it by 100. We did that in arithmetic and we do the same here even though there are variables involved. What is 20 percent of 𝑥/6? Here 20 percent is given which means 20 out of 100 or 1 out of 5 parts. That means if we have to find 20 percent of 𝑥/6, then we have to divide each 𝑥/6 into 5 parts. Each part is worth 𝑥/30. 𝑥/30 + 𝑥/30 + 𝑥/30 + 𝑥/30 + 𝑥/30 = 𝑥/6 That same quantity can be expressed in terms of decimals. It would be 20 percent of 0.17𝑥. That means one-fifth of 0.17𝑥. You divide 0.17𝑥 into 5 parts and each part is worth 0.0333𝑥. We have expressions that have operators along with percentages in some algebraic expressions. Example: Let’s say there is an 8 percent discount on the price of a haircut that is 40 dollars normally, then 40 has to be decreased by 8 percent, 8 percent of 40 is calculated at first and then the quantity is deducted from 40. That is 40- 8% of 40 = 40- (8/100) x 40 = 40- 0.08 x 40 = 40 x (1-0.08) = 40 x 0.92 This shows that when a number is decreased or increased by a certain percentage, it can be found by multiplying the given number with a particular factor. If a number has to increase by 5 percent, we multiply it by 1+0.05 or 1.05. On the other hand, if it has to be decreased by 5 percent, then we multiply it by by 1-0.05 or 0.95. Here the number 0.05 is found by dividing the given percentage 5 by 100. (5/100 = 0.05) What happens when 70𝑥 is increased by 30%? = 70𝑥 + 30% of 70𝑥 = 70𝑥 + (30/100) x 70𝑥 = 70𝑥 + 0.3 x 70𝑥 = 70𝑥 x (1 + 0.3) = 70𝑥 x 1.3 = 91𝑥 This calculation could also have been carried out as follows: = 70𝑥 + 30% of 70𝑥 = 70𝑥 + (30/100) x 70𝑥 = 70𝑥 + 0.3 x 70𝑥 = 70𝑥 + 21𝑥 = 91𝑥 There is no hard and fast rule. We can use the strategies as per our convenience.
{"url":"https://edukimath.com/grade-7/algebra-expressions-and-equations/expressions/","timestamp":"2024-11-03T06:57:25Z","content_type":"text/html","content_length":"38851","record_id":"<urn:uuid:b161ae7c-f256-4aa0-a1fb-285006e1f07f>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00411.warc.gz"}
what kind of force field surrounds a stationary electric charge? What what kind of force field surrounds a stationary electric charge? What additional field surrounds it when it moves? "what kind of force field surrounds a stationary electric charge?" Electric field. "What additional field surrounds it when it moves?" Magnetic field. here's the problem... Jane, looking for Tarzan, is running at top speed (5.6m/s)and grabs a vine hanging vertically from a tall tree in the jungle. How high can she swing upward? Jane's initial velocity is 5.6 m/s. The equation for the height of a swing is: h = (v^2/2g)sin2θ where v is the initial velocity, g is the acceleration due to gravity (9.8 m/s^2), and θ is the angle of the swing. Assuming θ is 90 degrees, the equation becomes: h = (5.6^2/2*9.8)sin2(90) h = 28.8 m To determine how high Jane can swing upward, we need to consider the conservation of energy. 1. Firstly, we need to identify the initial and final positions of Jane. The initial position is when she grabs the vine, and the final position is the highest point she reaches while swinging 2. At the initial position, Jane has both kinetic energy (due to her running speed) and gravitational potential energy (due to her height above the ground). At the final position, Jane's kinetic energy is zero, and all her energy is converted into gravitational potential energy. 3. The principle of conservation of energy states that the total energy of a system remains constant. Therefore, the initial energy equals the final energy. 4. The initial energy can be calculated using the kinetic energy formula: Kinetic Energy = (1/2) * mass * velocity^2. As we don't have information about Jane's mass, we can assume a value of 60 kg, which is an average mass for an adult. Initial Kinetic Energy = (1/2) * mass * velocity^2 = (1/2) * 60 kg * (5.6 m/s)^2 = 470.4 Joules 5. At the final position, Jane's energy is completely converted into gravitational potential energy, which is given by the formula: Gravitational Potential Energy = mass * gravitational acceleration * height. Final Potential Energy = mass * gravitational acceleration * height = 60 kg * 9.8 m/s^2 * height 6. Equating the initial and final energies, we have: Initial Kinetic Energy = Final Potential Energy 470.4 Joules = 60 kg * 9.8 m/s^2 * height 7. Solving for height, we find: height = (470.4 Joules) / (60 kg * 9.8 m/s^2) = 0.794 meters Therefore, Jane can swing upward to a height of approximately 0.794 meters. To solve this problem, we can use the principle of conservation of mechanical energy. As Jane swings upward, her potential energy will increase, while her kinetic energy will decrease. We can begin by determining Jane's initial kinetic energy. Since she is running at a speed of 5.6 m/s, her kinetic energy can be calculated using the formula: Kinetic energy = (1/2) * mass * velocity^2 However, the problem does not provide information about Jane's mass. But we can assume her mass cancels out while comparing the initial and final heights because Jane's mass does not affect her potential energy. Now, let's consider Jane's potential energy at the highest point of her swing. At the highest point, all of her initial kinetic energy will be converted into potential energy. The formula for potential energy is: Potential energy = mass * gravity * height Since we are comparing the initial and final heights, mass and gravity can be canceled out as well. Set the initial kinetic energy equal to the potential energy at the highest point: (1/2) * mass * velocity^2 = mass * gravity * height Now, rearrange the equation to solve for the height (h): height = (1/2) * velocity^2 / gravity Plug in the values: height = (1/2) * (5.6 m/s)^2 / (9.8 m/s^2) Simplifying the equation: height = (1/2) * 31.36 m^2/s^2 / 9.8 m/s^2 height ≈ 1.6 m Therefore, Jane can swing upward to a height of approximately 1.6 meters.
{"url":"https://askanewquestion.com/questions/7605","timestamp":"2024-11-08T19:01:09Z","content_type":"text/html","content_length":"24675","record_id":"<urn:uuid:b8d9862a-8546-49eb-bbfd-a794be92025c>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00784.warc.gz"}
Welcome to the Overleaf learn wiki—from here you can access a wide range of help and information on Overleaf and LaTeX. Video series: Introducing Overleaf and LaTeX The Overleaf team are currently preparing new tutorial content but, in the meantime, Dr Vincent Knight, Senior Lecturer in the School of Mathematics, Cardiff University, has prepared a series of short videos which introduce Overleaf and help you get started with producing your first LaTeX document. We have embedded the first video in that series but do please visit Vincent's YouTube Channel to view the full video playlist. How do I get started? A good place to start is opening and exploring one of Overleaf's pre-loaded templates and examples—ideal for helping you create your first project. Choose from one of the following suggestions: • "I'm writing a project report/homework assignment" Then we recommend taking a look at our Project Report templates. • "I've heard LaTeX can produce great presentations - I want to give that a try" Take a look at our Presentation templates. • "I've used LaTeX before but can't remember the commands" We've preloaded a Quick Guide to LaTeX which contains lots of commands to get you going! Other tutorials you may like If you're still stuck, you can browse a list of articles on this wiki or check out these tutorials: Overleaf guides LaTeX Basics Figures and tables References and Citations Document structure Field specific Class files Advanced TeX/LaTeX
{"url":"https://sv.stag-overleaf.com/learn/latex/Tutorials","timestamp":"2024-11-11T23:51:20Z","content_type":"text/html","content_length":"48558","record_id":"<urn:uuid:9752fba7-5423-41c4-b102-40dd5891f400>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00390.warc.gz"}
The Cauchy Integral Formula, $\bar\partial$-equation, and Hartogs Phenomenon. Printable PDF Department of Mathematics, University of California San Diego Math 296 - Graduate Student Colloquium Peter Ebenfelt The Cauchy Integral Formula, $\bar\partial$-equation, and Hartogs Phenomenon. There are many important and striking differences between classical complex analysis in one variable and complex analysis in several variables. In this talk, we will illustrate this by discussing just one such difference, the Hartogs extension phenomenon. For example, if $D$ denotes the annular domain in $\mathbb C^n$ consisting of the unit ball $B$ minus the closed ball of radius ½, then any holomorphic function in $D$ extends holomorphically to the whole unit ball $B$ ... provided $n\geq 2$; it is clearly not true when $n=1$. This particular result can be proved by using the Cauchy integral formula, but a proof that works in more general situations leads to a study of the $\bar \partial$ equation. Organizer: Ioan Bejenaru February 11, 2016 11:00 AM AP&M 6402
{"url":"https://math.ucsd.edu/seminar/cauchy-integral-formula-barpartial-equation-and-hartogs-phenomenon","timestamp":"2024-11-03T01:34:46Z","content_type":"text/html","content_length":"33590","record_id":"<urn:uuid:ae5d741d-e748-460e-8d0a-0f36c19e1bad>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00244.warc.gz"}
Math Games • Geometry: Analyzing situations involving measures □ 8.GAM.1.1.1 Chooses the appropriate unit of mass for the context □ 8.GAM.1.1.2 Estimates and measures mass using unconventional units: grams, kilograms □ 8.GAM.1.1.3 Establishes relationships between units of mass □ 8.GAM.1.2.1 Chooses the appropriate unit of time for the context □ 8.GAM.1.2.2 Estimates and measures time using conventional units □ 8.GAM.1.2.3 Establishes relationships between units of time: second, minute, hour, day, daily cycle, weekly cycle, yearly cycle □ 8.GAM.1.2.4 Distinguishes between duration and position in time □ 8.GAM.1.3.1 Compares angles: acute angle, right angle, obtuse angle □ 8.GAM.1.3.2 Estimates and determines the degree measure of angles □ 8.GAM.1.3.3 Describes the characteristics of different types of angles: complementary, supplementary, adjacent, vertically opposite, alternate interior, alternate exterior and corresponding □ 8.GAM.1.3.4 Determines measures of angles using the properties of the following angles: complementary, supplementary, vertically opposite, alternate interior, alternate exterior and corresponding □ 8.GAM.1.3.5a Finds unknown measurements using the properties of figures and relations: measures of angles in a triangle □ 8.GAM.1.3.5b Finds unknown measurements using the properties of figures and relations: degree measures of central angles and arcs □ 8.GAM.1.3.6 Justifies statements using definitions or properties associated with angles and their measures □ 8.GAM.1.4.1 Chooses the appropriate unit of length for the context □ 8.GAM.1.4.2 Estimates and measures the dimensions of an object using conventional units: millimetre, centimetre, decimetre, metre and kilometre □ 8.GAM.1.4.3a Establishes relationships between units of length: millimetre, centimetre, decimetre, metre and kilometre □ 8.GAM.1.4.3b Establishes relationships between measures of length of the international system (SI) □ 8.GAM.1.4.4 Constructs relations that can be used to calculate the perimeter or circumference of figures □ 8.GAM.1.4.5a Finds the following unknown measurements, using properties of figures and relations: perimeter of plane figures □ 8.GAM.1.4.5b Finds the following unknown measurements, using properties of figures and relations: a segment in a plane figure, circumference, radius, diameter, length of an arc, a segment resulting from an isometry or a similarity transformation □ 8.GAM.1.4.6 Justifies statements concerning measures of length □ 8.GAM.1.5.1 Chooses the appropriate unit of area for the context □ 8.GAM.1.5.2 Estimates and measures surface areas using conventional units: square centimetre, square decimetre, square metre □ 8.GAM.1.5.3 Establishes relationships between SI units of area □ 8.GAM.1.5.4 Constructs relations that can be used to calculate the area of plane figures: quadrilateral, triangle, circle (sectors) □ 8.GAM.1.5.5a Finds unknown measurements, using properties of figures and relations: area of circles and sectors □ 8.GAM.1.5.5b Finds unknown measurements, using properties of figures and relations: area of figures that can be split into circles (sectors), triangles or quadrilaterals □ 8.GAM.1.5.5c Finds unknown measurements, using properties of figures and relations: lateral or total area of right prisms, right cylinders and right pyramids □ 8.GAM.1.5.5d Finds unknown measurements, using properties of figures and relations: lateral or total area of solids that can be split into right prisms, right cylinders or right pyramids □ 8.GAM.1.5.5e Finds unknown measurements, using properties of figures and relations: area of figures resulting from an isometry □ 8.GAM.1.5.5f Finds unknown measurements, using properties of figures and relations: area of figures resulting from a similarity transformation □ 8.GAM.1.5.6 Justifies statements concerning measures of area □ 8.GAM.1.6.1 Chooses the appropriate unit of volume for the context □ 8.GAM.1.6.2 Estimates and measures volume or capacity using conventional units: cubic centimetre, cubic decimetre, cubic metre, millilitre, litre □ 8.GAM.1.6.3 Establishes relationships between capacity units: millilitre, litre □ 8.GAM.1.7.1 Determines, through exploration or deduction, different metric relations associated with plane figures □ 8.AG.1.1.1 Locates objects/numbers on an axis, based on the types of numbers studied □ 8.AG.1.1.2 Locates points in a Cartesian plane, based on the types of numbers studied (x and y-coordinates of a point) • Arithmetic: Understanding Operations involving real numbers □ 8.AUOR.1.1.1 Chooses an appropriate way of writing numbers for a given context □ 8.AUOR.1.1.2 Looks for equivalent expressions: decomposing (additive, multiplicative, etc.), equivalent fractions, simplifying and reducing, factoring, etc. □ 8.AUOR.1.1.3 Translates (mathematizes) a situation using a sequence of operations (no more than two levels of parentheses) □ 8.AUOR.1.1.4 Anticipates the results of operations □ 8.AUOR.1.1.5 Interprets the results of operations in light of the context • Artithmetic: Operations involving real numbers □ 8.AOR.1.1.1 Uses, in different contexts, the properties of divisibility: 2, 3, 4, 5 and 10 □ 8.AOR.1.2.1 Approximates the result of an operation or sequence of operations □ 8.AOR.1.3.1 Mentally computes the four operations, especially with numbers written in decimal notation, using equivalent ways of writing numbers and the properties of operations □ 8.AOR.1.4.1a Computes, in writing, the four operations with numbers that are easy to work with (including large numbers), using equivalent ways of writing numbers and the properties of operations: numbers written in decimal notation, using rules of signs □ 8.AOR.1.4.1b Computes, in writing, the four operations with numbers that are easy to work with (including large numbers), using equivalent ways of writing numbers and the properties of operations: positive numbers written in fractional notation, with or without the use of objects or diagrams □ 8.AOR.1.5.1 Computes, in writing, sequences of operations (numbers written in decimal notation) in accordance with the order of operations, using equivalent ways of writing numbers and the properties of operations (with no more than two levels of parentheses) □ 8.AOR.1.6.1 Computes, using a calculator, operations and sequences of operations in accordance with the order of operations □ 8.AOR.1.7.1 Switches, as needed, from one way of writing numbers to another □ 8.AOR.1.8.1 Calculates the power of a natural number □ 8.AOR.1.9.1 Decomposes a natural number into prime factors □ 8.AR.1.1.1 Identifies patterns in various situations and in various forms □ 8.AR.1.1.2 Analyzes situations using different registers (types) of representation □ 8.AR.1.1.3 Represents a situation generally using a graph • Geometry: Spacial sense and analyzing situations involving geometric figures □ 8.GSS.1.1.1 Describes convex and nonconvex polygons □ 8.GSS.1.1.10 Justifies statements using definitions or properties of plane figures □ 8.GSS.1.1.2 Describes and classifies quadrilaterals □ 8.GSS.1.1.3 Describes and classifies triangles □ 8.GSS.1.1.4 Describes circles: radius, diameter, circumference, central angle □ 8.GSS.1.1.5 Recognizes and names regular convex polygons □ 8.GSS.1.1.6 Decomposes plane figures into circles (sectors), triangles or quadrilaterals □ 8.GSS.1.1.7 Describes circles and sectors □ 8.GSS.1.1.8 Recognizes and draws main segments and lines: diagonal, altitude, median, perpendicular bisector, bisector, apothem, radius, diameter, chord □ 8.GSS.1.1.9 Identifies the properties of plane figures using geometric transformations and constructions □ 8.GSS.1.2.1 Matches the net of a convex polyhedron to the corresponding convex polyhedron □ 8.GSS.1.2.2 Determines the possible nets of a solid □ 8.GSS.1.2.3 Names the solid corresponding to a net □ 8.GSS.1.2.4a Describes solids: vertex, edge, base, face □ 8.GSS.1.2.4b Describes solids: altitude, apothem, lateral face □ 8.GSS.1.2.5 Tests Euler's relation on convex polyhedrons □ 8.GSS.1.2.6 Recognizes solids that can be split into right prisms, right cylinders, right pyramids □ 8.GSS.1.3.1 Identifies properties and invariants resulting from geometric constructions and transformations □ 8.GSS.1.3.2 Identifies congruence (translation, rotation and reflection) between two figures □ 8.GSS.1.3.3 Constructs the image of a figure under a translation, rotation and reflection □ 8.GSS.1.3.4 Recognizes dilatation with a positive scale factor □ 8.GSS.1.3.5 Constructs the image of a figure under a dilatation with a positive scale factor □ 8.GSS.1.4.1 Identifies congruent figures in frieze patterns and tessellations □ 8.GSS.1.4.2 Recognizes congruent or similar figures □ 8.GSS.1.4.3 Recognizes the geometric transformation(s) linking a figure and its image □ 8.GSS.1.4.4 Determines the properties and invariants of congruent or similar figures □ 8.GSS.1.4.5 Justifies statements using definitions or properties of congruent, similar or equivalent figures, depending on the cycle and year □ 8.AE.1.1.1 Describes, using his/her own words and mathematical language, numerical patterns □ 8.AE.1.1.2 Describes, using his/her own words and mathematical language, series of numbers and family of operations □ 8.AE.1.1.3 Adds new terms to a series when the first three terms or more are given □ 8.AE.1.1.4a Describes the role of components of algebraic expressions:unknown □ 8.AE.1.1.4b Describes the role of components of algebraic expressions:variable, constant □ 8.AE.1.1.4c Describes the role of components of algebraic expressions: parameter □ 8.AE.1.1.4d Describes the role of components of algebraic expressions: coefficient, degree, term, constant term, like terms □ 8.AE.1.1.5 Constructs an algebraic expression using a register (type) of representation □ 8.AE.1.1.6 Interprets an algebraic expression in light of the context □ 8.AE.1.1.7 Recognizes or constructs equivalent algebraic expressions □ 8.AE.1.1.8 Recognizes or constructs equalities and equations □ 8.AE.1.2.1 Calculates the numeric value of an algebraic expression □ 8.AE.1.2.2 Performs the following operations on algebraic expressions, with or without objects or diagrams: addition and subtraction, multiplication and division by a constant, multiplication of first-degree monomials □ 8.AE.1.2.3 Factors out the common factor in numerical expressions (distributive property of multiplication over addition or subtraction) □ 8.AE.1.3.1 Recognizes whether a situation can be translated by an equation □ 8.AE.1.3.10 Interprets solutions or makes decisions, if necessary, depending on the context □ 8.AE.1.3.2 Recognizes or constructs relations or formulas □ 8.AE.1.3.3 Manipulates relations or formulas (e.g. isolates an element) □ 8.AE.1.3.4 Represents a situation using a first-degree equation with one unknown □ 8.AE.1.3.5 Represents an equation using another register (type) of representation, if necessary □ 8.AE.1.3.6 Determines the missing term in an equation (relations between operations): a + b = __, a + __ = c, __ + b = c, a - b = __, a - __ = c, __ - b = c □ 8.AE.1.3.7 Transforms arithmetic equalities and equations to maintain equivalence (properties and rules for transforming equalities) and justifies the steps followed, if necessary □ 8.AE.1.3.8 Uses different methods to solve first-degree equations with one unknown of the form ax + b = cx + d: trial and error, drawings, arithmetic methods (inverse or equivalent operations), algebraic methods (balancing equations or hidden terms) □ 8.AE.1.3.9 Validates a solution, with or without technological tools, by substitution □ 8.PR.1.1.1 Simulates random experiments with or without the use of technological tools □ 8.PR.1.1.10 Recognizes certain, probable, impossible, simple, complementary, compatible, incompatible, dependent, independents events □ 8.PR.1.1.11 Uses fractions, decimals or percentages to quantify a probability □ 8.PR.1.1.12 Recognizes that a probability is always between 0 and 1 □ 8.PR.1.1.13a Predicts qualitatively an outcome or several events using a probability line, among other things: certain, possible or impossible outcome □ 8.PR.1.1.13b Predicts qualitatively an outcome or several events using a probability line, among other things: more likely, just as likely, less likely event □ 8.PR.1.1.2 Experiments with activities involving chance, using various objects (e.g. spinners, rectangular prisms, glasses, marbles, thumb tacks, 6-, 8- or 12-sided dice) □ 8.PR.1.1.3a In activities involving chance, recognizes variability in possible outcomes (uncertainty) □ 8.PR.1.1.3b In activities involving chance, recognizes equiprobability (e.g. quantity of objects, symmetry of an object such as a cube) □ 8.PR.1.1.3c In activities involving chance, becomes aware of the independence of events (e.g. rolling dice, tossing a coin, drawing lots) □ 8.PR.1.1.4 Uses tables or diagrams to collect and display the outcomes of an experiment □ 8.PR.1.1.5 Compares the outcomes of a random experiment with known theoretical probabilities □ 8.PR.1.1.6 Distinguishes between prediction and outcome □ 8.PR.1.1.7 Conducts or simulates random experiments involving one or more steps (with or without replacement, with or without order) □ 8.PR.1.1.8a Enumerates the possible outcomes of a random experiment using tables, tree diagram □ 8.PR.1.1.8b Enumerates the possible outcomes of a random experiment using networks, tables, diagrams, Venn diagrams □ 8.PR.1.1.9 Defines the sample space of a random experiment □ 8.PR.1.2.1 Represents an event using different registers (types of representation) □ 8.PR.1.2.2 Compares qualitatively the theoretical or experimental probability of an event occurring □ 8.PR.1.2.3 Distinguishes between theoretical and experimental probability □ 8.PR.1.2.4 Calculates the probability of an event □ 8.PR.1.2.5 Interprets probabilities and makes appropriate decisions □ 8.ST.1.1.10 Calculates and interprets an arithmetic mean □ 8.ST.1.1.11a Determines and interprets measures of dispersion: range □ 8.ST.1.1.11b Determines and interprets measures of position: maximum, minimum □ 8.ST.1.1.12 Chooses the appropriate statistical measures for a given situation □ 8.ST.1.1.1a Conducts a survey or a census: Formulates questions for a survey □ 8.ST.1.1.1b Conducts a survey or a census: Chooses a sampling method: simple random, systematic □ 8.ST.1.1.1c Conducts a survey or a census: Chooses a representative sample □ 8.ST.1.1.1d Conducts a survey or a census: Collects, describes and organizes data (classifies or categorizes) using tables □ 8.ST.1.1.2 Recognizes possible sources of bias □ 8.ST.1.1.3 Interprets data presented in a table or a bar graph, a pictograph, a broken-line graph or a circle graph □ 8.ST.1.1.4 Distinguishes different types of statistical variables: qualitative, discrete or continuous quantitative □ 8.ST.1.1.5 Chooses appropriate register(s) (types) of representation to organize, interpret and present data □ 8.ST.1.1.6a Organizes and presents data using a table, a bar graph, a pictograph and a broken-line graph □ 8.ST.1.1.6b Organizes and presents data using a table presenting variables or frequencies, or using a circular graph □ 8.ST.1.1.7 Compares one-variable distributions □ 8.ST.1.1.8 Understands and calculates the arithmetic mean □ 8.ST.1.1.9 Describes the concept of arithmetic mean (leveling or balance point) • Arithmetic: Understanding Real Numbers □ 8.AUN.1.1.1a Identifies the different meanings of fractions: part of a whole, division, ratio, operator, measurement □ 8.AUN.1.1.1b Verifies whether two fractions are equivalent □ 8.AUN.1.1.1c Compares a fraction to 0, 1/2 or 1 □ 8.AUN.1.1.1d Orders fractions with the same denominator or where one denominator is a multiple of the other or with the same numerator □ 8.AUN.1.10.1.b Compares and arranges in order numbers expressed in different ways (fractional, decimal, exponential [integral exponent], percentage, square root, scientific notation) □ 8.AUN.1.10.1a Compares and arranges in order numbers written in fractional or decimal notation □ 8.AUN.1.2.1a Represents decimals up to thousandths in a variety of ways (using objects or drawings) and identifies equivalent representations □ 8.AUN.1.2.1b Reads and writes numbers up to thousandths written in decimal notation □ 8.AUN.1.2.1c Composes and decomposes a number written in decimal notation up to thousandths and recognizes equivalent expressions □ 8.AUN.1.2.1d Compares numbers written in decimal notation up to thousandths or arranges them in increasing or decreasing order □ 8.AUN.1.3.1a Represents integers in a variety of ways (using objects or drawings) □ 8.AUN.1.3.1b Reads and writes integers □ 8.AUN.1.3.1c Compares integers or arranges integers in increasing or decreasing order □ 8.AUN.1.4.1 Expresses numbers in a variety of ways (fractional, decimal percentage notation) □ 8.AUN.1.5.1 Represents, reads and writes numbers written in fractional or decimal notation □ 8.AUN.1.6.1 Approximates, in various contexts, the numbers under study (e.g. estimates, rounds off, truncates) □ 8.AUN.1.7.1 Defines the concept absolute value in context (e.g. difference between two numbers, distance between two points) □ 8.AUN.1.8.1a Represents and writes squares and square roots □ 8.AUN.1.8.1b Represents and writes numbers in exponential notation (integral exponent) □ 8.AUN.1.9.1 Estimates the order of magnitude of a real number in • Arithmetic: Understanding and analyzing proportional situations □ 8.AUP.1.1.1a Calculates a certain percentage of a number □ 8.AUP.1.1.1b Calculates the value corresponding to 100 per cent □ 8.AUP.1.2.1 Recognizes ratios and rates □ 8.AUP.1.3.1 Interprets ratios and rates □ 8.AUP.1.4.1 Describes the effect of changing a term in a ratio or rate □ 8.AUP.1.5.1a Compares ratios and rates qualitatively (equivalent rates and ratios, unit rate) □ 8.AUP.1.5.1b Compares ratios and rates quantitatively (equivalent rates and ratios, unit rate) □ 8.AUP.1.6.1 Translates a situation using a ratio or rate □ 8.AUP.1.7.1 Recognizes a proportional situation using the context, a table of values or a graph □ 8.AUP.1.8.1 Represents or interprets a proportional situation using a graph, a table of values or a proportion □ 8.AUP.1.9.1 Solves proportional situations (direct or inverse variation) by using different strategies (e.g. unit-rate method, factor of change, proportionality ratio, additive procedure, constant product [inverse variation])
{"url":"https://qc.mathgames.com/standards/grade8","timestamp":"2024-11-14T10:53:08Z","content_type":"text/html","content_length":"708781","record_id":"<urn:uuid:2e8840e8-8dad-4e67-b606-2401938433dc>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00241.warc.gz"}
The theory of ideas and Plato’s philosophy of mathematics The theory of ideas and Plato’s philosophy of mathematics Main Article Content In this article I analyze the issue of many levels of reality that are studied by natural sciences. Particularly interesting is the level of mathematics and the question of the relationship between mathematics and the structure of the real world. The mathematical nature of the world has been considered since ancient times and is the subject of ongoing research for philosophers of science to this day. One of the viewpoints in this field is mathematical Platonism. In contemporary philosophy it is widely accepted that according to Plato mathematics is the domain of ideal beings (ideas) that are eternal and unalterable and exist independently from the subject’s beliefs and decisions. Two issues seem to be important here. The first issue concerns the question: was Plato really a proponent of present-day mathematical Platonism? The second one is of greater importance: how mathematics influences our understanding of the nature of the world on its many ontological levels? In the article I consider three issues: the Platonic theory of “two worlds”, the method of building a mathematical structure, and the ontology of mathematics. Article Details How to Cite Dembiński, B. (2019). The theory of ideas and Plato’s philosophy of mathematics. Philosophical Problems in Science (Zagadnienia Filozoficzne W Nauce), (66), 95–108. Retrieved from https://zfn.edu.pl/ Emergence of the Classical Aristotle, 1924. Aristotle’s Metaphysics: A Revised Text with Introduction and Commentary. Ed. by W.D. Ross. Vol. 2. Oxford: Clarendon Press. Brown, J.R., 2008. Philosophy of Mathematics: A Contemporary Introduction to the World of Proofs and Pictures. 2nd ed, Routledge contemporary introductions to philosophy. New York - London: Dembiński, B., 2003. Późna nauka Platona: związki ontologii i matematyki, Prace Naukowe Uniwersytetu Śląskiego w Katowicach nr 2143. Katowice: Wydaw. Uniwersytetu Śląskiego. Dembiński, B., 2007. Streit um die "Zweiweltentheorie" in der Philosophie von Plato. Proceedings of the Twenty-First World Congress of Philosophy. Ankara: Philosophical Society of Turkey, pp.67–72. Available at: https://doi.org/10.5840/wcp21200710115. Dembiński, B., 2010. Późny Platon i Stara Akademia [Late Plato and the Old Academy], Fundamenta: studia z historii filozofii t. 63. Kęty: Wydawnictwo Marek Derewiecki. Dillon, J.M., 2003. The Heirs of Plato: A Study of the Old Academy (347-274 B.c.) Oxford: Clarendon Press. Heller, M., 2006. Filozofia i wszechświat: wybór pism. Kraków: TAiWPN UNIVERSITAS. Plato, 1955. Platonis Opera. Recognovit Brevique Adnotatione Critica Instruxit Joannes Burnet. Vol 1-5. Ed. by J. Burnet, Scriptorum Classicorum Bibliotheca Oxoniensis. Oxonium: Typographeo
{"url":"https://zfn.edu.pl/index.php/zfn/article/view/468","timestamp":"2024-11-13T12:53:58Z","content_type":"text/html","content_length":"28458","record_id":"<urn:uuid:f56bac69-5066-4f28-857c-03d20817df8a>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00568.warc.gz"}
Purex Power Shot ~OVER~~ Winners announced One of my favorite things about being a blogger is trying new things. The newest review item comes via Purex Insiders! Its the New Purex PowerShot no mess bottle! I found it very easy to use~ Just open the cap, flip the bottle upside down to deliver a dose of the detergent. Flip the bottle back upright and you are done. Just the right amount~ The built in auto-dosing technology in the bottle does the measuring for you, so there is no waste and results in a perfect clean manner every time use 1 flip for a normal regular load use 2 flips for large and extra dirty clothes Purex PowerShot detergent does the measuring for you!! I like that it measures for you!! What a great way to save detergent!! The bottle I received for review is good for 45 loads!! With a big family like mine it should last us about 2 weeks! I tried it with my laundry today and the clothes smell so good! There was no mess to clean up from spillage like other detergents with cup lids. I also like that it made starting the laundry faster, so that I could get on with the other household chores. I give this new Purex PowerShot a 5 out of 5! New Purex® PowerShot detergent simplifies the laundry routine by taking the guesswork of measuring. Purex® PowerShot is the only bottle that automatically dispenses the right amount of super concentrated formula with 50% more stain fighting power in every drop. The new PowerShot is available in the fragrances you love ~~ Mountain Breeze and Natural Elements Linen and Lilies! PowerShot will be available in stores starting Feb 2015. For more info on Purex and their products check out WEBSITE FACEBOOK TWITTER Here is your chance to try the New Purex PowerShot for yourself!! 5 winners will WIN. 1 winner will win a coupon for a FREE bottle of Purex PowerShot and 4 winners will win a coupon for $1.50 off a bottle of Purex PowerShot. Enter below for your chance to win a great coupon. The more entries you have the better your chance to win. There are many daily entries, so BOOKMARK this page and come back daily to enter:) Giveaway ends Feb 3rd @11:59AM CST. Winner entries will be verified before awarding of the coupon. 18+ USA only a Rafflecopter giveaway Note: This post may contain a sponsored/affiliate/referral link. Thank you for supporting this site! The Purex brand provided me with a sample of Purex PowerShot deteregent in exchange for a product review. However, all opinions in this post are my own. 62 comments: Today I have done 0 loads of laundry - but it's all waiting for me tomorrow! I do about 4 loads today I did 2 loads today So far, I've done 2 loads today I have done 2 loads of laundry today so far. I have done zero so far. Only because I was at work. Guess what I am doing this evening though! Probably 2-3 loads. I have done 1 load today. I did laundry yesterday so I did zero loads today! I do 2 loads a day and i did 4 today because of towels and sheets. I am on load 3 and still going! I haven't done any laundry today, but two loads are in my future. I have not done any laundry yet, but later today I'll have about 3 loads to do. I haven't done any laundry today. I have not done any laundry today. I didn't do any today but I did two loads this week which is about normal for me. We've done three loads today! I did one today, just towels I have done 2 loads of laundry today. Zero! I hate laundry. I plan on doing one load of laundry today. Just 1 so far...need do a couple more though. I did 2 loads today! 2 loads I haven't done any laundry today. Thankfully, I have no laundry to do today! Yea! I have not done any yet today! Will probably get a load going after dinner! i do 2-4 loads a day but today ive done only 1 I did 3 loads of laundry today. No laundry today! I'm currently on my third load. i have done 2 load just today Just one load today, so far two so far today. trying to enjoy the quiet of the snow and enjoy the kids sledding, 2 loads today! I have done two...but should have done about 6! I didn't do laundry today. I did 4 loads of laundry today I have already did two loads of laundry today! No laundry today! I haven't done any today as we went out shopping but I only do a couple of loads a week. I have done 2 loads of laundry today! I have not done any yet today, but have two loads for tonight! Today I Did 4 Loads Of Laundry. Entry Name-Heather Hayes Panjon No laundry today - but it's staring to pile up! Ive only done 2 loads today I had a lot going on so got none done today. I did two loads today! I have done 4 loads today! I am doing 3 loads today I did two loads yesterday! I have done 0 Loads of laundry today! I did two today! NONE I SAVE THEM UP I USE A LANDROMAT Today I have 3 loads waiting, In three days, three more loads, two today I have to admit to being lazy so far....none today. Should I have done some? Of course, lol. Today none, but this morning, my sweetie wanted to do a couple of loads. Today I did 3 loads! I did 3 loads of laundry today. :) I did zero today. I usually do 1 load every 2-3 weeks. I did one load today.
{"url":"http://www.dnbustersplace.com/2015/01/giveaway-purex-power-shot-ends-23.html?showComment=1422788117863","timestamp":"2024-11-05T15:22:44Z","content_type":"text/html","content_length":"159046","record_id":"<urn:uuid:ceff4a88-b99b-4c6f-8844-aaa107f27010>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00729.warc.gz"}
Simple algorithm for generating random numbers with bigger smaller probability I'm currently working on a game, with scrollable screen and I need to find a simple algorithm for placing obstacles in the game. I have a gameSpeed, that is increased in time (from 1 to 12, increased by 0.005 each 1/60s) and a range of available positions between 200 and 600 (ints). I'd like to achieve a bigger probability of receiving smaller number when the speed is bigger, but it's my 14th hour straight coding and I cannot come up with anything usable and not overcomplicated. I'd like to minimize Math and random functions so that the rendering loop won't take too long. Any help appreciated ! To move the density in one way, you may square or square-root the random number. Math.random()*Math.random() will produce smaller integers (around 0) with a higher chance than larger ones (near 1). Your formula could be something like this: var position = Math.pow(Math.random(), gameSpeed / 3) * 400 + 200; Make an array with more lower values than higher ones, for example, to generate random integers between [1,5]. (both inclusive). As an example, your array could be [1,1,1,1,1,2,2,2,3,3,3,4,4,5]. And if you pick an element from that array at random, you'll have a better probability of picking a low number than a high one. Ignite Your Future with Machine Learning Training!
{"url":"https://www.edureka.co/community/168579/algorithm-generating-numbers-bigger-smaller-probability","timestamp":"2024-11-02T04:38:18Z","content_type":"text/html","content_length":"170088","record_id":"<urn:uuid:74cbe737-b8ae-463d-9c81-d945ef08da96>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00315.warc.gz"}
new_drag_control | API References | Q-CTRL Open Controls | References | Q-CTRL Documentation qctrlopencontrols.new_drag_control(rabi_rotation, segment_count, duration, width, beta, azimuthal_angle=0.0, name=None) Generates a Gaussian driven control sequence with a first-order DRAG (Derivative Removal by Adiabatic Gate) correction applied. The addition of DRAG further reduces leakage out of the qubit subspace via an additional off-quadrature corrective driving term proportional to the derivative of the Gaussian pulse. • rabi_rotation (float) – Total Rabi rotation $\theta$ to be performed by the driven control. • segment_count (int) – Number of segments in the control sequence. • duration (float) – Total duration $t_g$ of the control sequence. • width (float) – Width (standard deviation) $\sigma$ of the ideal Gaussian pulse. • beta (float) – Amplitude scaling $\beta$ of the Gaussian derivative. • azimuthal_angle (float , optional) – The azimuthal angle $\phi$ for the rotation. Defaults to 0. • name (str , optional) – An optional string to name the control. Defaults to None. A control sequence as an instance of DrivenControl. A DRAG-corrected Gaussian driven control ^1 applies a Hamiltonian consisting of a piecewise constant approximation to an ideal Gaussian pulse controlling $\sigma_x$ while its derivative controls the application of the $\sigma_y$ operator: $H(t) = \frac{1}{2}(\Omega_G(t) \sigma_x + \beta \dot{\Omega}_G(t) \sigma_y)$ where $\Omega_G(t)$ is simply given by new_gaussian_control. Optimally, $\beta = -\frac{\lambda_1^2}{4\Delta_2}$ where $\Delta_2$ is the anharmonicity of the system and $\lambda_1$ is the relative strength required to drive a transition $\lvert 1 \rangle \rightarrow \lvert 2 \rangle$ vs. $\lvert 0 \rangle \rightarrow \lvert 1 \rangle$. Note that this choice of $\beta$, sometimes called “simple drag” or “half derivative”, is a first-order version of DRAG, and it excludes an additional detuning corrective term. [1] Motzoi, F. et al. Physical Review Letters 103, 110501 (2009). [2] J. M. Gambetta, F. Motzoi, S. T. Merkel, and F. K. Wilhelm, Physical Review A 83, 012308 (2011).
{"url":"https://docs.q-ctrl.com/references/qctrl-open-controls/qctrlopencontrols/new_drag_control","timestamp":"2024-11-13T08:39:49Z","content_type":"text/html","content_length":"78642","record_id":"<urn:uuid:5e294d4d-27f8-4575-9b83-8541c367e5aa>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00468.warc.gz"}
Inverse Function Calculator Noah Gonzalez MS in Mathematics "With a focus on algebra and calculus, I've helped many students to score impressively. You can be next. Feel free to contact." 2845 Completed Orders 1138 Student Reviews Edward Adorno Ph.D. in Applied Mathematics "My proficiency in mathematical modeling has benefited students in various ways. Let my expertise help you to excel in maths." 2833 Completed Orders 1133 Student Reviews Michael Batres M.Sc. in Mathematics "Along with my Ph.D., I also have experience in teaching. My expertise helps me to simplify complex concepts for students." 2664 Completed Orders 1066 Student Reviews Alvin Bobadilla M.Sc. in Mathematics "My clarity in explanations and industry experience have helped me to resonate with thousands of students. Drop in your queries now." 2542 Completed Orders 1017 Student Reviews Let Our Experts Help Find the Inverse of a Function Mastering math concepts often requires personalized guidance. Beyond our Inverse Function Calculator tool, we offer a team of adept math tutors who can provide one-to-one assistance. Our tutors are subject matter experts and hold advanced degrees in mathematics, ensuring comprehensive understanding and effective support. Elevate your math learning experience by opting for our professional tutors, complementing the utility of our Inverse Function Calculator. Find An Expert
{"url":"https://myassignmenthelp.com/inverse-function-calculator.html","timestamp":"2024-11-02T01:46:14Z","content_type":"text/html","content_length":"144996","record_id":"<urn:uuid:c83f3398-7bda-426f-9fc3-74e69bde5e79>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00877.warc.gz"}
Flashcards - Quals- Robinson 1 1. correlation □ the type (positive or negative) and degree of relationship □ between 2 variables. Does not prove causation. 2. standard deviation- □ average distance that cases within a □ distribution of scores vary from the mean 3. standard scores- □ these describe individual statistics/test □ scores in relation to the bell curve 4. normal curve/ bell curve □ is a graphical representation of a normal distribution. For anything you measure, results across a large population distribute themselves symmetrically. In a normal distribution, the mean median and mode are all equal and the distribution is symmetrical across □ the mean (one half mirrors hte other). The tails are asymptotic in that they come closer and closer to the horizontal axis, but never touch. 68% fall within □ one SD. 95%fall within 2 SD's and 99% fall within 3 SD's. (Internal percentages= 34%, 13.5%, 2%). skewed right=positively skewed and is caused by extreme scores □ at the high end that pull the mean higher than the median (midpoint). skewed left is opposite. 5. significance 6. z score- □ an expression of an individual score in a way that □ conforms to standard deviation units. For example, 1.5 is one and a half standard deviation units above the mean and -.25 is one-quarter of a standard deviation units below the mean. These are useful in comparing scores across different settings , testing situations and tests. 7. T score- □ o sometimes called a McCall T is a standard score resulting □ from a z score transformation T= z(10)+50. The advantage= eliminates negative □ numbers or fractions. 8. mean arithmatic average. measure of central tendency 9. median □ midpoint. the number in the middle of a distribution (15 □ numbers, this would be the 8th in order) □ a measure of central tendency 10. mode □ the most frequently occurring number in a □ distribution. □ measure of central tendency 11. range □ distance between highest and lowest □ a measure of variability 12. variance □ how far apart numbers are (how spread out they are □ from the mean) 13. Standard Error of measurement (SEM)- These are found in standardized tests and are a measure of how much observed scores vary from a true score. The smaller the SEM= the more reliable the test. 14. 3 ways to describe a distribution of a set of scores: 15. central tendency, □ shape □ variability 16. 3 measures of central tendency- □ mean, □ median □ mode 17. concurrent validity- 18. established by comparison with another test or measure of the same criteria. 19. content validity □ is subjective. for example, does the test □ fairly cover what was taught or what experts agree content is appropriate for measurement 20. correlation coefficient- □ identifies numerically the relationship between 2 variables (from -1 to 1 with midpoint of 0 that means no □ correlation) 21. formative evaluation- evaluation aimed at guiding the nextinstructional step 22. norm-referenced test- results interpreted relative to thesuccess rate of other test takers 23. criterion-referenced test- results interpreted relative to #of test items answered correctly,without reference to the success of other test takers. 24. pilot study- used to identify strengths and weaknesses of a tool or process so that they might be revised for future use. 25. summative evaluation- aimed at summarizing a segment of achievement (at the end of instruction) (ie, generating a grade or other symbol of achievement) 26. reliability □ how dependable the test is. The consistency □ with which something is accomplished. 27. validity □ accuracy. Does the test measure what it says it □ measures. 28. variability one of three ways to describe a set of scores. Answers the question "How wide are the differences between the scores?" These can include range, standard deviation and variance.
{"url":"https://freezingblue.com/flashcards/160627/preview/quals-robinson-1","timestamp":"2024-11-05T12:36:20Z","content_type":"text/html","content_length":"18031","record_id":"<urn:uuid:c7a9fb06-63d1-4cb1-aa45-ab40a14fbe7a>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00313.warc.gz"}
Pure mathematics is, in its way, the poetry of logical ideas. It seems to me now that mathematics is capable of an artistic excellence as great as that of any music, perhaps greater; not because the pleasure it gives (although very pure) is comparable, either in intensity or in the number of people who feel it, to that of music, but because it gives in absolute perfection that combination, characteristic of great art, of godlike freedom, with the sense of inevitable destiny; because, in fact, it constructs an ideal world where everything is perfect but true. The greatest calamity in the history of science was the failure of Archimedes to invent positional notation.
{"url":"https://www2.kenyon.edu/Depts/Math/schumacherc/public_html/index.htm","timestamp":"2024-11-06T14:53:23Z","content_type":"application/xhtml+xml","content_length":"6082","record_id":"<urn:uuid:92716db1-86b4-454d-9180-5c82674bec6b>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00398.warc.gz"}
Math Colloquia - Free boundary problems arising from mathematical finance ※ 줌(zoom) 병행Zoom: https://snu-ac-kr.zoom.us/j/87020850293 회의 ID: 87020850293 Many problems in financial mathematics are closely related to the stochastic optimization problem because the optimal decision must be made under the uncertainty. In particular, optimal stopping, singular control, and optimal switching problems in the stochastic optimization problem arising from financial mathematics are formulated into the free boundary problem when the uncertainty follows the Markov process. The optimal strategies to each optimization problem is determined by the free boundary. In this talk, I introduce various free boundary problems in financial mathematics.
{"url":"http://my.math.snu.ac.kr/board/index.php?mid=colloquia&page=8&sort_index=room&order_type=asc&document_srl=813563&l=en","timestamp":"2024-11-03T18:43:14Z","content_type":"text/html","content_length":"44916","record_id":"<urn:uuid:128f8035-f1b6-468f-b01f-2dba28002894>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00453.warc.gz"}
期刊界 All Journals 搜尽天下杂志 传播学术成果 专业期刊搜索 期刊信息化 学术搜索 收费全文 37005篇 免费 7074篇 国内免费 8118篇 测绘学 1728篇 大气科学 7321篇 地球物理 10379篇 地质学 18514篇 海洋学 4183篇 天文学 2269篇 综合类 3604篇 自然地理 4199篇 2024年 149篇 2023年 627篇 2022年 1419篇 2021年 1746篇 2020年 1531篇 2019年 1745篇 2018年 2028篇 2017年 1976篇 2016年 2227篇 2015年 1859篇 2014年 2367篇 2013年 2205篇 2012年 1998篇 2011年 2058篇 2010年 2177篇 2009年 2049篇 2008年 1903篇 2007年 1775篇 2006年 1443篇 2005年 1356篇 2004年 1112篇 2003年 1110篇 2002年 1116篇 2001年 1087篇 2000年 1231篇 1999年 1613篇 1998年 1339篇 1997年 1438篇 1996年 1184篇 1995年 1096篇 1994年 968篇 1993年 852篇 1992年 708篇 1991年 500篇 1990年 361篇 1989年 385篇 1988年 331篇 1987年 220篇 1986年 189篇 1985年 139篇 1984年 115篇 1983年 84篇 1982年 89篇 1981年 66篇 1980年 57篇 1979年 38篇 1978年 23篇 1977年 17篇 1976年 11篇 1958年 33篇 排序方式:共有10000条查询结果,搜索用时 12 毫秒 Parametric transduction offers valuable advantages for underwater acoustic communications. Perhaps the most significant benefit is the fact that high directivity is achieved by means of a physically small transmit transducer. This feature may, ultimately, be employed to permit long-range, low-frequency communication using a compact source. The high directivity is desirable to combat multipath propagation and to achieve data communications in water which is shallow by comparison with range. A real-time, high data-rate “model” differential phase shift keying (DPSK) communication system has been constructed and demonstrated. This system uses parametric transduction, with a 300-kHz primary frequency and a 50-kHz secondary frequency. Experimental results show that the system can be employed to combat multipath propagation in shallow water and can achieve high data-rate text and color image transmission at 10 and 20 kb s for 2-DPSK and 4-DPSK, respectively, through a transmission bandwidth of 10 kHz. The “model” system was developed to confirm performance predictions for a future, operational long-range link employing a 50-kHz primary frequency and a 5-kHz secondary frequency Vertical drains are usually installed in subsoil consisting of several layers. Due to the complex nature of the problem, over the past decades, the consolidation properties of multi‐layered ground with vertical drains have been analysed mainly by numerical methods. An analytical solution for consolidation of double‐layered ground with vertical drains under quasi‐equal strain condition is presented in this paper. The main steps for the computation procedure are listed. The convergence of the series solution is discussed. The comparisons between the results obtained by the present analytical method and the existing numerical solutions are described by figures. The orthogonal relation for the system of double‐layered ground with vertical drains is proven. Finally, some consolidation properties of double‐layered ground with vertical drains are analysed. Copyright © 2001 John Wiley & Sons, Ltd. A numerical scheme is developed in order to simulate fluid flow in three dimensional (3‐D) microstructures. The governing equations for steady incompressible flow are solved using the semi‐implicit method for pressure‐linked equations (SIMPLE) finite difference scheme within a non‐staggered grid system that represents the 3‐D microstructure. This system allows solving the governing equations using only one computational cell. The numerical scheme is verified through simulating fluid flow in idealized 3‐D microstructures with known closed form solutions for permeability. The numerical factors affecting the solution in terms of convergence and accuracy are also discussed. These factors include the resolution of the analysed microstructure and the truncation criterion. Fluid flow in 2‐D X‐ray computed tomography (CT) images of real porous media microstructure is also simulated using this numerical model. These real microstructures include field cores of asphalt mixes, laboratory linear kneading compactor (LKC) specimens, and laboratory Superpave gyratory compactor (SGC) specimens. The numerical results for the permeability of the real microstructures are compared with the results from closed form solutions. Copyright © 2004 John Wiley & Sons, Ltd. A formula for the thickness of a shear band formed in saturated soils under a simple shear or a combined stress state has been proposed. It is shown that the shear band thickness is dependent on the pore pressure properties of the material and the dilatancy rate, but is independent of the details of the combined stress state. This is in accordance with some separate experimental observations. Copyright © 2004 John Wiley & Sons, Ltd. This paper deals with the formation processes and the palaeoenvironmental significance of relict slope deposits located on the uppermost part of the north Portugal mountains. For this purpose, seven key sites representative of the different lithofacies have been selected and analysed in detail. The data show that three main dynamic processes are responsible for the emplacement of regional fossil slope deposits: runoff, debris flows and dry grain flows. The ubiquity of these processes and the lack of frost‐related features or landforms do not support the existence of severe Pleistocene climates in this part of the lberian Peninsula as postulated by previous work. Pedological data gathered at one of the study sites show that a subalpine environment was probably present at 700–800 m altitude between 29 and 14 kyr. Using data from the Pyrenees Mountains, a 6.5 to 12°C depression in mean annual temperature has been tentatively postulated for this Pleniglacial period. Copyright © 2003 John Wiley & Sons, Ltd. The ordinary kriging method, a geostatistical interpolation technique, was applied for developing contour maps of design storm depth in northern Taiwan using intensity–duration–frequency (IDF) data. Results of variogram modelling on design storm depths indicate that the design storms can be categorized into two distinct storm types: (i) storms of short duration and high spatial variation and (ii) storms of long duration and less spatial variation. For storms of the first category, the influence range of rainfall depth decreases when the recurrence interval increases, owing to the increasing degree of their spatial independence. However, for storms of the second category, the influence range of rainfall depth does not change significantly and has an average of approximately 72 km. For very extreme events, such as events of short duration and long recurrence interval, we do not recommend usage of the established design storm contours, because most of the interstation distances exceed the influence ranges. Our study concludes that the influence range of the design storm depth is dependent on the design duration and recurrence interval and is a key factor in developing design storm contours. Copyright © 2003 John Wiley & Sons, Ltd. Using the basic Boussinesq's equation, the expression for the vertical stress distribution (σ ) underneath any point on the ground surface due to a general triangular loaded region in a preferred orientation with a linearly varied loading has been successfully derived. When the triangle is not in a preferred orientation, a simple axis transformation is required and the expression will be equally applicable. Based on this expression, σ due to an arbitrarily shaped loaded foundation can simply be determined by first triangulating the loaded area and summing up the contributions from each generated triangular region. The procedures for triangulating and calculating the stress distribution can be simply automated through computer programs. In many areas of engineering practice, applied loads are not uniformly distributed but often concentrated towards the centre of a foundation. Thus, loads are more realistically depicted as distributed as linearly varying or as parabola of revolution. Solutions for stresses in a transversely isotropic half‐space caused by concave and convex parabolic loads that act on a rectangle have not been derived. This work proposes analytical solutions for stresses in a transversely isotropic half‐space, induced by three‐dimensional, buried, linearly varying/uniform/parabolic rectangular loads. Load types include an upwardly and a downwardly linearly varying load, a uniform load, a concave and a convex parabolic load, all distributed over a rectangular area. These solutions are obtained by integrating the point load solutions in a Cartesian co‐ordinate system for a transversely isotropic half‐space. The buried depth, the dimensions of the loaded area, the type and degree of material anisotropy and the loading type for transversely isotropic half‐spaces influence the proposed solutions. An illustrative example is presented to elucidate the effect of the dimensions of the loaded area, the type and degree of rock anisotropy, and the type of loading on the vertical stress in the isotropic/transversely isotropic rocks subjected to a linearly varying/uniform/parabolic rectangular load. Copyright © 2002 John Wiley & Sons, Ltd.
{"url":"https://td.alljournals.cn/search.aspx?subject=astronomy_earth_science&major=dzx&field=author_name&encoding=utf-8&q=Cheng%E2%80%90Der+Wang","timestamp":"2024-11-08T15:30:15Z","content_type":"application/xhtml+xml","content_length":"64492","record_id":"<urn:uuid:8181fa7e-a58b-49ce-b509-cccf2f3f7534>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00490.warc.gz"}
The density of ethyl alcohol is 0.789 g/mL . What is the volume of 35.5 g of eythl alchol? | Socratic The density of ethyl alcohol is 0.789 g/mL . What is the volume of 35.5 g of eythl alchol? 1 Answer The volume of ethyl alcohol is 45.0 mL We're going to use the formula below to find the volume: • I should mention that when you are talking about liquids, the volume will be expressed in terms of milliliters (mL). For solids, the volume will be expressed in cubic meters ($c {m}^{3}$). We are given the density and mass of ethyl alcohol; so all we have to do is rearrange the equation to solve for the volume: $V \times D = \frac{m}{\cancel{V}} \times \cancel{V}$ $V \times \frac{\cancel{D}}{\cancel{D}} = \frac{m}{D}$ $V = \frac{m}{D}$ Plug in your known values: $V = \left(35.5 \frac{\cancel{\text{g")/(0.789cancel"g}}}{m L}\right)$ Therefore, the volume is 45.0mL Impact of this question 86533 views around the world
{"url":"https://socratic.org/questions/the-density-of-ethyl-alcohol-is-0-789-g-ml-what-is-the-volume-of-35-5-g-of-eythl","timestamp":"2024-11-13T04:42:06Z","content_type":"text/html","content_length":"33774","record_id":"<urn:uuid:4b8c2cc9-485c-44cf-bc51-886b255ac918>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00505.warc.gz"}
Forecasting from Times Series model using Python - Tahsin Hassan Rahit Forecasting from Times Series model using Python Playing with large set of data is always fun if you know how to do it. You can fetch interesting information from it. As part of my Master’s course I have had opportunity to work on forecasting using Times Series modeling. And yes, now I can predict future without being a clairvoyant. 😀 Considering popularity of R in Data Science, I started with R. But soon I realized it has learning curve. So, I opt out for Python. I am not going to write R vs Python because it is already written nicely here in DataCamp’s blog. So, let’s get started. Time series models are very useful models when data points collected at constant time intervals. This time series stationarity is main per-requisite for the dataset. Until unless your time series is stationary, you cannot build a time series model. There is a popular example named “Random Walk”. The summary of the example is prediction becomes more inaccurate as input data is randomize. We will use two completely different dataset to for prediction. • Nile Water Level between AD 622 to AD 1284. Get it here • Air Quality data of Italy taken on Hourly basis. Get it here The reason we are taking two different model because we want to show multiple different Time Series models. ARMA is basically combination of two different time series model AR and MA. AR stands for Auto Regressive and MA stands for Moving Average. In ARMA we work with single dependent variable indexed by time. This method is extremely helpful when we do not have any other data than time and one specific type of data. For example in case of Nile data we only have the water level data which is indexed by time(year). I am giving you warning, if your data is not stationary you will be tearing off all of your hair to fit the model. So, make sure your data is consistent to save your hair. We have used Pandas for Data management. We have used StatsModels ARMA method for prediction. It takes an mandatory parameter order which defines two parameters p,q of ARMA model. p is the order for AR model and MA is for MA model. Full API reference for this function can be found here. StatsModels also provides ARIMA modeling. In case of ARIMA model we just have to pass difference order parameter. The predictors depend on the parameters (p,d,q) of the model. Here is the short description about it. 1. Number of AR (Auto-Regressive) terms (p): AR terms are just lags of a dependent variable. For example if p is 5, the predictors for x(t) will be x(t-1)….x(t-5). 2. Number of MA (Moving Average) terms (q): MA terms are lagged forecast errors in prediction equation. For example if q is 5, the predictors for x(t) will be e(t-1)….e(t-5) where e(i) is the difference between the moving average at the i^th instant and actual value. 3. Number of Differences (d): These are the number of nonseasonal differences, i.e. in this case we took the first order difference. So either we can pass that variable and put d=0 or pass the original variable and put d=1. Both will generate same results. To calculate p and q I have run Autocorrelation Function (ACF) and Partial Autocorrelation Function (PACF). StatsModels have acf and pacf function to do this for us. How I have done it: At first, I have installed Anaconda to get everything I need. Yes. we have gathered the whole zoo to do our data science. Python, Pandas and now Anaconda. Don’t forget you can write your code in IDE named Spider. Whatever… After that, I have used Pandas to read CSV. At first I was trying to fit ARMA on Air data. But since it was hourly data, fitting those data was difficult. Then I moved to Nile data. I was having hard time fitting Nile data because the time span for the data was out of the range of supported timestamp. So, instead of using DateTime index I switched to period range of Pandas. I have generated custom period range to support the time range. The while fitting ARMA model I passed the dates generated by the period range with annual frequency. For p and q order I have used ACF and PACF. I have calculated the lags and then plot it on a graph with bounds. Observing the graph I have taken (2,2) as (p,q) order. After fitting the model I have predicted using the model. Here is the output of the prediction: Linear Regression & Random Forrest Regression: Besides ARMA and ARIMA model we have tried to use other prediction model. One of them are Linear Regression. SciKitLearn provides handful methods to do Linear Regression and Random Forrest Regression. I have ran these models on Air Quality data and got very good output. It has been observed that Random Forrest Regression generates more accurate predictions than Linear Regression. Random Forrest Regression has error rate of 12.8169340263% where as Linear Regression generates output with error rate 29.8718180357%. How I have done it: After reading data from CSV using pandas I have dropped NA(not available) or null values. Then I have omitted extreme values. I have run a co-relation calculation with all the columns against Temperature. The output was following: Then I have run prediction model on on the data. I have considered Temperature “T” as the dependent variable and others as the independent variable. The output was following: Here is the code for everything I described above. 4 thoughts on “Forecasting from Times Series model using Python” 1. My question is how would different would the second part of the problem be if you were to use decomposing instead of differecing for forecasting the time-series? □ q value varies depending on error lag value. Decomposing it will be unable to take actual value into consideration. 2. Also is the Bike sharing Demand question from Kaggle a part of time forecasting question as we are given the demand for some dates and we need to predict demand for upcoming days. □ Yes. It is.
{"url":"http://www.tahsinrahit.com/2016/09/03/forecasting-from-times-series-model-using-python/","timestamp":"2024-11-03T15:27:31Z","content_type":"text/html","content_length":"173842","record_id":"<urn:uuid:60cb9a37-a79a-4f11-b8d7-5283f50dbf83>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00175.warc.gz"}
The considered electrical circuit is analyzed as an object which is to be controlled automatically, therefore, its transmittance and state space representation equations will be written down. Transmittance is given by formula H(s) = \frac{Y(s)}{U(s)} State space representation equations \dot{ \textbf{x} } = A \cdot \textbf{x} + B \cdot \textbf{u} \textbf{y} = C \cdot \textbf{x} + D \cdot \textbf{u} An example which is solved step by step for the considered electric circuit can be found here
{"url":"http://www.mbstudent.com/2018/12/","timestamp":"2024-11-12T06:56:43Z","content_type":"application/xhtml+xml","content_length":"40594","record_id":"<urn:uuid:9bad0e8c-790a-47ef-ae71-ccb623607e2c>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00682.warc.gz"}
Society for Industrial and Applied Mathematics: Optimization Theory and Mathematical ProgrammingAtypon SystemsData-Driven Methods for Dynamic SystemsOptimal Transport: A Comprehensive Introduction to Modeling, Analysis, Simulation, ApplicationsDesign of Delay-Based Controllers for Linear Time-Invariant SystemsNumerical MathematicsNumerical Methods for Least Squares Problems: Second EditionSet-Valued, Convex, and Nonsmooth Analysis in Dynamics and Control: An IntroductionMachine Learning for Asset Management and PricingAn Introduction to Convexity, Optimization, and AlgorithmsIndustrial Mathematics: The 1998 CRSC WorkshopProblems and Solutions for Integer and Combinatorial Optimization: Building Skills in Discrete OptimizationLinear and Nonlinear Optimization 2nd EditionMatrix Analysis for Scientists and EngineersLeast Squares Data Fitting with ApplicationsFoundations of Applied Mathematics Volume 2: Algorithms, Approximation, OptimizationMathematics and Tools for Financial EngineeringApplied Numerical Linear AlgebraSolving Nonlinear Equations with Iterative Methods: Solvers and Examples in JuliaThe Basics of Practical Optimization, Second EditionIntroduction to Nonlinear Optimization: Theory, Algorithms, and Applications with Python and MATLAB, Second EditionMoment and Polynomial Optimization Optimization Theory and Mathematical Programming -- New results matching your topic search.https://epubs.siam.org/action/doSearch?af=R Society for Industrial and Applied Mathematics: Optimization Theory and Mathematical Programming Society for Industrial and Applied Mathematics en-US https://epubs.siam.org/doi/book/10.1137/1.9781611978162?af=R Data-Driven Methods for Dynamic Systems. <br/> Excerpt This book grew out of multiple stimulating conversations with Nathan Kutz while I was a postdoc at the University of Washington. It began with joking about taking our favorite dynamical systems textbook, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields by J. Guckenheimer and P. Holmes, and rewriting it chapter by chapter with modern datadriven techniques. Our idea was to highlight how so much of the dynamical systems theory we were taught can now be explicitly implemented using the widely available computational techniques that typically fall under the umbrella of data analysis. Ideas as simple as optimally fitting data to models could lead the reader right back to Guckenheimer and Holmes's textbook since one is now in a position to apply all the pencil-and-paper techniques that have been developed over more than a century of dynamical systems theory. Similarly, finding changes of variable that recast complex dynamics in the form of the phenomenological models that have been examined in detail by not only Guckenheimer and Holmes but also nearly anyone who has taught or published in dynamical systems is now accessible using neural networks. The goal has always been to showcase a suite of computational methods that can be combined with analysis to better understand the increasingly complicated models and datasets that describe our complex world. Data-Driven Methods for Dynamic Systems. <br/> Excerpt This book grew out of multiple stimulating conversations with Nathan Kutz while I was a postdoc at the University of Washington. It began with joking about taking our favorite dynamical systems textbook, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields by J. Guckenheimer and P. Holmes, and rewriting it chapter by chapter with modern datadriven techniques. Our idea was to highlight how so much of the dynamical systems theory we were taught can now be explicitly implemented using the widely available computational techniques that typically fall under the umbrella of data analysis. Ideas as simple as optimally fitting data to models could lead the reader right back to Guckenheimer and Holmes's textbook since one is now in a position to apply all the pencil-and-paper techniques that have been developed over more than a century of dynamical systems theory. Similarly, finding changes of variable that recast complex dynamics in the form of the phenomenological models that have been examined in detail by not only Guckenheimer and Holmes but also nearly anyone who has taught or published in dynamical systems is now accessible using neural networks. The goal has always been to showcase a suite of computational methods that can be combined with analysis to better understand the increasingly complicated models and datasets that describe our complex world. <p><img src="https://epubs.siam.org/na101/home/literatum/publisher/siam/books/content/ot/2024/ 1.9781611978162/1.9781611978162/20241014-01/1.9781611978162.cover.jpg" alt-text="cover image"/></p> Data-Driven Methods for Dynamic Systems doi:10.1137/1.9781611978162 Jason J. Bramburger Data-Driven Methods for Dynamic Systems 2024-10-14T07:12:40Z 10.1137/1.9781611978162 https://epubs.siam.org/doi/book/10.1137/1.9781611978162?af=R © 2024 by the Society for Industrial and Applied MathematicsAll rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Society for Industrial and Applied Mathematics, 3600 Market Street, 6th Floor, Philadelphia, PA 19104-2688 USA. https://epubs.siam.org/doi/book/10.1137/1.9781611978094?af=R Optimal Transport: A Comprehensive Introduction to Modeling, Analysis, Simulation, Applications. <br/> Excerpt This accessible book begins with an elementary and self-contained chapter on optimal transport on finite state spaces that does not require measure theory or functional analysis. It builds up mathematical theory rigorously and from scratch, aided by intuitive arguments, informal discussion, and carefully selected applications. Optimal Transport: A Comprehensive Introduction to Modeling, Analysis, Simulation, Applications. <br/> Excerpt This accessible book begins with an elementary and self-contained chapter on optimal transport on finite state spaces that does not require measure theory or functional analysis. It builds up mathematical theory rigorously and from scratch, aided by intuitive arguments, informal discussion, and carefully selected applications. <p><img src="https://epubs.siam.org/na101/home/literatum/publisher/siam/books/content/ot/2024/ 1.9781611978094/1.9781611978094/20240917/1.9781611978094.cover.jpg" alt-text="cover image"/></p> Optimal Transport: A Comprehensive Introduction to Modeling, Analysis, Simulation, Applications doi:10.1137/1.9781611978094 Gero Friesecke Optimal Transport: A Comprehensive Introduction to Modeling, Analysis, Simulation, Applications 2024-09-17T04:55:25Z 10.1137/1.9781611978094 https:// epubs.siam.org/doi/book/10.1137/1.9781611978094?af=R © 2024 by the Society for Industrial and Applied MathematicsAll rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Society for Industrial and Applied Mathematics, 3600 Market Street, 6th Floor, Philadelphia, PA 19104-2688 USA. https://epubs.siam.org/doi/book/10.1137/1.9781611978148?af=R Design of Delay-Based Controllers for Linear Time-Invariant Systems. <br/> This book provides the mathematical foundations needed for designing practical controllers for linear time-invariant systems. The authors accomplish this by incorporating intentional time delays into measurements with the goal of achieving anticipation capabilities, reduction in noise sensitivity, and a fast response. The benefits of these types of delay-based controllers have long been recognized, but designing them based on an analytical approach become possible only recently. Design of Delay-Based Controllers for Linear Time-Invariant Systems. <br/> This book provides the mathematical foundations needed for designing practical controllers for linear time-invariant systems. The authors accomplish this by incorporating intentional time delays into measurements with the goal of achieving anticipation capabilities, reduction in noise sensitivity, and a fast response. The benefits of these types of delay-based controllers have long been recognized, but designing them based on an analytical approach become possible only recently. <p><img src="https://epubs.siam.org/na101/home/literatum/publisher/siam/books/content/dc/2024/1.9781611978148/1.9781611978148/20240917/1.9781611978148.cover.jpg" alt-text="cover image"/></p> Design of Delay-Based Controllers for Linear Time-Invariant Systems doi:10.1137/1.9781611978148 Adrián Ramírez Rifat Sipahi Sabine Mondié Rubén Garrido Design of Delay-Based Controllers for Linear Time-Invariant Systems 2024-09-17T04:01:45Z 10.1137/1.9781611978148 https://epubs.siam.org/doi/book/10.1137/1.9781611978148?af=R © 2022 by the Society for Industrial and Applied MathematicsAll rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Society for Industrial and Applied Mathematics, 3600 Market Street, 6th Floor, Philadelphia, PA 19104-2688 USA. https://epubs.siam.org/doi/ book/10.1137/1.9781611978070?af=R Numerical Mathematics. <br/> Excerpt This book is intended as an introduction, at the advanced undergraduate or beginning graduate level, to the mathematics behind practical computational methods for approximating solutions to common problems arising in calculus, differential equations, and linear algebra. The text has developed out of courses I have taught on these topics over the past 10+ years at the University of Kentucky and Portland State University. These courses have always included a mixture of students majoring in mathematics, computer science, and a variety of disciplines within engineering, and this book is written with such an audience in mind. A heavier emphasis has been put on the theory supporting the numerical methods under consideration than is typical in introductory texts (at the advanced undergraduate level) on these topics, but this has not been done at the expense of actual computations. Theory, implementation, and experimentation are all essential to a proper understanding of the subject, and I have tried to strike a good balance between them in the main text and exercises. Supplementary material, including example code and PDF slides containing many of the figures and tables in the text, can be found at https://bookstore.siam.org/ot198/bonus. Numerical Mathematics. <br/> Excerpt This book is intended as an introduction, at the advanced undergraduate or beginning graduate level, to the mathematics behind practical computational methods for approximating solutions to common problems arising in calculus, differential equations, and linear algebra. The text has developed out of courses I have taught on these topics over the past 10+ years at the University of Kentucky and Portland State University. These courses have always included a mixture of students majoring in mathematics, computer science, and a variety of disciplines within engineering, and this book is written with such an audience in mind. A heavier emphasis has been put on the theory supporting the numerical methods under consideration than is typical in introductory texts (at the advanced undergraduate level) on these topics, but this has not been done at the expense of actual computations. Theory, implementation, and experimentation are all essential to a proper understanding of the subject, and I have tried to strike a good balance between them in the main text and exercises. Supplementary material, including example code and PDF slides containing many of the figures and tables in the text, can be found at https://bookstore.siam.org/ot198/bonus. <p><img src="https://epubs.siam.org/na101/home/literatum/publisher/siam/books/content/ot/2024/1.9781611978070/1.9781611978070/20240912/ 1.9781611978070.cover.jpg" alt-text="cover image"/></p> Numerical Mathematics doi:10.1137/1.9781611978070 Jeffrey Ovall Numerical Mathematics 2024-09-12T02:56:35Z 10.1137/1.9781611978070 https:// epubs.siam.org/doi/book/10.1137/1.9781611978070?af=R © 2024 by the Society for Industrial and Applied MathematicsAll rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Society for Industrial and Applied Mathematics, 3600 Market Street, 6th Floor, Philadelphia, PA 19104-2688 USA. https://epubs.siam.org/doi/book/10.1137/1.9781611977950?af=R Numerical Methods for Least Squares Problems: Second Edition. <br/> Excerpt More than 25 years have passed since the first edition of this book was published in 1996. Least squares and least-norm problems have become more significant with every passing decade, and applications have grown in size, complexity, and variety. More advanced techniques for data acquisition give larger amounts of data to be treated. What counts as a large matrix has gone from dimension 1000 to 106. Hence, iterative methods play an increasingly crucial role for the solution of least squares problems. On top of these changes, methods must be adapted to new generations of multiprocessing hardware. Numerical Methods for Least Squares Problems: Second Edition. <br/> Excerpt More than 25 years have passed since the first edition of this book was published in 1996. Least squares and least-norm problems have become more significant with every passing decade, and applications have grown in size, complexity, and variety. More advanced techniques for data acquisition give larger amounts of data to be treated. What counts as a large matrix has gone from dimension 1000 to 106. Hence, iterative methods play an increasingly crucial role for the solution of least squares problems. On top of these changes, methods must be adapted to new generations of multiprocessing hardware. <p><img src="https://epubs.siam.org/na101/home/literatum/publisher/siam/books/content/ot/2024/1.9781611977950/ 1.9781611977950/20240409/1.9781611977950.cover.jpg" alt-text="cover image"/></p> Numerical Methods for Least Squares Problems: Second Edition doi:10.1137/1.9781611977950 Åke Björck Numerical Methods for Least Squares Problems: Second Edition 2024-04-09T02:33:49Z 10.1137/1.9781611977950 https://epubs.siam.org/doi/book/10.1137/1.9781611977950?af=R © 2024 by the Society for Industrial and Applied MathematicsAll rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Society for Industrial and Applied Mathematics, 3600 Market Street, 6th Floor, Philadelphia, PA 19104-2688 USA. https://epubs.siam.org/doi/book/10.1137/1.9781611977981? af=R Set-Valued, Convex, and Nonsmooth Analysis in Dynamics and Control: An Introduction. <br/> Excerpt This book introduces elements of set-valued analysis, convex analysis, and nonsmooth analysis — which are relatively modern branches of mathematical analysis — and highlights their relevance for and applications to the analysis of dynamical systems, especially those that arise or are of interest in control theory and control engineering. Set-Valued, Convex, and Nonsmooth Analysis in Dynamics and Control: An Introduction. <br/> Excerpt This book introduces elements of set-valued analysis, convex analysis, and nonsmooth analysis — which are relatively modern branches of mathematical analysis — and highlights their relevance for and applications to the analysis of dynamical systems, especially those that arise or are of interest in control theory and control engineering. <p><img src="https://epubs.siam.org/na101/home/literatum/publisher/siam/books/content/ot/2024/ 1.9781611977981/1.9781611977981/20240409/1.9781611977981.cover.jpg" alt-text="cover image"/></p> Set-Valued, Convex, and Nonsmooth Analysis in Dynamics and Control: An Introduction doi:10.1137/ 1.9781611977981 Rafal K. Goebel Set-Valued, Convex, and Nonsmooth Analysis in Dynamics and Control: An Introduction 2024-04-09T02:34:19Z 10.1137/1.9781611977981 https://epubs.siam.org/doi/book/ 10.1137/1.9781611977981?af=R © 2024 by the Society for Industrial and Applied MathematicsAll rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Society for Industrial and Applied Mathematics, 3600 Market Street, 6th Floor, Philadelphia, PA 19104-2688 USA. https://epubs.siam.org/doi/book/10.1137/1.9781611977905?af=R Machine Learning for Asset Management and Pricing. <br/> This textbook covers various machine learning methods applied to asset and liability management, as well as asset pricing. We shortened the title to Machine Learning for Asset Management and Pricing for practical reasons, but also more fundamental ones. First, we do not give much space to liabilities in this book. It would not render justice to the field of asset and liability management (ALM) to include liabilities in the title. It is, however, important for a student to realize that the comprehensive problem of ALM can be handled (at least in theory) using the same theories and methods as asset management or liability management. Machine Learning for Asset Management and Pricing. <br/> This textbook covers various machine learning methods applied to asset and liability management, as well as asset pricing. We shortened the title to Machine Learning for Asset Management and Pricing for practical reasons, but also more fundamental ones. First, we do not give much space to liabilities in this book. It would not render justice to the field of asset and liability management (ALM) to include liabilities in the title. It is, however, important for a student to realize that the comprehensive problem of ALM can be handled (at least in theory) using the same theories and methods as asset management or liability management. <p><img src="https://epubs.siam.org/na101/home/literatum/publisher/siam/books/ content/ot/2024/1.9781611977905/1.9781611977905/20240305/1.9781611977905.cover.jpg" alt-text="cover image"/></p> Machine Learning for Asset Management and Pricing doi:10.1137/1.9781611977905 Henry Schellhorn Tianmin Kong Machine Learning for Asset Management and Pricing 2024-03-05T10:32:55Z 10.1137/1.9781611977905 https://epubs.siam.org/doi/book/10.1137/1.9781611977905?af=R © 2024 by the Society for Industrial and Applied MathematicsAll rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Society for Industrial and Applied Mathematics, 3600 Market Street, 6th Floor, Philadelphia, PA 19104-2688 USA. https:// epubs.siam.org/doi/book/10.1137/1.9781611977806?af=R An Introduction to Convexity, Optimization, and Algorithms. <br/> Convex analysis, convex optimization, and algorithms are important topics in modern applied mathematics. In this text, we provide an introduction to a selection of these topics accessible at the advanced undergraduate or beginning graduate level. The only background required is some core knowledge of calculus, linear algebra, and analysis. An Introduction to Convexity, Optimization, and Algorithms. <br/> Convex analysis, convex optimization, and algorithms are important topics in modern applied mathematics. In this text, we provide an introduction to a selection of these topics accessible at the advanced undergraduate or beginning graduate level. The only background required is some core knowledge of calculus, linear algebra, and analysis. <p><img src="https://epubs.siam.org/na101/home/literatum/publisher/siam/books/content/mo/2023/1.9781611977806/ 1.9781611977806/20231220/1.9781611977806.cover.jpg" alt-text="cover image"/></p> An Introduction to Convexity, Optimization, and Algorithms doi:10.1137/1.9781611977806 Heinz H. Bauschke Walaa M. Moursi An Introduction to Convexity, Optimization, and Algorithms 2023-12-20T08:46:48Z 10.1137/1.9781611977806 https://epubs.siam.org/doi/book/10.1137/1.9781611977806?af=R © 2023 by the Society for Industrial and Applied Mathematics https://epubs.siam.org/doi/book/10.1137/1.9780898714678?af=R Industrial Mathematics: The 1998 CRSC Workshop. <br/> Industrial Mathematics: The 1998 CRSC Workshop. <br/><p><img src="https://epubs.siam.org/na101/home/literatum/publisher/siam/books/content/ot/2000/1.9780898714678/1.9780898714678/20231212/1.9780898714678.cover.jpg" alt-text="cover image"/></p> Industrial Mathematics: The 1998 CRSC Workshop doi:10.1137/1.9780898714678 Industrial Mathematics: The 1998 CRSC Workshop 2023-12-12T01:47:25Z 10.1137/1.9780898714678 https://epubs.siam.org/doi/book/ 10.1137/1.9780898714678?af=R © 2000 by the Society for Industrial and Applied MathematicsAll rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Society for Industrial and Applied Mathematics, 3600 Market Street, 6th Floor, Philadelphia, PA 19104-2688 USA. https://epubs.siam.org/doi/book/10.1137/1.9781611977769?af=R Problems and Solutions for Integer and Combinatorial Optimization: Building Skills in Discrete Optimization. <br/> The authors of this book wanted for a long time to have at hand a large number of problems with solutions while teaching. The first author has taught IE303 Modeling and Methods in Optimization for Bilkent University students for longer than 20 years. A graduate student (a PhD candidate) assistant joined next and started compiling solved problems to be used in exams, quizzes, and homework assignments. The first author added some problems of his own. The result is this book. There are not many books out there dedicated to problems in integer optimization and related topics. The book focuses on the topics covered in IE303 Modeling and Methods in Optimization, a third year required course for Industrial Engineering students at Bilkent University. However, it should be useful for any undergraduate student in industrial engineering or in related disciplines. These are the motivations behind the preparation of this book. Problems and Solutions for Integer and Combinatorial Optimization: Building Skills in Discrete Optimization. <br/> The authors of this book wanted for a long time to have at hand a large number of problems with solutions while teaching. The first author has taught IE303 Modeling and Methods in Optimization for Bilkent University students for longer than 20 years. A graduate student (a PhD candidate) assistant joined next and started compiling solved problems to be used in exams, quizzes, and homework assignments. The first author added some problems of his own. The result is this book. There are not many books out there dedicated to problems in integer optimization and related topics. The book focuses on the topics covered in IE303 Modeling and Methods in Optimization, a third year required course for Industrial Engineering students at Bilkent University. However, it should be useful for any undergraduate student in industrial engineering or in related disciplines. These are the motivations behind the preparation of this book. <p><img src="https://epubs.siam.org/na101/home/literatum/publisher/siam/books/content/mo/2023/1.9781611977769/1.9781611977769/20231113/1.9781611977769.cover.jpg" alt-text="cover image"/></p> Problems and Solutions for Integer and Combinatorial Optimization: Building Skills in Discrete Optimization doi:10.1137/1.9781611977769 Mustafa Ç. Pınar Deniz Akkaya Problems and Solutions for Integer and Combinatorial Optimization: Building Skills in Discrete Optimization 2023-11-13T01:07:28Z 10.1137/1.9781611977769 https://epubs.siam.org/doi/book/10.1137/ 1.9781611977769?af=R © 2023 by the Society for Industrial and Applied Mathematics https://epubs.siam.org/doi/book/10.1137/1.9780898717730?af=R Linear and Nonlinear Optimization 2nd Edition. <br/> Linear and Nonlinear Optimization 2nd Edition. <br/><p><img src="https://epubs.siam.org/na101/home/literatum/publisher/siam/books/content/ot/2008/1.9780898717730/1.9780898717730/20230629/ 1.9780898717730.cover.jpg" alt-text="cover image"/></p> Linear and Nonlinear Optimization 2nd Edition doi:10.1137/1.9780898717730 Igor Griva Stephen G. Nash Ariela Sofer Linear and Nonlinear Optimization 2nd Edition 2023-06-29T06:55:40Z 10.1137/1.9780898717730 https://epubs.siam.org/doi/book/10.1137/1.9780898717730?af=R © 2008 by the Society for Industrial and Applied Mathematics https:/ /epubs.siam.org/doi/book/10.1137/1.9780898717907?af=R Matrix Analysis for Scientists and Engineers. <br/> Matrix Analysis for Scientists and Engineers. <br/><p><img src="https://epubs.siam.org/na101/ home/literatum/publisher/siam/books/content/ot/2004/1.9780898717907/1.9780898717907/20230629/1.9780898717907.cover.jpg" alt-text="cover image"/></p> Matrix Analysis for Scientists and Engineers doi:10.1137/1.9780898717907 Alan J. Laub Matrix Analysis for Scientists and Engineers 2023-06-29T06:53:40Z 10.1137/1.9780898717907 https://epubs.siam.org/doi/book/10.1137/1.9780898717907?af=R © 2005 by the Society for Industrial and Applied MathematicsAll rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Society for Industrial and Applied Mathematics, 3600 Market Street, 6th Floor, Philadelphia, PA 19104-2688 USA. https:// epubs.siam.org/doi/book/10.1137/1.9781421407869?af=R Least Squares Data Fitting with Applications. <br/> Least Squares Data Fitting with Applications. <br/><p><img src="https://epubs.siam.org/na101/ home/literatum/publisher/siam/books/content/jh/2013/1.9781421407869/1.9781421407869/20230629/1.9781421407869.cover.jpg" alt-text="cover image"/></p> Least Squares Data Fitting with Applications doi:10.1137/1.9781421407869 Per Christian Hansen Victor Pereyra Goldela Scherer Least Squares Data Fitting with Applications 2023-06-29T02:43:37Z 10.1137/1.9781421407869 https://epubs.siam.org/doi/ book/10.1137/1.9781421407869?af=R © 2013 by the Johns Hopkins University Press https://epubs.siam.org/doi/book/10.1137/1.9781611976069?af=R Foundations of Applied Mathematics Volume 2: Algorithms, Approximation, Optimization. <br/> Foundations of Applied Mathematics Volume 2: Algorithms, Approximation, Optimization. <br/><p><img src="https://epubs.siam.org/na101/home/literatum/publisher/siam/ books/content/ot/2020/1.9781611976069/1.9781611976069/20230629/1.9781611976069.cover.jpg" alt-text="cover image"/></p> Foundations of Applied Mathematics Volume 2: Algorithms, Approximation, Optimization doi:10.1137/1.9781611976069 Jeffrey Humpherys Tyler J. Jarvis Foundations of Applied Mathematics Volume 2: Algorithms, Approximation, Optimization 2023-06-29T08:26:40Z 10.1137/ 1.9781611976069 https://epubs.siam.org/doi/book/10.1137/1.9781611976069?af=R © 2020 by the Society for Industrial and Applied MathematicsAll rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Society for Industrial and Applied Mathematics, 3600 Market Street, 6th Floor, Philadelphia, PA 19104-2688 USA. https://epubs.siam.org/doi/book/10.1137/1.9781611976762?af=R Mathematics and Tools for Financial Engineering. <br/> Investment strategies are becoming more sophisticated due to theoretical developments in finance coupled with advanced software tools and fast computational capabilities and due to the increasing number of financial investment options that are available. The traditional way of solving financial problems and making investment decisions has been replaced with intelligent techniques that involve good understanding of the theory for modeling the dynamic behavior of assets and investments, optimization techniques to choose the best solution from a set of many feasible solutions, and software tools to simulate the theory and generate solutions, evaluate performance and risks, and make predictions. Mathematics and Tools for Financial Engineering. <br/> Investment strategies are becoming more sophisticated due to theoretical developments in finance coupled with advanced software tools and fast computational capabilities and due to the increasing number of financial investment options that are available. The traditional way of solving financial problems and making investment decisions has been replaced with intelligent techniques that involve good understanding of the theory for modeling the dynamic behavior of assets and investments, optimization techniques to choose the best solution from a set of many feasible solutions, and software tools to simulate the theory and generate solutions, evaluate performance and risks, and make predictions. <p><img src="https://epubs.siam.org/na101/home/literatum/publisher/siam/books/content/ot/2021/1.9781611976762/1.9781611976762 /20230629/1.9781611976762.cover.jpg" alt-text="cover image"/></p> Mathematics and Tools for Financial Engineering doi:10.1137/1.9781611976762 Petros A. Ioannou Mathematics and Tools for Financial Engineering 2023-06-29T08:28:00Z 10.1137/1.9781611976762 https://epubs.siam.org/doi/book/10.1137/1.9781611976762?af=R © 2021 by the Society for Industrial and Applied MathematicsAll rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Society for Industrial and Applied Mathematics, 3600 Market Street, 6th Floor, Philadelphia, PA 19104-2688 USA. https://epubs.siam.org/doi/book/10.1137/1.9781611976861?af=R Applied Numerical Linear Algebra. <br/> The original edition of Applied Numerical Linear Algebra emerged from a numerical linear algebra course first taught more than 40 years ago at Carnegie Mellon University and later at Penn State University. The course was targeted to junior and senior undergraduate students and beginning graduate students. Much has changed since then: there have been huge advances in computer speed and algorithms; the development of MATLAB has made it much easier to formulate and solve complex problems; advances in sparse matrix theory have enabled the direct solution of increasingly large linear systems; iterative methods have greatly advanced through the development of MINRES (minimal residual) and GMRES (generalized minimal residual) algorithms; and a solid mathematical foundation for analyzing error propagation in finite precision arithmetic has been achieved. Without a doubt, hardware, software, algorithms, and theory will continue to develop. Applied Numerical Linear Algebra. <br/> The original edition of Applied Numerical Linear Algebra emerged from a numerical linear algebra course first taught more than 40 years ago at Carnegie Mellon University and later at Penn State University. The course was targeted to junior and senior undergraduate students and beginning graduate students. Much has changed since then: there have been huge advances in computer speed and algorithms; the development of MATLAB has made it much easier to formulate and solve complex problems; advances in sparse matrix theory have enabled the direct solution of increasingly large linear systems; iterative methods have greatly advanced through the development of MINRES (minimal residual) and GMRES (generalized minimal residual) algorithms; and a solid mathematical foundation for analyzing error propagation in finite precision arithmetic has been achieved. Without a doubt, hardware, software, algorithms, and theory will continue to develop. <p><img src="https://epubs.siam.org/na101/home/literatum/publisher/siam/books/content/cl/2021/1.9781611976861/1.9781611976861/20230629/1.9781611976861.cover.jpg" alt-text="cover image"/></p> Applied Numerical Linear Algebra doi:10.1137/1.9781611976861 William W. Hager Applied Numerical Linear Algebra 2023-06-29T08:28:40Z 10.1137/1.9781611976861 https://epubs.siam.org/doi/book/10.1137/ 1.9781611976861?af=R © 2021 by the Society for Industrial and Applied MathematicsAll rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Society for Industrial and Applied Mathematics, 3600 Market Street, 6th Floor, Philadelphia, PA 19104-2688 USA. https://epubs.siam.org/doi/book/10.1137/1.9781611977271?af=R Solving Nonlinear Equations with Iterative Methods: Solvers and Examples in Julia. <br/> This book on solvers for nonlinear equations is a user-oriented guide to algorithms and implementation. It is a sequel to [111], which used MATLAB for the solvers and examples. This book uses Julia [17] and adds new material on pseudo-transient continuation, mixed precision solvers, and Anderson acceleration. Solving Nonlinear Equations with Iterative Methods: Solvers and Examples in Julia. <br/> This book on solvers for nonlinear equations is a user-oriented guide to algorithms and implementation. It is a sequel to [111], which used MATLAB for the solvers and examples. This book uses Julia [17] and adds new material on pseudo-transient continuation, mixed precision solvers, and Anderson acceleration. <p><img src="https://epubs.siam.org/na101/home/literatum/publisher/siam/books/content/fa/2022/1.9781611977271/ 1.9781611977271/20230629/1.9781611977271.cover.gif" alt-text="cover image"/></p> Solving Nonlinear Equations with Iterative Methods: Solvers and Examples in Julia doi:10.1137/1.9781611977271 C. T. Kelley Solving Nonlinear Equations with Iterative Methods: Solvers and Examples in Julia 2023-06-29T08:30:40Z 10.1137/1.9781611977271 https://epubs.siam.org/doi/book/10.1137/1.9781611977271?af=R © 2022 by the Society for Industrial and Applied MathematicsAll rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Society for Industrial and Applied Mathematics, 3600 Market Street, 6th Floor, Philadelphia, PA 19104-2688 USA. https:// epubs.siam.org/doi/book/10.1137/1.9781611977370?af=R The Basics of Practical Optimization, Second Edition. <br/> Optimization is presented in most multivariable calculus courses as an application of the gradient, and while this treatment makes sense for a calculus course, there is much more to the theory of optimization. Moreover, optimization is actually used every day in a way that is much different from what one is led to believe in the typical calculus course. Our world and its societies have for many centuries generated interesting and important optimization problems, and the theory of optimization has grown and developed in response to the challenges presented by these problems. In fact, optimization theory continues to be developed today in response to practical concerns encountered in applications, which makes optimization an ideal topic of study in modern applied mathematics. Through the study of optimization theory, the power and beauty of mathematics can be observed in close connection to interesting and relevant problems of our world. The Basics of Practical Optimization, Second Edition. <br/> Optimization is presented in most multivariable calculus courses as an application of the gradient, and while this treatment makes sense for a calculus course, there is much more to the theory of optimization. Moreover, optimization is actually used every day in a way that is much different from what one is led to believe in the typical calculus course. Our world and its societies have for many centuries generated interesting and important optimization problems, and the theory of optimization has grown and developed in response to the challenges presented by these problems. In fact, optimization theory continues to be developed today in response to practical concerns encountered in applications, which makes optimization an ideal topic of study in modern applied mathematics. Through the study of optimization theory, the power and beauty of mathematics can be observed in close connection to interesting and relevant problems of our world. <p><img src="https://epubs.siam.org/na101/home/literatum/publisher/siam/books/content/ot/ 2022/1.9781611977370/1.9781611977370/20230629/1.9781611977370.cover.jpg" alt-text="cover image"/></p> The Basics of Practical Optimization, Second Edition doi:10.1137/1.9781611977370 Adam B. Levy The Basics of Practical Optimization, Second Edition 2023-06-29T08:31:20Z 10.1137/1.9781611977370 https://epubs.siam.org/doi/book/10.1137/1.9781611977370?af=R © 2022 by the Society for Industrial and Applied MathematicsAll rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Society for Industrial and Applied Mathematics, 3600 Market Street, 6th Floor, Philadelphia, PA 19104-2688 USA. https://epubs.siam.org/doi/book/10.1137/ 1.9781611977622?af=R Introduction to Nonlinear Optimization: Theory, Algorithms, and Applications with Python and MATLAB, Second Edition. <br/> Preface to the Second Edition: The second edition features two significant enhancements to the first edition. 1. Python codes were added on top of the existing MATLAB codes to illustrate and demonstrate different aspects of the algorithmic and applicative nature of nonlinear optimization. Since the first edition's publication, Python has become one of the leading software languages for scientific computing and is used in many applications, most notably those arising in data science. Readers interested in implementation may choose to follow either the MATLAB or Python codes which appear, sometimes literally, side by side. A new section on the Python module CVXPY (Section 8.5) describes how to solve convex optimization problems using Python. Introduction to Nonlinear Optimization: Theory, Algorithms, and Applications with Python and MATLAB, Second Edition. <br/> Preface to the Second Edition: The second edition features two significant enhancements to the first edition. 1. Python codes were added on top of the existing MATLAB codes to illustrate and demonstrate different aspects of the algorithmic and applicative nature of nonlinear optimization. Since the first edition's publication, Python has become one of the leading software languages for scientific computing and is used in many applications, most notably those arising in data science. Readers interested in implementation may choose to follow either the MATLAB or Python codes which appear, sometimes literally, side by side. A new section on the Python module CVXPY (Section 8.5) describes how to solve convex optimization problems using Python. <p><img src= "https://epubs.siam.org/na101/home/literatum/publisher/siam/books/content/mo/2023/1.9781611977622/1.9781611977622/20230629/1.9781611977622.cover.jpg" alt-text="cover image"/></p> Introduction to Nonlinear Optimization: Theory, Algorithms, and Applications with Python and MATLAB, Second Edition doi:10.1137/1.9781611977622 Amir Beck Introduction to Nonlinear Optimization: Theory, Algorithms, and Applications with Python and MATLAB, Second Edition 2023-06-29T08:36:40Z 10.1137/1.9781611977622 https://epubs.siam.org/doi/book/10.1137/1.9781611977622?af=R © 2023 by the Society for Industrial and Applied Mathematics https://epubs.siam.org/doi/book/10.1137/1.9781611977608?af=R Moment and Polynomial Optimization. <br/> Moment and polynomial optimization has received high attention in recent decades. It has beautiful theory and efficient methods, as well as broad applications for various mathematical, scientific, and engineering fields. The research status of optimization has been enhanced extensively due to its recent developments. Nowadays, moment and polynomial optimization is an important technique in many fields. Moment and Polynomial Optimization. <br/> Moment and polynomial optimization has received high attention in recent decades. It has beautiful theory and efficient methods, as well as broad applications for various mathematical, scientific, and engineering fields. The research status of optimization has been enhanced extensively due to its recent developments. Nowadays, moment and polynomial optimization is an important technique in many fields. <p><img src="https://epubs.siam.org/na101/home/literatum/publisher/siam/books/content/mo/2023/1.9781611977608/1.9781611977608/20230622/1.9781611977608.cover.jpg" alt-text="cover image"/></p> Moment and Polynomial Optimization doi:10.1137/1.9781611977608 Jiawang Nie Moment and Polynomial Optimization 2023-06-22T07:07:58Z 10.1137/1.9781611977608 https://epubs.siam.org/doi/book/10.1137/ 1.9781611977608?af=R © 2023 by the Society for Industrial and Applied Mathematics
{"url":"https://epubs.siam.org/action/showFeed?type=searchTopic&taxonomyCode=topics&tagCode=topic-otmp","timestamp":"2024-11-02T06:18:24Z","content_type":"application/rdf+xml","content_length":"56478","record_id":"<urn:uuid:836275ef-efff-4fd4-900d-a270ff1bf27f>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00330.warc.gz"}
How to determine the required sample size for a cross-sectional survey in biostatistics? | Hire Some To Take My Statistics Exam How to determine the required sample size for a cross-sectional survey in biostatistics? In article titled “Aims and methods” paper entitled “Aims and methods” paper titled “Questionnaire Size; Findings of Size and Response Rates of Items” published on June 18, 2011 the following lines of text illustrate the limitations of the current technology. This paper states that “An item may be more predictive of future health-related harms or adverse events in the future than a single item, and therefore, has to be excluded from the questionnaire when assessing a population’s effect on health outcomes. Here, only one item is required because of its effect on health”. In this paper, an item from the item (Thing 18) is required per item: Categories indicate the following items contained in the [n]item: Thing 18 Thing 1 Thing 2 Thing 3 Thing 4 Thing 5 Thing 6 view it the same as the last item (Thing 12). Thing 7 is the same as the last item (Thing 14). Thing 8 is the same as the last item (Thing 16). Thing 9 shows potential methods and techniques for assessing the sample size. Thing 10 is the browse around here as the last item. Thing 11 is the same as the last item. Thing 12 is the same as the last item. Thing 13 is the same as the last item. Thing 14 is the same as the last item. And here are the responses for the items: See Additional file 14: Table 1 below for some suggested elements of a population questionnaire — all of them are collected from large surveys, not randomly collected surveys. So, an item that needs to be excluded this way should not be present in the find someone to do my statistics exam (Table 1), because it is a predictor and the items present in the questionnaireHow to determine the required sample size for a cross-sectional survey in biostatistics? A Sample Size is a nominal interval to measure the precision and the sample size needed to conduct a cross-sectional (3-question) study. The parameter “1, 2,…=20,XY” is chosen for each study. It is critical to measure the time required for the 3-question sub-study (12-question) to conclude. Several studies routinely choose 10-pt see it here to reach their thresholds for precision testing, whereas other studies choose multiple subsets and measurement errors become more burdensome with each lower value. Pay Someone To Do My Online Class High School And yet, researchers have determined the required sample size for most (and many more) studies. A two-step design for determining this content sample size is described. By writing 3- and 12-question pairs, a representative sample size of 60-50 kgkg/m (5-10 kg) for a census taker in a North Carolina forest is needed. Figure 1 illustrates the stages of selecting 4- and 12-question samples for the cross-sectional survey. Such a sample size leads to an arbitrarily large (by a margin of around 10% of the required sample size) 1,2,… and 3- and 12-question subsets of about the required sample size because sample sizes are high. If the desired sample size is achieved, one visit their website opt to combine multiple subsets to confirm the required sample size from all four- and 12-question subsets can be used. If the desired sample size is achieved, a second, higher- (not lower) subsample needs to be selected. Figure 1 An example of the number of 1- and 2-question subsamples (2-6 and 5-12) is shown in the table. The table shows that 4- and 12-question subsets (2-6 and 5-12) can be used together to select a sample size of about 60-67 kgkg/m (5-10 kg) for a census taker inHow to determine the required sample size for a cross-sectional survey in biostatistics? Some ways of gauging and controlling the significance of the results read this post here a study. Some ways test statistics are performed. Study types this biostatistics, biometric standards, medical devices, and statistical measures. Some methods of preparing informed consent and sample size are also used: they are commonly reported in medical literature. Some more accurate methods have been developed for determining the required sample size and for counting outcomes that can lead to a sample size underestimate when a study is analyzed in a particular form. A total of 106 bovine immunological assays were address in the present study; the range of accuracy ranges for each assay (approximately 1-5 bovine serum, 1-2 antibodies), and the test frequency (%) are shown. The mean (SD) percent difference is shown. right here were formed for the assay tests used in the present study. Number of testing types was also calculated. I Need Someone To Do My Homework For Me Finally the percent change (97% confidence interval) should be multiplied to account for discrepancies generated by the data collection procedure (but not to account for possible false negatives). Please contact your current advisor for results and guidelines. Categories Test Results The following tabular responses were derived from the test results used in the survey. These were classified into: Objectives. Tests, Proposals, Sample size estimates and data collection procedures. Tests Additional sections of the survey that are meant to keep these categories of results in place. Tests Sample Number – Name of Specification Tests Proposals Confidence Rate (10 points) IgA (high TSH) – IgA ≥ or = normal (40%) IgG (high TSH) – IgG ≥ 90% The percentage change (95% confidence interval) for the test pay someone to do statistics exam their own is shown. The word “under or over” was expressed as data in parentheses.
{"url":"https://hireforstatisticsexam.com/how-to-determine-the-required-sample-size-for-a-cross-sectional-survey-in-biostatistics","timestamp":"2024-11-07T03:58:24Z","content_type":"text/html","content_length":"170188","record_id":"<urn:uuid:306a5cdd-77ca-4f88-b8c9-66c9b4f6ef85>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00247.warc.gz"}
Energy minimizers of the coupling of a Cosserat rod to an elastic continuum Sander, O. and Schiela, A. (2012) Energy minimizers of the coupling of a Cosserat rod to an elastic continuum. Mathematical Modelling, 7 (1). pp. 1118-1123. Official URL: https://dx.doi.org/10.3182/20120215-3-AT-3016.0019... We formulate the static mechanical coupling of a geometrically exact Cosserat rod to an elastic continuum. The coupling conditions accommodate for the difference in dimension between the two models. Also, the Cosserat rod model incorporates director variables, which are not present in the elastic continuum model. Two alternative coupling conditions are proposed, which correspond to two different configuration trace spaces. For both we show existence of solutions of the coupled problems. We also derive the corresponding conditions for the dual variables and interpret them in mechanical terms. Repository Staff Only: item control page
{"url":"http://publications.imp.fu-berlin.de/1870/","timestamp":"2024-11-12T13:06:21Z","content_type":"application/xhtml+xml","content_length":"18740","record_id":"<urn:uuid:10afb0cf-21d9-4933-8910-2f65284f8a6e>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00679.warc.gz"}
The standard Kalman filter is designed mainly for use in linear systems, however, versions of this estimation process have been developed for nonlinear systems, including the extended Kalman filter and the unscented Kalman filter. Since many real-world systems cannot be described by linear models, these nonlinear estimation techniques play a large role in numerous real-world applications. Extended Kalman Filter While the standard Kalman filter is a powerful estimation tool, its algorithms begin to break down when the system being estimated is nonlinear. Fortunately, a version of the standard Kalman filter, known as the extended Kalman filter (EKF), has been extended to nonlinear systems and relies on linearization in estimating these nonlinear systems. Linearization operates on the principle that at a small section around a selected operating point a nonlinear function can be approximated as a linear function. This linearized function can be derived from the nonlinear function using the first-order terms in a Taylor series expansion shown in Equation \ref{nfk:le}. $$\label{nfk:le} \boldsymbol{g}(x) \approx \boldsymbol{g}(a) + \frac{\partial\boldsymbol{g(x)}}{\partial\boldsymbol{x}}\Bigg|_{x = a}(x-a)$$ Using this method of linearization, an EKF will follow the same propagate and update process as the standard Kalman filter, but with a few modifications to the standard equations. During the propagate step, rather than using Equation 5 in Section 2.8, the state vector is instead estimated by evaluating the nonlinear system model equations at the most recent state estimate as shown in Equation \ref{nkf:sp}. Additionally, in the state covariance matrix propagation, the state transition matrix, $\Phi$, is replaced with a matrix $F$, which is a Jacobian matrix containing the first-order partial derivatives of the nonlinear system model equations. $$\label{nkf:sp} \boldsymbol{x}_{k+1} = f(\boldsymbol{x}_k,\boldsymbol{u}_k) \quad\quad\quad F = \frac{\partial\boldsymbol{f}}{\partial\boldsymbol{x}}\Bigg|_{\hat{\boldsymbol{x}}}$$ In the update step, the expected measurement vector is derived using the nonlinear measurement model equations, evaluated at the most recent state estimate as provided in Equation \ref{nkf:mu}. The measurement model matrix in each of the update equations is also replaced with the $H$ Jacobian matrix containing the first-order partial derivatives of the nonlinear measurement model equations. $$\label{nkf:mu} \boldsymbol{y}_{k} = h(\boldsymbol{x}_k) \quad\quad\quad\quad\quad\quad H = \frac{\partial\boldsymbol{h}}{\partial\boldsymbol{x}}\Bigg|_{\hat{\boldsymbol{x}}}$$ Though the EKF can be a powerful tool in estimating states in a nonlinear system, there are some limitations of its use. An EKF is designed in such a way to optimally update the state vector and state covariance matrix, assuming that the state covariance matrix is within the linear region of the linearization. However, if the uncertainties in the state covariance matrix grow to be larger than the size of this linear region, then the state covariance matrix can no longer accurately reflect the actual error in the system and divergence can occur. Typically, an EKF is best suited for applications with enough measurements to keep state uncertainties relatively low. Unscented Kalman Filter While the EKF works well for a majority of nonlinear systems, there are some cases where an EKF is not well suited, such as if the system is very nonlinear or poorly observable. In these particular systems, the unscented Kalman filter (UKF) can provide a more reliable estimation. Most navigation systems do not fall in this category, but the UKF is still seen in some systems. The UKF estimates a nonlinear system by carefully selecting a number of points, known as sigma points, that adequately describe the state vector and associated uncertainty. These sigma points are then propagated through the nonlinear equations to estimate the next state vector and related uncertainty. Though this estimation process is not as prone to divergence, a UKF does require quite a bit of computational efficiency to calculate the sigma points and propagate them through the nonlinear system. This is especially true for systems that have a large state vector, requiring large numbers of sigma points to be calculated and propagated.
{"url":"https://www.vectornav.com/resources/inertial-navigation-primer/math-fundamentals/math-nonlinearkalman","timestamp":"2024-11-06T17:21:03Z","content_type":"text/html","content_length":"39967","record_id":"<urn:uuid:b299cea0-5d74-4770-8f2c-d07f83c13a42>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00462.warc.gz"}
Neuland CTF 2023 Winter - Cryptography Download challenges: Neuland CTF Repository Secrets - Easy Part 1: aynaq{o4f3 Part 2: ..--.- -.... ....- ..--.- .---- ..... ..--.- -. --- --... ..--.- ....- ..--.- Part 3: M05DcllwNzFvbn0= We get three parts of the flag encrypted/encoded by different methods. The first part of the message appears to represent nland{. The fact that { remains the same and the two Ns have been converted to As indicates a shift cipher. The string is ROT 13 encoded; it simply substitutes a letter with the 13th letter after in the alphabet. The second part consists exclusively of dots and dashes, indicating Morse code, which encodes text with two different signal durations. The last part is Base64, a binary-to-text encoding indicated by the = at the end of the sequence used as padding. The flag is nland{b4s3_64_15_NO7_4_3NCrYp71on}. Hash - Easy MD5: e10adc3949ba59abbe56e057f20f883e SHA1: 5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8 LM: 598DDCE2660D3193AAD3B435B51404EE Flag format: nland{<MD5>_<SHA1>_<LM>} in all lowercase The flag can be generated by brute forcing three different hashing algorithms. A hash is a digital fingerprint that uses a hash function to map data of any length to a shorter, fixed-length value. In IT security, these hashes are mostly one-way functions and offer collision resistance, making it easy to calculate a hash but almost impossible to conclude the original string. Therefore, an efficient way to decrypt the hashes is a dictionary attack, where frequently used words are hashed with the respective hashing algorithm and compared with the original. Tools like hashcat make this process easy. We download a dictionary like rockyou.txt and use it as input for hashcat. The command looks like the following: hashcat.exe -a 0 -m 0 hash.txt rockyou.txt hashcat.exe -a 0 -m 100 hash.txt rockyou.txt hashcat.exe -a 0 -m 3000 hash.txt rockyou.txt The hash.txt file contains the hash value to be decrypted. We use the parameter -a to determine the dictionary mode, -m stands for the hash algorithm. The flag is nland{123456_password_qwerty}. Baby - Easy Can you read my message without the private key? c: 24795976732186127960014008753803478286219924961358994925564930277505139413283367757656447224830225064133651246343035441112407129772003927463166449052456907513 e: 65537 n: 67037366790941822378007197878613492487588187468048328737227273255156041659689092651657208107757810805499108569166854436320366276808520739379431210884782583791 The title already reveals that it is about the cryptographic method RSA. Since n only has 158 digits, we have a good chance of finding the two factors, q and p, needed to calculate the private key. FactorDB is an online collection of prime numbers which fortunately stores our fully factored n. The private key d can be calculated with inverse(e) % (p-1) * (q-1). With the private key, the ciphertext c can be decrypted with the equation M = pow(C , d) % n. Python script: from Crypto.Util.number import * p = 7796601204626807 q = 8598280844627430267706791405975187760390046230909096659417881790296619284204527797467017995321195814866230752519838250409205362581256112387913 n = 67037366790941822378007197878613492487588187468048328737227273255156041659689092651657208107757810805499108569166854436320366276808520739379431210884782583791 c = 24795976732186127960014008753803478286219924961358994925564930277505139413283367757656447224830225064133651246343035441112407129772003927463166449052456907513 e = 65537 d = inverse(e,(p-1)*(q-1)) m = pow(c,d,p*q) print("Message: ", long_to_bytes(m)) The flag is nland{ROll1n9_your_Own_r54} All the Colors of Christmas - Medium Santa has a message for you. Flag format: nland{<message>} in all lowercase An LED-illuminated Christmas tree is provided to solve the task, which regularly changes its colors. After looking at it for a while, the following features become apparent: • 6 different colors (green, yellow, blue, light blue, pink, red) • The 6th color is displayed longer • After 18 colors, the Christmas tree shuts down and starts again from the beginning With this information, we can create the following pattern: green yellow yellow blue green yellow yellow blue red blue light blue pink red blue light blue pink light blue pink green red light blue pink green red A quick Google search shows that only a few cryptographic algorithms use colors as a form of representation. One of them is Hexahue, which uses the same colors. Enter the color combination into an online decoder and get the word ho. The flag is nland{hoho}.
{"url":"https://blog.neuland-ingolstadt.de/posts/neuland-ctf-crypto-12-2023/","timestamp":"2024-11-04T18:20:10Z","content_type":"text/html","content_length":"21999","record_id":"<urn:uuid:666c0470-1532-49fe-9b9d-d13361fe5cfc>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00751.warc.gz"}
A High Dynamic Range (HDR) Histogram Package An HdrHistogram histogram supports the recording and analyzing sampled data value counts across a configurable integer value range with configurable value precision within the range. Value precision is expressed as the number of significant digits in the value recording, and provides control over value quantization behavior across the value range and the subsequent value resolution at any given In contrast to traditional histograms that use linear, logarithmic, or arbitrary sized bins or buckets, HdrHistograms use a fixed storage internal data representation that simultaneously supports an arbitrarily high dynamic range and arbitrary precision throughout that dynamic range. This capability makes HdrHistograms extremely useful for tracking and reporting on the distribution of percentile values with high resolution and across a wide dynamic range -- a common need in latency behavior characterization. The HdrHistogram package was specifically designed with latency and performance sensitive applications in mind. Experimental u-benchmark measurements show value recording times as low as 3-6 nanoseconds on modern (circa 2012) Intel CPUs. All Histogram variants can maintain a fixed cost in both space and time. When not configured to auto-resize, a Histogram's memory footprint is constant, with no allocation operations involved in recording data values or in iterating through them. The memory footprint is fixed regardless of the number of data value samples recorded, and depends solely on the dynamic range and precision chosen. The amount of work involved in recording a sample is constant, and directly computes storage index locations such that no iteration or searching is ever involved in recording data values. NOTE: Histograms can optionally be configured to auto-resize their dynamic range as a convenience feature. When configured to auto-resize, recording operations that need to expand a histogram will auto-resize its dynamic range to include recorded values as they are encountered. Note that recording calls that cause auto-resizing may take longer to execute, and that resizing incurs allocation and copying of internal data structures. The combination of high dynamic range and precision is useful for collection and accurate post-recording analysis of sampled value data distribution in various forms. Whether it's calculating or plotting arbitrary percentiles, iterating through and summarizing values in various ways, or deriving mean and standard deviation values, the fact that the recorded value count information is kept in high resolution allows for accurate post-recording analysis with low [and ultimately configurable] loss in accuracy when compared to performing the same analysis directly on the potentially infinite series of sourced data values samples. An HdrHistogram histogram is usually configured to maintain value count data with a resolution good enough to support a desired precision in post-recording analysis and reporting on the collected data. Analysis can include the computation and reporting of distribution by percentiles, linear or logarithmic arbitrary value buckets, mean and standard deviation, as well as any other computations that can supported using the various iteration techniques available on the collected value count data. In practice, a precision levels of 2 or 3 decimal points are most commonly used, as they maintain a value accuracy of +/- ~1% or +/- ~0.1% respectively for derived distribution statistics. A good example of HdrHistogram use would be tracking of latencies across a wide dynamic range. E.g. from a microsecond to an hour. A Histogram can be configured to track and later report on the counts of observed integer usec-unit latency values between 0 and 3,600,000,000 while maintaining a value precision of 3 significant digits across that range. Such an example Histogram would simply be created with a highestTrackableValue of 3,600,000,000, and a numberOfSignificantValueDigits of 3, and would occupy a fixed, unchanging memory footprint of around 185KB (see "Footprint estimation" Code for this use example would include these basic elements: Histogram histogram = new Histogram(3600000000L, 3); // Repeatedly record measured latencies: // Report histogram percentiles, expressed in msec units: histogram.outputPercentileDistribution(histogramLog, 1000.0)}; Specifying 3 decimal points of precision in this example guarantees that value quantization within the value range will be no larger than 1/1,000th (or 0.1%) of any recorded value. This example Histogram can be therefor used to track, analyze and report the counts of observed latencies ranging between 1 microsecond and 1 hour in magnitude, while maintaining a value resolution 1 microsecond (or better) up to 1 millisecond, a resolution of 1 millisecond (or better) up to one second, and a resolution of 1 second (or better) up to 1,000 seconds. At it's maximum tracked value (1 hour), it would still maintain a resolution of 3.6 seconds (or better). Histogram variants and internal representation The HdrHistogram package includes multiple implementations of the • Histogram, which is the commonly used Histogram form and tracks value counts in long fields. • IntCountsHistogram and ShortCountsHistogram, which track value counts in int and short fields respectively, are provided for use cases where smaller count ranges are practical and smaller overall storage is beneficial (e.g. systems where tens of thousands of in-memory histogram are being tracked). • AtomicHistogram, ConcurrentHistogram and SynchronizedHistogram Internally, data in HdrHistogram variants is maintained using a concept somewhat similar to that of floating point number representation: Using a an exponent a (non-normalized) mantissa to support a wide dynamic range at a high but varying (by exponent value) resolution. AbstractHistogram uses exponentially increasing bucket value ranges (the parallel of the exponent portion of a floating point number) with each bucket containing a fixed number (per bucket) set of linear sub-buckets (the parallel of a non-normalized mantissa portion of a floating point number). Both dynamic range and resolution are configurable, with highestTrackableValue controlling dynamic range, and numberOfSignificantValueDigits controlling resolution. Synchronization and concurrent access In the interest of keeping value recording cost to a minimum, the commonly used class and it's variants are NOT internally synchronized, and do NOT use atomic variables. Callers wishing to make potentially concurrent, multi-threaded updates or queries against Histogram objects should either take care to externally synchronize and/or order their access, or use the , or A common pattern seen in histogram value recording involves recording values in a critical path (multi-threaded or not), coupled with a non-critical path reading the recorded data for summary/ reporting purposes. When such continuous non-blocking recording operation (concurrent or not) is desired even when sampling, analyzing, or reporting operations are needed, consider using the Recorder and SingleWriterRecorder variants that were specifically designed for that purpose. Recorders provide a recording API similar to Histogram, and internally maintain and coordinate active/inactive histograms such that recording remains wait-free in the presense of accurate and stable interval sampling. It is worth mentioning that since Histogram objects are additive, it is common practice to use per-thread non-synchronized histograms or SingleWriterRecorders, and using a summary/reporting thread perform histogram aggregation math across time and/or threads. Histograms supports multiple convenient forms of iterating through the histogram data set, including linear, logarithmic, and percentile iteration mechanisms, as well as means for iterating through each recorded value or each possible value level. The iteration mechanisms all provide data points along the histogram's iterated data set, and are available via the following methods: Iteration is typically done with a for-each loop statement. E.g.: for (HistogramIterationValue v : histogram.percentiles(percentileTicksPerHalfDistance)) { for (HistogramIterationValue v : histogram.linearBucketValues(valueUnitsPerBucket)) { The iterators associated with each iteration method are resettable, such that a caller that would like to avoid allocating a new iterator object for each iteration loop can re-use an iterator to repeatedly iterate through the histogram. This iterator re-use usually takes the form of a traditional for loop using the Iterator's methods: to avoid allocating a new iterator object for each iteration loop: PercentileIterator iter = histogram.percentiles().iterator(percentileTicksPerHalfDistance); for (iter.hasNext() { HistogramIterationValue v = iter.next(); Equivalent Values and value ranges Due to the finite (and configurable) resolution of the histogram, multiple adjacent integer data values can be "equivalent". Two values are considered "equivalent" if samples recorded for both are always counted in a common total count due to the histogram's resolution level. Histogram provides methods for determining the lowest and highest equivalent values for any given value, as we as determining whether two values are equivalent, and for finding the next non-equivalent value for a given value (useful when looping through values, in order to avoid double-counting count). Raw vs. corrected recording Regular, raw value data recording into an HdrHistogram is achieved with the recordValue() method. Histogram variants also provide an auto-correcting recordValueWithExpectedInterval() form in support of a common use case found when histogram values are used to track response time distribution in the presence of Coordinated Omission - an extremely common phenomenon found in latency recording systems. This correcting form is useful in [e.g. load generator] scenarios where measured response times may exceed the expected interval between issuing requests, leading to the "omission" of response time measurements that would typically correlate with "bad" results. This coordinated (non random) omission of source data, if left uncorrected, will then dramatically skew any overall latency stats computed on the recorded information, as the recorded data set itself will be significantly skewed towards good results. When a value recorded in the histogram exceeds the expectedIntervalBetweenValueSamples parameter, recorded histogram data will reflect an appropriate number of additional values, linearly decreasing in steps of expectedIntervalBetweenValueSamples, down to the last value that would still be higher than expectedIntervalBetweenValueSamples). To illustrate why this corrective behavior is critically needed in order to accurately represent value distribution when large value measurements may lead to missed samples, imagine a system for which response times samples are taken once every 10 msec to characterize response time distribution. The hypothetical system behaves "perfectly" for 100 seconds (10,000 recorded samples), with each sample showing a 1msec response time value. At each sample for 100 seconds (10,000 logged samples at 1msec each). The hypothetical system then encounters a 100 sec pause during which only a single sample is recorded (with a 100 second value). An normally recorded (uncorrected) data histogram collected for such a hypothetical system (over the 200 second scenario above) would show ~99.99% of results at 1msec or below, which is obviously "not right". In contrast, a histogram that records the same data using the auto-correcting recordValueWithExpectedInterval() method with the knowledge of an expectedIntervalBetweenValueSamples of 10msec will correctly represent the real world response time distribution of this hypothetical system. Only ~50% of results will be at 1msec or below, with the remaining 50% coming from the auto-generated value records covering the missing increments spread between 10msec and 100 sec. Data sets recorded with and with recordValue() and with recordValueWithExpectedInterval() will differ only if at least one value recorded was greater than it's associated expectedIntervalBetweenValueSamples parameter. Data sets recorded with recordValueWithExpectedInterval() parameter will be identical to ones recorded with recordValue() it if all values recorded via the recordValue calls were smaller than their associated expectedIntervalBetweenValueSamples parameters. In addition to at-recording-time correction option, Histrogram variants also provide the post-recording correction methods copyCorrectedForCoordinatedOmission() and addWhileCorrectingForCoordinatedOmission(). These methods can be used for post-recording correction, and are useful when the expectedIntervalBetweenValueSamples parameter is estimated to be the same for all recorded values. However, for obvious reasons, it is important to note that only one correction method (during or post recording) should be be used on a given histogram data set. When used for response time characterization, the recording with the optional expectedIntervalBetweenValueSamples parameter will tend to produce data sets that would much more accurately reflect the response time distribution that a random, uncoordinated request would have experienced. Floating point values and DoubleHistogram variants The above discussion relates to integer value histograms (the various subclasses of and their related supporting classes). HdrHistogram supports floating point value recording and reporting with a similar set of classes, including the histogram classes. Support for floating point value iteration is provided with and related iterator classes ( ). Support for interval recording is provided with Auto-ranging in floating point histograms Unlike integer value based histograms, the specific value range tracked by a (and variants) is not specified upfront. Only the dynamic range of values that the histogram can cover is (optionally) specified. E.g. When a is created to track a dynamic range of 3600000000000 (enough to track values from a nanosecond to an hour), values could be recorded into into it in any consistent unit of time as long as the ratio between the highest and lowest non-zero values stays within the specified dynamic range, so recording in units of nanoseconds (1.0 thru 3600000000000.0), milliseconds (0.000001 thru 3600000.0) seconds (0.000000001 thru 3600.0), hours (1/3.6E12 thru 1.0) will all work just as well. Footprint estimation Due to it's dynamic range representation, Histogram is relatively efficient in memory space requirements given the accuracy and dynamic range it covers. Still, it is useful to be able to estimate the memory footprint involved for a given combination. Beyond a relatively small fixed-size footprint used for internal fields and stats (which can be estimated as "fixed at well less than 1KB"), the bulk of a Histogram's storage is taken up by it's data value recording counts array. The total footprint can be conservatively estimated by: largestValueWithSingleUnitResolution = 2 * (10 ^ numberOfSignificantValueDigits); subBucketSize = roundedUpToNearestPowerOf2(largestValueWithSingleUnitResolution); expectedHistogramFootprintInBytes = 512 + ({primitive type size} / 2) * (log2RoundedUp((highestTrackableValue) / subBucketSize) + 2) * A conservative (high) estimate of a Histogram's footprint in bytes is available via the
{"url":"https://hdrhistogram.github.io/HdrHistogram/JavaDoc/org/HdrHistogram/package-summary.html","timestamp":"2024-11-09T09:15:49Z","content_type":"text/html","content_length":"45430","record_id":"<urn:uuid:c4741ee4-27f4-40c2-b21a-41b42b217e41>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00844.warc.gz"}
Researcher identifiers • geoffrey-beck • IdRef : 197337767 • Google Scholar : https://scholar.google.fr/citations?user=4cRX_t8AAAAJ Wave-turbulence manage to give a statistic description of the effective balance of the mean energy input from a source at low wave-numbers, transfer of energy through reversible non-linearities to higher and higher wave-numbers. The final goal is to derive a wave kinetic equation which describe this cascade from a random initial ocean. But catching the specfic asymtotic regime where the non-linearties are small and the time scale is sufficiently long to allow quasi-resonnance mechanism isn't an easy task. To check the physical relevance of wave-turbulence regime, one can imagine a laboratory experiment where the water surface becomes random with the help of a folationg object whch act as a random shaker. I'm also interseted to interactions of water-waves with a partially immersed body allowed to move freely in the vertical direction. In 2D fluid, the whole system of equations can be reduced to a transmission problem with transmission conditions given in terms of the displacement of the object and of the average horizontal discharge beneath it; these two quantities are in turn determined by two nonlinear ODEs with forcing terms coming from the exterior wave-field. One application of this prject is to recover wave energy by the the solid displacement. The wave energy are transferred to device by electrical cable. I also work on derivation of 1D models of electrical networks from 3D electromagnetic wave propagation by multi-sacle asymptotic analysis of 3D Maxwell equations. Important effortd are devoted to understand the skin-effect due to the high contrast of conductity inside a cable, to take into acconunt singular geometry such as defect on junctions, or comparaison beetween 1D models and 3D simulations. One motivation to reduced complex wave propagtion to simple 1D models is tu use the last ones for wire troubleshooting. One intersting inverse problem is how to recover underlying graph for unknows electrical networks by • 5 • 4 • 4 • 3 • 2 • 2 • 1 • 1 • 1 • 1 • 1 • 1 • 1 • 1 • 1 • 1 • 1 • 1 • 1 • 1 • 1 • 1 • 1 • 1 • 1 • 1 • 1 • 1 • 1 • 1 • 1 • 1 • 1 • 1 • 1
{"url":"https://cv.hal.science/geoffrey-beck","timestamp":"2024-11-09T06:07:49Z","content_type":"application/xhtml+xml","content_length":"127590","record_id":"<urn:uuid:1305b11e-7445-4035-9d9b-6661cfd3b483>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00670.warc.gz"}
Analysis of Optimal Battery State-of-Charge Trajectory for Blended Regime of Plug-in Hybrid Electric Vehicle Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb, Ivana Lučića 5, 10002 Zagreb, Croatia Author to whom correspondence should be addressed. Submission received: 11 October 2019 / Revised: 1 November 2019 / Accepted: 5 November 2019 / Published: 8 November 2019 Plug-in hybrid electric vehicles (PHEV) typically combine several power sources, which call for the use of optimal control strategy design techniques. The PHEV powertrain efficiency can be improved if the battery is gradually discharged by blending fully electric and hybrid driving modes during the whole trip. Here, the battery state-of-charge (SoC) trajectory profile is of particular importance to achieving near-optimal powertrain operation. In order to reveal optimal patterns of SoC trajectory profiles, numerical optimizations of PHEV control variables based on the dynamic programing (DP) algorithm are conducted in the paper. The obtained optimal SoC trajectories are found to form linear-like profiles of minimum length when expressed with respect to travelled distance. Detailed analyses of the DP results point out that the SoC trajectory length is minimized in order to minimize electric losses, which is then reflected in reduced total fuel consumption. This finding is further justified by analyzing the problem of optimal discharging for the simplified battery-only system and for the powertrain as a whole. The impact of engine specific fuel consumption characteristic on the optimal SoC trajectory profile under simplified driving conditions is analyzed, as well. 1. Introduction Plug-in hybrid electric vehicles (PHEV) are proven to be a viable mid-term solution towards ultimate fully electric vehicles (EV), as they overcome main deficiencies of EVs such as high prices and short range, while allowing recharging from power grid. PHEVs typically operate in charge depleting (CD) and charge sustaining (CS) regimes, where in the CD regime pure electric driving is active until the battery is discharged to a predefined lower-limit level, while in the CS regime hybrid driving is activated in order to sustain the battery state-of-charge (SoC) [ ]. In the case of knowing the trip length in advance, it is possible to discharge the battery more gradually under a blended regime and thus further reduce fuel consumption [ ] (typically from 2% to 5% when compared to the CD/CS regime [ ]). The optimal SoC trajectory in the blended regime (expressed with respect to travelled distance) tends to have a nearly-linear minimum-length shape for the zero road grade case [ ], while it can significantly deviate from the linear trend in the presence of varying road grade [ ], low emission zones [ ], and non-uniformly distributed driving patterns during driving cycle [ In order to fully utilize PHEV potential in the blended regime, it is crucial to determine near-optimal SoC reference trajectory in advance for a wide range of driving conditions, which should be provided to powertrain control strategy and its SoC controller [ ]. In [ ], the SoC reference trajectory is calculated by using prediction of upcoming road grade profiles and average driving speeds. In [ ], the road grade preview is employed for proper planning of battery usage during driving, where heuristic rules are used to determine a target SoC prior reaching uphill climbing. A predictive HEV energy management strategy calculating the optimal SoC reference trajectory under uncertainties caused by traffic flow and traffic lights is proposed in [ ]. In [ ], an energy management strategy with road condition preview is proposed, where the optimal SoC reference trajectory is calculated based on predictions of upcoming driving patterns. In order to further reduce fuel/energy consumption, a model predictive control (MPC)-based approach can be used to perform on-line optimizations of PHEV control variables on receding horizon [ ]. In this approach, it is crucial to feed MPC by accurate predictions of future vehicle velocity profile, which can be obtained by using different deterministic or stochastic methods (e.g., based on recurrent neural network [ ]). In [ ], a hierarchical control strategy performing combined minimization of energy- and battery aging-related costs in MPC manner is proposed, where battery aging is tackled by iteratively calculating a proper battery depth-of-charge (DoD). However, due to inability to predict vehicle velocity profiles accurately on longer time horizons, these MPC applications typically rely on relatively short time horizon predictions (around 10 seconds [ ]), and thus cannot ensure global optimality of SoC trajectory. Therefore, the global SoC reference trajectory is typically prepared separately from MPC, and used repeatedly to provide SoC boundary conditions for MPC optimization. Since the SoC reference trajectory is important to achieve near-optimal powertrain operation both in non-predictive and predictive, and rule- and optimization-based control strategies, this paper aims to provide comprehensive analysis of optimal SoC trajectory patterns in support of SoC reference trajectory synthesis. The analysis is conducted systematically, starting by analysis of dynamic programming (DP) optimization results obtained for different driving conditions, and proceeding by analysis of optimal discharging patterns for the case of simplified battery-only system and for the powertrain as a whole. A convexity analysis of the relevant powertrain functional dependencies is conducted to explain the observed optimal SoC trajectory patterns, in order to further gain insights into the optimal powertrain operation for different driving conditions. The main contributions of the paper include: (i) proposing a method of generating optimal SoC trajectories of different length with respect to travelled distance and conducting correlation analyses of obtained results, (ii) clarifying the cause and conditions under which the optimal SoC trajectory has the minimum-length linear pattern, and (iii) analytical proof of optimal SoC trajectory pattern for the simplified scenario of battery-only discharging system. The paper is organized as follows. Section 2 describes mathematical modelling of PHEV powertrain. The DP-based optimization of PHEV control variables and analysis of the corresponding optimization results are presented in Section 3 Section 4 deals with analysis of the optimal SoC trajectory patterns, by considering the optimal battery discharging under various conditions. Concluding remarks are given in Section 5 2. Modelling of PHEV Powertrain Figure 1 a illustrates the parallel PHEV configuration of a city bus powertrain considered herein for the purpose of analysis. The powertrain consists of internal combustion engine (ICE), electric machine (M/ G), lithium-ion battery and automated manual transmission with 12 gears [ ]. When being switched off, the engine can be disconnected from the powertrain by using a clutch, thus enabling electric-only driving. The PHEV powertrain is modelled in the backward-looking manner [ ], where the engine and M/G machine rotational speeds are determined by the vehicle velocity $v v$ and transmission gear ratio as follows: $ω e = ω M G = i o h ω w = i o h v v r w ,$ while the sum of engine and M/G machine torques is obtained from the demanded torque at wheels $τ w$ and the corresponding drivetrain losses: $τ e + τ M G = τ c d i o h = ( τ w η t r ( τ w ) + P 0 ( ω w ) ω w ) i o h .$ In Equations (1) and (2), $i o$ denotes the final drive ratio, $ω w$ the wheel speed, $r w$ the effective tire radius, $η t r$ the transmission efficiency, and $P 0$ the idle-mode power losses (see Figure 1 b,c). The total power demand including power losses can then be defined as: $P d = ω w τ c d = ω w τ w η t r ( τ w ) + P 0 ( ω w ) .$ The total wheel torque $τ w$ is calculated according to longitudinal vehicle dynamics equation [ $τ w = r w ( ( M v + m p a s s ) d v v d t + R 0 ( M v + m p a s s ) g cos ( δ r ) ⏟ F r o l l + ( M v + m p a s s ) g sin ( δ r ) ⏟ F g r a d e + ρ a i r A f C d v v 2 ⏟ F a e r o ) ,$ $M v$ $m p a s s$ are the empty bus mass and the total mass of passengers, respectively, $R 0$ is the rolling resistance coefficient, $ρ a i r$ is the air density, $A f$ is the bus frontal surface, $C d$ is the aerodynamical drag coefficient, $δ r$ is the road grade, and is the gravity acceleration (see Appendix A for numerical values of these parameters). The terms $F r o l l$ $F g r a d e$ $F a e r o$ are rolling, road grade-related and aerodynamic resistances, respectively. The engine specific fuel consumption and M/G machine efficiency are modelled by means of 2D maps, while the corresponding maximum torque characteristics are modelled by 1D maps ( Figure 2 ). The specific fuel consumption map ( $A e k$ ), expressed in g/kWh unit, can readily be transformed to the fuel consumption rate map ( $m ˙ f$ expressed in g/s unit) by using the following expression: $m ˙ f = A e k ( τ e , ω e ) τ e ω e 3.6 · 10 6 .$ The battery is modelled as a charge storage by an equivalent electric circuit ( Figure 3 a), where the open circuit voltage $U o c$ and internal resistance are set to be dependent on the battery SoC ( Figure 3 $S o C ∈ [ 0 , 1 ]$ ). Finally, the battery model is represented by the following state equation [ $S o C ˙ = U o c 2 ( S o C ) − 4 R ( S o C ) P b a t t − U o c ( S o C ) 2 Q m a x R ( S o C ) ,$ $Q m a x$ is the battery charge capacity (here $Q m a x =$ 30 Ah), while $P b a t t$ is the battery power which is determined by M/G machine power $P M G$ $P b a t t = η M G k e f f τ M G ω M G ⏟ P M G .$ The variable $η M G$ is M/G machine efficiency (see Figure 2 b), and $k e f f$ is equal to one for the case of battery charging ( $P b a t t < 0$ ) and −1 for the case of battery discharging ( $P b a t t > 0$ 3. Optimization of PHEV Control Variables This section deals with DP optimization of PHEV control variables for the blended regime, aimed at finding optimal SoC trajectories for which the total fuel consumption is minimized for different driving cycles and conditions. More details on DP-based optimization of PHEV control variables can be found in [ ] and references given therein. 3.1. Optimal Problem Formulation The aim of optimization is to find the values of PHEV control variables in each discrete time step which minimize the cumulative fuel consumption, while satisfying the state- and control variables-related constraints. By introducing the following substitutions for the state variable $S o C$ , control variables $τ e$ $h ,$ and input variables $τ w$ $ω w$ $x = S o C , u = [ τ e h ] T , v = [ τ w ω w ] T ,$ the following discrete-time cost function including cumulative fuel consumption is defined: $J = ∑ k = 1 N F ( x k , u k , v k , k ) ,$ $F ( x k , u k , v k , k ) = m ˙ f , k Δ T + K g { H − ( x k − S o C m i n ) + H − ( S o C m a x − x k ) } + K g { H − ( P b a t t m a x − P b a t t , k ) + H − ( P b a t t , k − P b a t t m i n ) } + K g { H − ( τ e , k − τ e m i n ) + H − ( τ e m a x − τ e , k ) } + K g { H − ( ω e , k − ω e i d l e ) + H − ( ω e m a x − ω e , k ) } + K g { H − ( τ M G , k − τ M G m i n ) + H − ( τ M G m a x − τ M G , k ) } + K g { H − ( ω M G , k − ω M G i d l e ) + H − ( ω M G m a x − ω M G , k ) } ,$ denotes the discrete time step, the total number of discrete time steps, and $Δ T$ the discretization time step. Apart from the fuel consumption within each discrete time step $m ˙ f , k Δ T$ , additional terms are aimed to penalize violation of different constraints. The function $H −$ (.) represents the inverted Heaviside function which is equal to 1 when its argument is negative, while otherwise it is equal to 0. The factor $K g$ is weighting factor which is set to a relatively large value (here $K g = 10 12$ ) in order to avoid constraints violation. The state equation given by Equation (6) is discretized in time in order to take the following discrete-time form: $x k + 1 = f ( x k , u k , v k , k ) , k = 0 , 1 , … , N − 1 .$ The values of initial state variable at $k = 0$ and final state variable at $k = N$ are defined as: $x 0 = S o C i , x f = S o C f .$ An additional term $J f$ penalizing the deviation of the final SoC from the target value $S o C f$ is added to the cost function (9) and the control variables optimization problem as: $min u k ( J f + ∑ k = 1 N F ( x k , u k , v k , k ) ) ,$ $J f = K f ( S o C f − x N ) 2 = K f ( S o C f − f ( x N − 1 , u N − 1 , v N − 1 ) ) 2 ,$ $K f$ denotes a weighting factor (here $K f = 10 6$ The above-formulated optimization problem is solved by using a dynamic programming (DP), which provides globally optimal results for given discretization resolution of the state and control variables ] (set as a trade-off between computational efficiency and the optimization accuracy [ ]). Numerical values of DP optimization parameters are listed in Appendix A 3.2. Optimisation Results DP optimizations of PHEV control variables are conducted for the blended regime and driving cycles shown in Figure 4 . The driving cycle denoted by DUB, including the time profiles of vehicle velocity $v v$ , road grade $δ r$ , and passengers mass $m p a s s$ , has been recorded on a real bus operating on a regular bus route in the city of Dubrovnik. Apart from the varying road grade shown in Figure 4 b (w/grade), the DUB velocity time profile is also considered for zero road grade (w/o grade). The heavy-duty UDDS driving cycle (HDUDDS) is a certification driving cycle, for which the road grade is zero and empty bus is assumed ( = 0). The DUB driving cycle is repeated three times and HDUDDS two times, in order to provide battery discharging to the minimum allowable SoC level (set here to 30%) in the blended regime under the given driving conditions. The obtained optimal SoC trajectories (denoted in blue in Figure 5 ) closely follow the linear profile (red color lines), when they are expressed with respect to travelled distance. Among all possible SoC trajectories spanning between the initial and final SoC values, the linear-like profile has the minimum length. The linear trend is somewhat deteriorated in the case of varying road grade ( Figure 5 a), where low frequency oscillations appear in the SoC trajectory. These oscillations are caused by battery recharging during regenerative braking on negative road grade segments. Similarity between the optimized and minimum-length (linear) SoC trajectory profiles is quantified by giving values of correlation index Figure 5 (calculated by using Matlab function (.)), which approaches to the almost ideal value of 1 in the case of zero road grades, and to somewhat lower but still very high value in the case of varying road grades. 3.3. Generating and Analyzing Optimal SoC Trajectories of Different Length Based on the results presented in Figure 5 it can be hypothesized that the optimality is closely related to the SoC trajectory length, i.e., that the minimum-length SoC trajectory is optimal. In order to test this hypothesis, the optimal SoC trajectories of different length are generated by adding the following additional SoC constraints to the cumulative cost function (9): $J S o C , a d d = ∑ j K S o C , j ( S o C c o n s t r , j − x j ) 2 = ∑ j K S o C , j ( S o C c o n s t r , j − f ( x j − 1 , u j − 1 , v j − 1 ) ) 2 , j ∈ C a .$ These constraints penalize the deviation of SoC from several prescribed values $S o C c o n s t r , j$ defined for th discrete time step, where $C a$ represents the set of these discrete time steps. Although these SoC constraints are already defined as soft constraints, additional flexibility for optimization is introduced by adding a dead zone of 0.05 (i.e., 5%) around the target values $S o C c o n s t r , j$ . This is realized by varying weighting factor $K S o C , j$ , which takes the value 0 if $| S o C c o n s t r , j − x j | < 0.05$ , while, otherwise, it takes the value of 5·10 . The effect of extending the DP optimization with two additional SoC constraints is illustrated in Figure 6 , where the optimal SoC trajectory $S o C D P$ and the cumulative cost function $J i$ are shown with respect to travelled distance. The optimal cumulative cost function $J i$ in the discrete time step can be calculated as: $J i = min u k ( J f + ∑ j ≥ i K S o C , j ( S o C c o n s t r , j − x j ) 2 ⏟ J S o C , a d d + ∑ k = i N F ( x k , u k , v k , k ) ) ,$ and it represents the minimal cumulative cost which can be obtained under the imposed constraints, when starting in the considered current SoC and finishing at the final target SoC ( $S o C f$ = 30%). Extremely large values of $J i$ $J i$ > 10,000) correspond to the SoC values and discrete time steps for which the final (14) and additional SoC constraints (15) cannot be satisfied under the considered driving conditions. It can be observed that additional SoC constraints cause the cumulative cost function to take relatively low values only in the narrow SoC range of ±5% (corresponding to dead-zone width) in the corresponding time instants. Consequently, the optimal SoC trajectory is forced to pass through these narrow SoC ranges, thus achieving the SoC trajectory of different length when compared to the optimal SoC trajectory obtained when no additional SoC constraints are included. Apart from the total fuel consumption $V f$ , the total electric energy losses $E E L , l o s s$ consisting of battery losses $E b a t t , l o s s$ and M/G machine losses $E M / G , l o s s$ are also considered in this analysis: $E E L , l o s s = E b a t t , l o s s + E M / G , l o s s .$ The battery losses are dissipated as a heat on its internal resistance and have quadratic dependence with respect to the battery current $I b a t t$ $∫ I b a t t 2 R d t$ ), while the M/G machine losses depend on the efficiency $η M G$ Figure 2 The normalized SoC trajectory length is calculated as: $L S o C , n o r m = ∑ k = 1 N Δ S o C k 2 + ( Δ s k s f ) 2 ,$ $Δ S o C k$ $Δ s k$ represent the difference of SoC and travelled distance between two consecutive time steps (i.e., $Δ S o C k = S o C k − S o C k − 1$ $Δ s k = s k − s k − 1$ ), respectively, while $s f$ denotes the total travelled distance. Here, only $Δ s k$ is normalized with respect to the total travelled distance $s f$ , because the $Δ S o C k$ already lies in the interval [0,1] by definition and its cumulative sum $∑ k Δ S o C k$ closely approaches value 1 as $∑ k Δ s k / s f = 1$ Figure 7 a shows numerous SoC trajectories obtained by DP optimizations for different randomly generated SoC constraints (see Equation (15)) and the case of 3 x DUB with a zero road grade. Some characteristic optimal SoC trajectories are outlined: Blended which corresponds to the case when no additional SoC constraints are included into DP optimization (cf. Figure 5 b), CD/CS where the battery is first depleted in pure electric driving and then sustained by means of hybrid driving, CS/CD where the battery discharging is maximally postponed, and $m a x L S o C , n o r m$ which has the maximum length among all generated SoC trajectories. Note that CD/CS SoC trajectory reveals the all-electric range for the particular driving cycle to be around 15 km. Figure 7 b–d show the total fuel consumption $V f$ with respect to different metrics, where each point corresponds to one optimal SoC trajectory from Figure 7 Figure 7 b indicates a very high correlation of the total fuel consumption $V f$ with respect to the normalized SoC trajectory length $L S o C , n o r m$ (i.e., larger $V f$ corresponds to larger $L S o C , n o r m$ ). Since all SoC trajectories end up in the same value ( $S o C f$ = 0.3), the observed variations in the fuel consumption for different SoC trajectories may be caused by: (i) different distribution of operating points in the engine specific fuel consumption map, and (ii) different total electric losses calculated by Equation (17). In order to understand these causes better, the total fuel consumptions $V f$ are shown with respect to the engine mean specific fuel consumptions $A e k , m e a n$ Figure 7 c), and with respect to the total electric losses $E E L , l o s s$ Figure 7 d). The results shown in Figure 7 c reveal that the cause (i) may be discarded since the larger total fuel consumption often corresponds to even lower mean specific fuel consumption (note negative correlation). On the other hand, very high positive correlation of the total fuel consumption with respect to total electric losses can be observed in Figure 7 d, thus revealing that the electric losses are responsible for the fuel consumption variations when SoC trajectories of different length are generated. For most of the SoC trajectories, the engine efficiency reflected through the mean specific fuel consumption is somewhat sacrificed, in order to minimize obviously more critical total electric losses and finally to minimize total fuel consumption. It can be observed from Figure 7 a that the blended SoC trajectory has a minimum length and achieves minimal fuel consumption and electric losses among all generated SoC trajectories, while the SoC trajectory with the maximum SoC trajectory length achieves nearly maximum fuel consumption and electric losses. The finding that the optimality of SoC trajectory is closely related to its length minimization can effectively be used for synthesis of SoC reference trajectory. In the simplified case of zero road grade and uniform driving conditions, the nearly optimal SoC reference trajectory $S o C R ( s )$ can be calculated simply as a line spanning between the initial and final SoC values [ $S o C R ( s ) = S o C R ( 0 ) + s S o C R ( s f ) − S o C R ( 0 ) s f .$ The same principle of SoC trajectory length minimization can be adapted to more complex scenarios, such as those related to low emission zones [ ] and varying road grades [ ]. Since the final SoC in Equation (19) can be set to arbitrary value, this SoC synthesis method can be effectively combined with another higher-level system providing the optimal battery depth-of-discharge (DoD) (i.e., final SoC value; [ The proposed approach for SoC reference trajectory synthesis is suitable for practical applications due to its computational simplicity and relatively low requirements on related trip information, as opposed to alternative MPC-based approach relying on computationally costly on-line optimizations. Regarding the trip requirements, only the driving distance is needed, which is known in advance for delivery and public transport vehicles, and could be set by driver or extracted from navigation system for other vehicles. 4. Analysis of Optimal SoC Trajectory Patterns This section is aimed to further explain the observed DP-based optimal SoC trajectory patterns, starting by an analysis of the optimal operation of a battery-only system and following by an analysis of the whole powertrain including the engine, M/G machine and battery. 4.1. Simplified Case of Minimizing Solely Battery Energy Losses First, the problem of discharging battery from the initial SoC value $S o C i$ $S o C i = 0.9$ ) to some predefined final value $S o C f$ $S o C f = 0.3$ ) with the aim of maximizing energy drawn from the battery is analyzed. The useful energy drawn is maximized if the internal battery energy losses $E b a t t , l o s s$ are minimized: $min E b a t t , l o s s = min ∫ 0 t f P b a t t , l o s s d t = min ∫ 0 s f P b a t t , l o s s v v d s , s . t . ∫ 0 t f S o C ˙ d t = S o C f − S o C i .$ The derivative of SoC with respect to travelled distance can be expressed as: $d S o C d s = − I b a t t ( t ) Q m a x 1 v v .$ The battery power losses have quadratic dependence with respect to the battery current $I b a t t$ described by the term $P b a t t , l o s s = I b a t t 2 R ( S o C )$ , which is then combined with Equations (20) and (21), and finally resulting in the following optimization problem with $d S o C / d s$ serving as argument: $min I b a t t ∫ 0 s f I b a t t 2 R ( S o C ) v v d s = min d S o C d s ∫ 0 s f Q m a x 2 R ( S o C ) ( d S o C d s ) 2 v v d s , s . t . ∫ 0 t f S o C ˙ d t = S o C f − S o C i .$ Discretization of Equation (22) leads to: $min Δ S o C r Δ s r ∑ r = 1 N R R ( S o C r ) ( Δ S o C r Δ s r ) 2 v v , r Δ s r , s . t . ∑ r = 1 N R Δ S o C r = S o C f − S o C i ,$ $Δ S o C r$ is the SoC depletion on the route segment of length $Δ s r$ , while $N R$ is the total number of discrete route segments. The factor $Q m a x 2$ is omitted in Equation (23) since it is constant and does not have influence on the optimization problem solution. Under the assumption of the battery internal resistance , vehicle velocity $v v , r$ , and length of all route segments $Δ s r$ will be constant, the optimization problem can be further simplified: $min Δ S o C r Δ s r ∑ r = 1 N R ( Δ S o C r Δ s r ) 2 , s . t . ∑ r = 1 N R Δ S o C r = S o C f − S o C i .$ This assumption related to the resistance is reasonable because it is relatively constant for a wide range of SoC values (see Figure 3 b), while other assumptions are introduced here for the purpose of simplification and analysis. Since the quadratic function is convex, the following expression based on Jensen’s inequality can be $∑ r = 1 N R ( Δ S o C r Δ s r ) 2 N R ≥ ( ∑ r = 1 N R Δ S o C r Δ s r N R ) 2 ,$ where the numerator on the left-hand side of Equation (25) corresponds to the cost function of the optimization problem (24). Now, the minimum of the left-hand side of Equation (25), corresponding to equality of the left-hand side and right-hand side terms, is achieved for the constant value of $Δ S o C r / Δ s r$ for all route segments. By combining the equality constraint from Equation (24) related to the SoC boundary values and posing $Δ S o C r / Δ s r$ to be constant, the following expression for the optimal SoC depletion on route segment is obtained: $Δ S o C r Δ s r = S o C f − S o C i s f ,$ $s f$ represents the total travelled distance. In this case, the optimal battery operation would be to discharge the battery with constant SoC depletion rate, thus resulting in the SoC trajectory with linear shape of minimum length. The same battery discharging problem is further analyzed by means of DP-based optimization in order to analyze the impact of varying battery parameters on SoC trajectory shape. Figure 8 shows SoC trajectories obtained by the constant SoC depletion rate ( $S o C l i n$ ) and by DP optimizations for: (i) the constant battery parameters (the mean values from Figure 3 b are used), and (ii) the SoC-dependent battery parameters ( Figure 3 b). The optimization horizon is set to 400 s in order to enable battery discharging with feasible battery power (mean value of the battery power in this case is equal to 96.6 kW, while the upper limit is 150 kW). It should be emphasized that this is performed only for the purpose of analysis and it is not related to realistic road conditions. In the case of constant battery parameters, the optimal operation is related to a constant SoC depletion rate ( Figure 8 b; slight deviation from the constant value in the case of DP occurs due to discretization effects and the requirement on the final SoC value). The results shown in Figure 8 a point out that the impact of variable battery parameters on the optimal SoC trajectory shape is almost negligible. Figure 8 b shows the optimal SoC depletion rate time profiles, where in the case of variable battery parameters SoC depletion rate slightly deviates from the constant value which is caused by the battery resistance dependence on SoC (cf. the battery resistance profile from Figure 3 b with optimal SoC depletion rate). 4.2. More Realistic Case of Minimizing Fuel Consumption The analysis is extended here to the overall powertrain, which includes the engine, M/G machine, transmission, and battery (see Figure 1 a). In order to study the optimal SoC trajectory with respect to fuel consumption minimization while discharging the battery (i.e., from 90% to 30%), the fuel consumption rate $m ˙ f$ is expressed in dependence on SoC depletion rate $S o C ˙$ for different values of the battery SoC, power demand $P d$ , and the engine speed $ω e$ $m ˙ f = g ( S o C ˙ , S o C , P d , ω e ) .$ The optimal solution for $S o C ˙$ which minimizes the total cumulative fuel consumption can be found analytically if the function $g$ in Equation (27) is convex, under assumption of constant values of $P d$, $S o C$, and $ω e$(i.e., constant vehicle velocity). It can be shown that the optimality is achieved if $S o C ˙$ is kept constant during whole driving cycle and set to the value which would discharge the battery to the predefined minimum value (the same reasoning as in the case of deriving optimal SoC depletion in Equation (26)). The analysis is given here in the time domain, and it is equivalent to the travelled distance domain considered in previous sections because of the constant vehicle velocity assumption introduced here. Figure 9 a shows the graphical representation of the function (27) for several $P d$ values and for $S o C$ = 50%. The corresponding second derivatives are positive over the whole range thus confirming the convexity of the analyzed functions ( Figure 9 b). This convexity analysis is also conducted for a wide set of $P d$ $ω e$ values, and the results are shown in Figure 10 (the function is categorized as non-convex if its second derivative is not strictly positive). According to the results from Figure 10 , the function in Equation (27) is convex for a majority of $P d$ $ω e$ These effects are further illustrated and analyzed for the particular $ω e$ $P d$ values for two engine fuel consumption characteristics (g/s): (i) the original one (obtained from Figure 2 a by using Equation (5)) resulting in the function (27) to be convex, and (ii) the modified one resulting in the function (27) to be concave (see Figure 11 ). Here, the modified engine specific fuel consumption map is introduced solely to demonstrate that the optimal SoC trajectory may differ from the linear one of minimum length depending on the convexity character of function (27). Three different scenarios of battery discharging to the predefined low value of 30% are considered (see related operating points and profiles in Figure 12 • OP1: power demand $P d$ is partly satisfied by the engine and partly by the M/G machine (operating points are kept constant during the whole operation; constant $S o C ˙$ < 0), • OP2—Phase 1: power demand $P d$ is completely satisfied by the engine ($S o C ˙$ = 0), Phase 2: power demand $P d$ is completely satisfied by the M/G machine (constant $S o C ˙$ < 0), • OP3—Phase 1: power demand $P d$ is completely satisfied by the engine which also provides additional power to recharge the battery (constant $S o C ˙ > 0$), Phase 2: power demand $P d$ is completely satisfied by the M/G machine (constant $S o C ˙ < 0$). From the standpoint of lower engine specific fuel consumption and regardless of type of engine fuel consumption characteristic (original or modified), Scenario OP2 is preferable over Scenario OP1, and Scenario OP3 is preferable over Scenario OP2 (see Figure 12 a,b). However, from the standpoint of overall powertrain fuel consumption, Scenario OP1 related to linear SoC trajectory should be optimal if the function $m ˙ f$ $S o C ˙$ is convex (as it is the case with the original characteristic shown in Figure 11 a), while it should be suboptimal in the case of non-convex function (the modified characteristic shown in Figure 11 a). This is confirmed by the results presented in Figure 13 , where the comparative fuel consumption time profiles are shown for different scenarios. This finding can be explained by the fact that it is advantageous to place the engine operating point to somewhat larger specific engine fuel consumption (OP1 vs. OP2 and OP3, see Figure 12 a) in the case of original engine characteristic, and thus avoid relatively large total electric losses whose increase is progressive with the M/G power (i.e., battery power, Figure 12 c). In this case it is optimal to keep $S o C ˙$ constant which results in the SoC trajectory of minimum length. However, in the case of modified engine characteristic, where the difference in the specific fuel consumption between OP2 vs. OP1 and OP3 vs. OP2 is more significant than in the original case, it is advantageous to move the engine operating point in reduced specific fuel consumption region (OP3 and OP2; see Figure 12 a,b) despite the increased electric losses in Phase 2 ( Figure 12 The above analysis contributes to understanding of the tendency of optimal SoC trajectories to be of minimum length (as observed in Figure 5 ), taking into account that the function in Equation (27) is convex in a great majority of operating region ( Figure 10 ) for the original engine specific fuel consumption map. 5. Conclusions This paper has presented the analysis of the optimal battery state-of-charge (SoC) trajectory for the blended operating regime of a parallel plug-in hybrid electric vehicle (PHEV). The analysis was based on the optimization results obtained by using dynamic programming (DP) algorithm, for various driving cycles. It has been found that the optimal SoC trajectories expressed with respect to travelled distance tend to have nearly-linear (i.e. minimum-length) shape for different driving cycles. The linear SoC trajectory was also proven to be optimal both analytically and numerically for a simplified battery-only system based on battery power loss minimization. The analysis was extended to the whole powertrain including the engine, electric machine and battery, where the main aim was to minimize the total fuel consumption. It has been shown that the linear SoC trajectory is also optimal for the whole powertrain in the (actual) case of convex shape of engine fuel mass flow versus SoC depletion rate characteristic. It has been also demonstrated that when modifying the engine specific fuel consumption characteristic to some extent, the optimal SoC trajectory can have significantly different patterns than the minimum-length linear one. In summary, the analyses conducted in this paper have pointed out that the minimum length linear SoC trajectory is optimal because of its feature to minimize the total electric losses and because of flexibility in setting the engine operating points due to a relatively flat engine specific fuel consumption vs. engine power characteristic in a wide range. Author Contributions Conceptualization, J.D., B.Š.; methodology, B.Š., J.S.; software, J.S.; validation, B.Š, J.S. and J.D.; investigation, J.S, B.Š.; writing—original draft preparation, B.Š.; writing—review and editing, B.Š., J.D.; supervision, J.D. It is gratefully acknowledged that this work has been funded by Croatian Science Foundation under the project No. IP-2018-01-8323 (Project Acronym: ACHIEVE), while the initial research effort on the topic had been done through Interreg CE project SOLEZ. Conflicts of Interest The authors declare no conflict of interest. Appendix A. PHEV City Bus Parameters Appendix A.1. Model Parameters Vehicle parameters [ ]: wheel radius, $r w$ = 0.481 m; bus frontal area, $A f$ = 7.52 m ; aerodynamical drag coefficient, $C d$ = 0.70; rolling friction coefficient, $R 0$ = 0.01; empty bus weight, $M v$ = 12,635 kg, final drive ratio, $i o$ = 4.72. Battery parameters: $Q m a x$ = 30 Ah corresponding to battery energy of 19 kWh ($E m a x$ = 19 kWh). Table A1. Transmission gear ratios [ Gear No. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. Gear ratio 14.94 11.73 9.04 7.09 5.54 4.35 3.44 2.70 2.08 1.63 1.27 1.00 Appendix A.2. DP Optimization Parameters Weighting coefficients: $K g = 10 12$, $K f = 10 6$, $K S o C = 5 · 10 5$. Constraints: $S o C m i n = 0.2$, $S o C m a x = 1$, $P b a t t , m i n = − 150$ kW, $P b a t t , m a x = 150$ kW, $ω e , m i n = 0$ rad/s, $ω M G , m i n = 0$ rad/s, $ω e , m a x = 277.5$ rad/s, $ω M G , m a x = 277.5$ rad/s. Figure 1. Parallel configuration of plug-in hybrid electric vehicles (PHEV) powertrain (a), transmission idle-mode power loss map (b), and mechanical efficiency map (c). Figure 2. Engine specific fuel consumption map (a), and electric machine (M/G) machine efficiency map (b), given along with maximum torque lines (denoted in blue). Figure 3. Battery equivalent circuit (a), and dependencies of open-circuit voltage and internal battery resistance with respect to battery state-of-charge (SoC) for a considered lithium iron phosphate battery (b). Figure 4. City bus driving cycle including vehicle velocity ($v v$) (a), road grade ($δ r$ ) (b), and passenger mass ($m p a s s$) (c) time profiles recorded in the city of Dubrovnik (DUB); and velocity time profile for heavy-duty UDDS driving cycle (HDUDDS) which assumes a zero road (d). Figure 5. Optimal SoC trajectories obtained by the dynamic programing (DP) algorithm in blended regime for repetitive DUB driving cycle with varying road grade ( ) and zero road grade ( ); and for repetitive HDUDDS driving cycle ( ) (see Figure 4 ; in the case of DUB driving cycle varying passengers mass from Figure 4 c is used). Figure 6. Visualization of cumulative cost function $J i$(see Equation (16)) along with the optimal SoC trajectory $S o C D P$ for the case of two additional SoC constraints (i.e., $S o C c o n s t r$ = 45% at 1/3 of total trip distance, and $S o C c o n s t r$ = 55% at 2/3 of total trip distance) and 4 x DUB driving cycle with a zero road grade. Figure 7. Set of DP optimal SoC trajectories of different lengths obtained by imposing an additional SoC constraint (14) (a); and corresponding total fuel consumption $V f$ shown with respect to normalized SoC trajectory length $L S o C , n o r m$ (b), mean engine specific fuel consumption $A e k , m e a n$ (c), and total electric energy losses $E E L , l o s s$ (d) (3 x DUB driving cycle when a zero road grade was used). Figure 8. SoC trajectories obtained by constant SoC depletion rate ($S o C l i n$) and by DP optimization for the constant and variable SoC-dependent battery parameters (a), and corresponding SoC depletion rates (b). Figure 9. Fuel consumption rate $m ˙ f$ versus SoC depletion rate $S o C ˙$ (a), and second derivative of $m ˙ f$ versus $S o C ˙$ curve (b), given for several values of demanded power $P d$, engine speed $ω e$ = 184 rad/s and $S o C$ = 50%. Figure 10. Character of $m ˙ f$ vs. $S o C ˙$ dependence (convex or non-convex) for a wide range of engine speeds $ω e$ and driver power demands $P d$ for the case of $S o C$ = 0.5. Figure 11. Illustration of original (convex) and modified (concave) engine fuel consumption rate $m ˙ f$ with respect to SoC depletion rate $S o C ˙$ (a), and the corresponding second derivatives (b) for the case of $S o C$ = 50%, $v v$ = 86 km/h, $ω e$ =$ω M G$ = 184 rad/s, $P d$ = 79.7 kW. Figure 12. Illustration of three different operating scenarios through engine mean specific fuel consumption ( ), engine power ( ), total electric energy losses ( ), and SoC trajectory profile ( ) (the same operating conditions as in Figure 11 $S o C$ = 50%, $v v$ = 86 km/h, $ω e$ $ω M G$ = 184 rad/s, $P d$ = 79.7 kW). Figure 13. Comparative cumulative fuel consumption time profiles for different operating scenarios for original (a) and modified engine fuel consumption characteristics (b). © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http:/ Share and Cite MDPI and ACS Style Škugor, B.; Soldo, J.; Deur, J. Analysis of Optimal Battery State-of-Charge Trajectory for Blended Regime of Plug-in Hybrid Electric Vehicle. World Electr. Veh. J. 2019, 10, 75. https://doi.org/ AMA Style Škugor B, Soldo J, Deur J. Analysis of Optimal Battery State-of-Charge Trajectory for Blended Regime of Plug-in Hybrid Electric Vehicle. World Electric Vehicle Journal. 2019; 10(4):75. https:// Chicago/Turabian Style Škugor, Branimir, Jure Soldo, and Joško Deur. 2019. "Analysis of Optimal Battery State-of-Charge Trajectory for Blended Regime of Plug-in Hybrid Electric Vehicle" World Electric Vehicle Journal 10, no. 4: 75. https://doi.org/10.3390/wevj10040075 Article Metrics
{"url":"https://www.mdpi.com/2032-6653/10/4/75","timestamp":"2024-11-10T08:22:59Z","content_type":"text/html","content_length":"538068","record_id":"<urn:uuid:b785a53e-c092-4ddc-97a1-dd3e14bf704b>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00575.warc.gz"}
We present a fast, efficient method for power distribution network reconfiguration. This is a heuristic algorithm for assigning sources to supply as many loads with power as possible, favoring loads of higher priority over loads of lower priority. This algorithm will be referred to as the Heuristic Solver. To judge the quality of solutions that the Heuristic Solver produces, it will be compared to solutions produced by an integer linear programming formulation. The algorithm used to formulate the network s an integer linear programming problem will be referred to as the ILP Solver. Linear programming is a popular tool or optimization in the field of operations research used to optimize the desired outcome in a mathematical model. The solutions produced by it must necessarily be optimal. However, integer linear programming is an NP hard problem, so we will measure he feasibility of relying on the ILP Solver for power assignment solutions to networks of varying complexity. The Heuristic Solver primarily uses a method called augmented path matching, which, like the Bellman-Ford algorithm, comes from graph theory. In order to describe these algorithms more easily, we’re going to use simple units. We assume that capacities and priority levels are given as whole numbers. Load requirements have unit values. Each of these solvers could just as easily handle floating point values for any of these variables, but we limit them to small integer values for the purpose of demonstrating simply the effectiveness of these algorithms relative to each other. We don’t refer to units such as volts or amperes either—they are abstract units.One other important detail to mention about the results is that these are good numbers for making relative comparisons between the two algorithms, not necessarily for seeing how they are going to perform individually on whatever device it will be run on. There are two reasons for this: one, the language, and two, the hardware.Python 2.7 was used for the development and testing of these solvers, which is a very high-level language written in C. Python is great for rapid development and testing, given that it is easy to make changes with and it is an interpreted language, which means that the source code is compiled by the Python interpreter at the point of execution. The downside to these conveniences is that it is a comparatively slow language. Depending on the task, taking sometimes 10 or 20 times longer to execute than a similar program written in C, which should run approximately as fast as assembly.The hardware used to run all these tests is a laptop computer with a 2 GHz Intel Core2 Duo processor, 6 GB of RAM, and which runs the 64-bit version of Windows 7 Enterprise Edition (Service Pack 1). If this code needed to run on a server, or incorporate network programming to mimic running it on an embedded system, we would have probably chosen to write it for Linux. These test programs could still yet be ported to other operating systems, but as of now it’s using some libraries that are Windows-specific because it was more important to make a sort of interactive toy that could be widely distributed for others to test as well.The given limitations of the language and hardware that are being used don’t matter all that much. There’s no need to super-optimize these solvers for the sake of testing, because it is not necessarily the case that they will be utilized in a program that is written in C or a low-level language. The target program may be written in Java, running on an old desktop in a power station that’s running Windows XP. There are many kinds of other, more esoteric things one might run into in the tech industry. So, running it with Python on an old laptop should be sufficient to compare these solvers. Especially because they both face the same restraints of having to use this sluggish selection of language, OS, and hardware. Perhaps the results that the Heuristic Solver return will impress more, given these limitations? But first, we’re going to look at the ILP Solver. Integer linear programming is a type of linear programming. Linear programming is solving a system of equations that have a linear relationship in order to find the optimal value (minimum or maximum) for an objective function. Constraints are added to the system to bound variables within specified ranges. Integer linear programming just adds the stipulation that the variables must also be integers.The solving of linear programming problems has been relegated to the Python module PuLP, which utilizes the COIN-OR CBC solver. There is a good reference in the Citation section that gives the general idea of how linear programming works and what it is used for [10]. For our purposes, it seems the best way to demonstrate how we make use of it in the ILP Solver is to use an example. Here is a flattened view of a network that needs to be solved: Ordinarily, the first step is to consider any active links and loads to be inactive, but there are no active loads or links in this initial setup. As the diagram shows, there are two sources that supply five loads. Each of the sources have a capacity of 2, which means that they can each supply up to two loads with power. The loads have priorities of 1, 1, 2, 3, and 4, with 1 being the lowest rank in importance and 4 being the highest. Obviously, with two generators each able to supply at most 2 loads, there is a scarcity of supply. In the best case, we can hope for 4 out of 5 loads to be matched with a generator, with one of the priority 1 loads being left out.Let’s look again at this diagram, but with labels included so we can make our linear programming formulation: These are the names of the links and their incompatibilities with other links in the network: The way these links are named is “S->L_N,” where S is the numeric identifier of the source, L is the numeric identifier of the load, and N is the index of link between that particular source and load pair. It is extremely difficult to see in the diagram, but there are two links between S1 and L5. Where more than one link exists, they are displayed on top of each other, and the ones below are one pixel thicker on each side. It is displayed like this because there may be many links between the same source and load, and this is the best way to fit them all in the same graphic. Incompatibilities between links are just that—two or more links that cannot be activated at the same time as one another. So, for example, if 1->1_1 is to be turned on, then the final solution cannot also contain the link 2->4_1. Where a link is listed as incompatible with another, that other link is also listed as incompatible with it. Incompatibilities are also the reason to include the possibility of multiple links between the same source and load—if a source cannot supply a load through one link because of its incompatibilities with other links, then perhaps it can supply it by another link that to the same load that has a different set of incompatibilities. More about incompatibilities and how they occur is in the next section, Flattening the Distribution Network. For now, we just assume the existence of some arbitrary incompatibilities in the example network. There is some nuance to the priority property of these loads. It is not enough to simply rank them in order of importance. There must also be a decision about what those priority levels mean—are two priority level 1 loads equal to one priority level 2 load? What about two priority level 2 loads against one priority level 3 load? Are they worth a number of points equal to their priority value or does it take two of one level to match the one above it? (Would it take three or would it take four level 1 loads to equal a level 3 load?) To be as realistic as possible, the decision about how to interpret priority levels is this— each priority level unequivocally outclasses all the priority levels below it. No amount of level 1 loads will outmatch a single level 2 load. This is because when loads are ranked by a certain level of importance, we assume it is for good reason. It doesn’t matter how many coffee makers can be supplied on the aircraft if the lights go out, because then no one can see to use them. It doesn’t matter how many coffee makers or lights can be supplied if the navigation system goes out. Submit a Comment Cancel reply You must be logged in to post a comment.
{"url":"https://matlab1.com/power-reconfiguration-algorithms/","timestamp":"2024-11-05T23:20:23Z","content_type":"text/html","content_length":"61582","record_id":"<urn:uuid:2347f05d-d98e-4cfa-90ab-5924a99ce927>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00681.warc.gz"}
Division of a Decimal by a Decimal | Dividing Decimals |Decimal Number Division of a Decimal by a Decimal This topic discusses about division of a decimal number by another decimal number. we have done with division of decimal by whole number or by multiples. Now in this content we will learn about division of decimal by another decimal number. Here are certain rules of division of decimal by another decimal number: I: First we need to calculate the no. of places after the decimal in the divisor. II: With as many places in the divisor we will decide whether to divide by 10, 100 or 1000 or more. III. So we divide both the number by 10, 100 or 1000 accordingly to remove the decimal point. For example: 63.9 ÷ 7.1 63.9 ÷ 7.1 There is one place after the decimal point in both the divisor and the dividend so we will divide the number by 10 = 639/10 ÷ 71/10 As we have learnt in other subtopics while dividing a fraction by another fraction is that a fraction is multiplied by the reciprocal of another fraction likewise we will do so = 639/10 × 10/71 = (639 × 10) / (10 × 71) On changing it into lowest terms we get, = 639/71 Now in the normal process of division we will divide both the numbers Now here 71 multiplied by 9 is 639. So answer is 9 Therefore, 63.9 ÷ 7.1 = 9 Here in the above examples we could see that there was equal number of places after the decimal point in the dividend and the divisor. But there may be instances when there will be not equal number of places in both the dividend and the divisor. We will see another example where there is not equal no. of places after the decimal point in the dividend and the divisor. 0.667 ÷ 2.9 0.667 ÷ 2.9 Here we can see that in the divisor there is 1 place after the decimal point whereas, in the dividend there are 3 places after the decimal point. The number of decimal digit in the divisor is 1 so; we will move the decimal point 1 place both in the dividend and divisor. So, we get, 6.67 ÷ 29 Hence now, there is no decimal point in the divisor and hence we can perform the division in the same way as we do division of decimal fraction by a whole number Here in this division we can see that when we are considering the first digit of the dividend that is 6 we cannot divide as 6 is less than the divisor 29 so we need to consider two digits of the dividend that is 66. Now 29 multiplied by 2 is 58. As the division cannot be done with the first digit of the dividend hence we need to put 0 in the quotient and then place the decimal point. Now,58 subtracted from 66 is 8. Along with 8, the last digit 7 came down making it 87 so 29 multiplied by 3 is 87. So, 0.667 ÷ 2.9 = 0.23 From Division of a Decimal by a Decimal to HOME PAGE New! Comments Have your say about what you just read! Leave me a comment in the box below.
{"url":"https://www.first-learn.com/division-of-a-decimal-by-a-decimal.html","timestamp":"2024-11-03T16:08:28Z","content_type":"text/html","content_length":"37925","record_id":"<urn:uuid:046855a9-8306-4e8d-b0f9-c0fa1d0c2f33>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00244.warc.gz"}
A particular model of economic selection - HKT Consultant Evolutionary Theory of the Firm A particular model of economic selection We now describe and analyze a simple evolutionary model that inev­ itably settles eventually into a static equilibrium that closely re­ sembles the competitive equilibrium of orthodox theory. After com­ pleting the formal analysis, we review the critical assumptions that underlie i ts orthodox conclusions, and in so doing identify some of the limitations of informal arguments of the sort advanced by Friedman. The focus here is on selection of two d ifferent kinds of routines. One is the ” technique” that a firm uses in production. The other is the “decision rule” that determines a firm’s rate of capacity utiliza­ tion and thus i ts output level. The industry in question produces a single homogeneous product. All firms in the industry face the same set of technical alternatives for producing their product. All feasible techniques are characterized by fixed input coefficients for variable inputs and constant returns to scale. All techniques have the same ratio of capacity output to capital stock; for convenience, let that ratio equal one. Techniques differ however, in terms of their variable inputs. A firm at any time em­ ploys only one technique. The second routine employed by a firm is i ts capacity utilization rule. Such a rule relates the extent of capacity utilization to the ratio of price to unit variable cost of production. Thus, where P and c are product price and unit variable production cost respectively, and q and k are output and capital (capaci ty) . It is as­ sumed that function a(·) is continuous, monotone nondecreasing, positive for sufficiently large values of its argument, and satisfies 0 ≤ a(·) ≤ 1. A capacity utilization rule may be interpreted as describing the percentage profit margin over variable cost needed to induce the firm to operate at various capacity utilization levels. Factors of production are supplied perfectly elastically to the in­ dustry, and all factor prices are positive and constant over the course of the analysis. Thus all techniques can be characterized and ranked by variable unit production costs. Of course, for any technique total unit production cost is negatively related to the level of capacity utili­ zation. For expositional convenience we assume that there is a unique best technique with unit variable production cost cˆ. We should call attention to the fact that there is not necessarily a unique best (profit-maximizing) capacity utilization rule. It is true that no other rule can beat the rule flagged by orthodox theory: But for any particular P/ c value, any rule that calls for the same out­ put as this one yields the same profit.: The industry faces a strictly downward-sloping, continuous de­ mand price function that relates the price of the product produced to total industry output. The function is defined for all nonnegative out­ put levels. It is assumed that if total industry output is small enough, some technique and capacity utilization rule will yield a positive profit. If industry output is large enough, no technique and utiliza­ tion rule will be profitable. Formally, the system can be characterized as follows. Assuming that all the capacity possessed by a firm employs the same technique and is operated according to the same capacity use rule, the state of firm i at time t can be characterized by the triple (c[U/] a[u], k[it]). Together, the states of all firms at t dete rmine a short-run supply function for period t: Together with the demand-price function this determines P[t] and q for the short-run period. The above as­sumptions concerning h(·) and the a[it](·) guarantee that such a short-run equilibrium always exists. Net profit for firm i is where r is the cost of capital services. 1. Orthodox Equilibrium It is apparent that, given the usual assumptions of orthodox theory, a conventional long-run equilibrium exists in this model. The ortho­ dox assumptions are that firms are faultless profit maximizers, and that there are enough firms in the industry so that firms treat prices as parameters (our capacity utilization rules implicitly presume they do). It is clear that if an equilibrium exists, profit maximization in that equilibrium requires that all operating firms employ the technique with the lowest unit cost. Thus, for all firms with q[i]> 0, c[i] = cˆ. For profits to be nonnegative, equilibrium price, P*, must exceed cˆ. Then profit is maximized with an output determination rule that calls for full capacity utilization at P equals P*. Of course, the orthodox rule has this property. Equilibrium price P* must equal cˆ + r, else profit-maximizing firms would see incentives to change capacity. The assumptions about the demand-price function guarantee that there is a q* such that h (q*) = cˆ + r. This is an equilibrium output and price. At that price, with all firms operating at full capacity 2. Selection Dynamics Is there a selecti,on equilibrium as well- that is, a situation that is a stationary position for an appropriately defined dynamic process in­ volving expansion of profitable firms and contraction of unprofitable ones? If there is such an equilibrium, does it have the same proper­ ties as the orthodox one? To answer these questions, we obviously need to specify the dynamics of the selection Our analysis will rely on the mathematical tools of the theory of finite Markov chains. In order to exploit these tools, there is need to modify and constrain the assumptions made above about production methods and capacity utilization policies. We assume the set of all feasible production techniques is finite, and the set of possible capacity utilization rules finite as well . (The orthodox, profit­ maximizing capacity utilization rule is included in that finite set. ) We further assume that capital comes in discrete packets; thus, at any time a firm possesses an integer-valued n umber of m achines. All ma­ chines used by a firm at any time operate with the same technique and according to the same utilization rule. Thus, as above, the state of a firm at any time can be characterized by a triple -the technique it is using, the capacity u tilization policy it is using, and the number of machines it possesses. Each of these components is a discrete vari­ able. It is also assumed that the total number of firms actually or poten­ tially in the industry is finite and constant, though the mix of extant firms and ” potentials” may change. This number, M, is assumed to be large enough not only to make price-taking b ehavior on the part of firms plausible, but also to support the arguments made b elow about search. Note that because capacity utilization can vary contin­ uously, it is still true that a short-run equilibrium always exists. We will abstract from the processes by which it is achieved. Because the number of machines is integer-valued, the standard argument presented above that a long-run equilibrium exists, based on continuity both of the demand function and of the (profit-maximizing) supply correspondence, no longer can be employed with this model . However, it is clear that the orthodox market equi­ librium “almost” exists if the capacity output of a machine is small enough relative to industry output. Pleading substantive rather than mathematical plausibility, we will assume that there is an orthodox equilibrium – that is, that the output level q * determined by cˆ + r = h (q * ) is an integer. We make the following assumptions about investment. For firms with positive capital stock, if profit is zero, then investment is zero. Extant firm s making positive profits expand probabilistically . There is zero probability that they will decline in size. With positive proba­ bility they remain the same size. With positive probability they add one machine to their stock. It also is possible that they add more than one machine, but there are bounds on their feasible expansion. Ex­ tant firms making negative profits contract probabilistically in the same sense; they certainly do not expand, there is a positive proba­ bility of no change, a positive probability of decline by just one unit, and a positive probability of a greater decline (but the magnitude of the decline is bounded by the firm’s prevailing capital stock). Poten­ tial entrants, firms with zero capital stock, with positive probability (less than one), enter the industry with just one machine, if the rou­ tine pair they are contemplating would yield a positive profit at PI if put into practice. Potential entrants with contemplated routine pairs that yield zero or negative profits do not enter. The foregoing assumptions are expressed formally as follows. For extant firms just breaking even: for extant firms making positive profit: for extant firms making negative profits: k[t+1] = k[t ]+ δ, with δ having same distributional characteristics as above, with Δ = k[t]; for potential entrants contemplating routines that yield positive profit: k[t+1] = 0 or 1, each with positive probability; and for potential entrants contemplating routines that do no better than break even: k[t+1] = 0 A feature that sharply distinguishes our evolutionary models from orthodox ones is that we do not impute to firms the ability to scan instantaneously a large set of decision alternatives. However, our model firms do engage in groping, time-consuming search. In this particular model we make the following assumptions about search. First, the outcome of the search, presuming that a firm is actively searching, is defined in terms of a probability distribution of rou­ tines which will be found by search, perhaps conditional upon a firm’s prevailing ro utines. Second, regardless of the prevailing rou-tines, there is a positive probability that any other technique, decision-rule pair will be found in a search. Third, there is positive probability that a searching firm will find no new routines and will thus necessarily retain its prevailing routines. To complete our characterization of the dynamic system, we need to specify when search occurs. Two sets of considerations, partially opposed to each other, are involved. If the system is to wind up in an equilibrium that resembles an orthodox one, firms must search actively enough to assure that the orthodox actions-such as the use of the lowest- cost production technique- are ultimately found and tried. On the other hand, search must not be so active as to dislodge the system from what would otherwise be a reasonable equilibrium. A variety of assumptions can meet these requirements. Here we as­ sume that firms with positive capacity do not search at all if they are making positive or zero profits; they “satisfice” on their prevailing routines. Potential entrants to the industry (firms with zero capacity) are assumed to be sean:hing always, but when they enter they do so with routines that have passed the profit ability test. 3. Selection Equilibrium In the context of the present model, we shall define a (static) selection equilibrium as a situation in which the states of all extant firms re­ main unchanged, and the roster of extant firms also remains un­ changed. It should now be clear that an orthodox market equilibdum (with an integral number of machines) constitutes such an equilib­ rium for the selection process j ust described. All firms in the industry with positive capacity are just breaking even; therefo re, they are neither expanding nor contracting. Potential entrants continue to search, but no routines can be found that yield a positive profit in orthodox market equilibrium; thus, no actual entry occurs and the orthodox equilibrium values of price and industry output persist indefinitely. It is also clear that under the prevailing assumptions, a selection equilibrium must display most of the significant properties of the orthodox equilibrium. All firms in the industry must be breaking even; otherwise one or more firms will be probabilistically ex­ panding or contracting. P must equal cˆ + r. Price cannot be less than cˆ + r; under such conditions no firm can possibly be breaking even. Price cannot be greater than cˆ + r; otherwise, if some firm finds the best technique and the orthodox best capacity utilization rule, it can make a positive profit. Our assumptions about search guarantee that, sooner or later, some firm, if not an extant firm then a potential en­ trant, will find that pair of routines. If they are found under market conditions that generate a positive profit, an extant firm will proba­ bilistically expand or a potential entrant will probabilistically enter. And at price cˆ + r only firms with the best technique and a decision rule that calls for full capacity utilization at that price will break even; and no firm can do any better than that. Note, however, that there may be selection equilibria in which no firm follows the orthodox capacity utilization rule. If firms follow rules that yield full capacity utilization at the equilibrium price P* = cˆ + r, equilibrium will not be disrupted by the search process. It does not matter what responses the rule yields at other prices. The remaining question is: Will the selection process move the in­ dustry to such an equilibrium state if it is not there initially? Our as­ sumptions imply that it will. The key step in the demonstration in­ volves showing that there is a finite sequence of positive probability state transitions leading from any initial state to an equilibrium state. By a result in F eller (1957, pp. 352-353, 364), this suffices to es­ tablish that, with probability approaching one as time elapses, the industry will achieve an equilibrium state. But there are some pre­ liminaries to be disposed of before giving the central argument. The first thing needed is a precise characterization of the equilib­ rium states. By an “industry state” we simply mean the list of M firm states, where each firm state is characterized by the triple (C[it] , a[it], k[it]) of unit variable cost, capacity utIlization rule, and capacity. Call a capacity utilization rule “eligible” if it yields full capacity utilization at price cˆ + r- that is, if a[(cˆ + r)/cˆ] = 1. The finite set of possible rules contains, by assumption, at least one eligible rule -the ortho­ dox one. An “equilibrium sta te” is one in which aggregate industry capacity is k * = q*, such that h(q* ) = cˆ + r, and all firms with posi­ tive capacity have eligible capacity utilization rules and variable cost cˆ. It is easily seen that in an equilibrium state the price is c + r and the only sort of change that can occur is continuing fu tile search for profitable routines by potential entrants, so s election equilibrium prevails. In the language of the theory of Markov processes, the set E of equilibrium states is a “closed set of states”: Once a state in E occurs, all subsequent states must also be in E. We now show that from a given initial condition, only finitely many industry states can be reached. Since there are finitely many possible routines, the only issue here is whether industry capital can increase indefinitelyi we show that it cannot. Note first that for any pair of routines (c, a) there is a capacity level K(c, a) that is the largest value of capacity k for which the relations can both be satisified. The first relation implies that a(P/c) is posi­ tive; the assumption that all routines are unprofitable at sufficiently high industry output levels then implies that there is a maximum k consistent with the two relations together. As a corollary, note that in any industry state in which the aggregate capacity of firms with rou-tines (c, a) exceeds K(e, a), routine pair (c, a) is unprofi table-the possible existence of other finns producing positive output with other routines only makes it clearer that price must be too low for (c, a) to be profitable. Now consider K¯ = Max K(c, a). Consistent with the transition rules above, no finn can increase its capital to a level in excess of K¯ + Δ from any lower level. Since Δ bounds the possible capital i ncrease k [t+1] – k[t] in a single period, the starting v alue k[t] for such a transition would itself have to exceed K¯. However, since the firm must have some technique (c, a) and kt > K¯> K (c, a), the firm must be unprofitable and expansion is ruled out. Finally, since no firm can increase its capital to a value in excess of K + Δ, in any specific realization of the process the capital of firm i is bounded above by Max (k[i1],K + Δ), where k[i1] is firm i’s capital in the initial industry state. There are, therefore, only finitely many industry states reachable from any initial state. We henceforth confine our dis­ cussion to this finite set of states. It is now possible to be specific as to what constitutes a “large enough” number of firms: the number M of actual and potential firms exceeds K¯. Thus, when aggregate industry capacity is no greater than K¯, there are necessarily some firms with zero capacity -that is, some potential entrants. On the other hand, if ag­ gregate capacity exceeds K¯, at least one firm is making losses and searching. Either way, there is a positive probability that new rou­ tines with cost c and an eligible capacity utilization rule will be adopted. And all firms (extant and potential) displaying such routine pairs -which we may call the eligible firms-can retain them with positive probability for any finite period. We now show that, given a state in which there is at least one eli­ gible firm, it is always possible to take, with positive probability, “a step toward” the set E of equilibrium states. The number of “steps” that separate a given state from E may be counted as k[n] + Ik[e] – k*l, the aggregate capacity of noneligible firms plus the absolute value of the discrepancy between the capacity of eligible firms and k*. Clearly, over a finite set of industry states this number of steps is b ounded. Suppose that the given state is one in which price exceeds cˆ + r. Then clearly k[e] < k*, and a one-machine increase in capacity by an eligible firm, with no other change in firm states, is a positive probability step that reduces the distance to E. Suppose on the other hand that the state is one in which price is less than or equal to cˆ + r. The noneligible firms necessarily make losses, and if there are any such with positive capacity, a one-machine decrease in capacity by one of them is a positive probability step that reduces the distance to E. If k[n]= 0, this sort of step is not possible, but in this case we neces­ sarily have k[e] ≥ k*. If the strict inequality holds, a one-machine capacity reduction by an eligible firm is an appropriate positive probability step, while , if the equality holds, the given state is already in E. Iteration of this argument shows that, fro m any initial state, E is reachable by finitely many steps of positive probability under the stated assumptions on transition probabilities. Thus, according to the previously cited passages in Feller (1957), there is probability one that E will eventually be reached . Unorthodox equilibria. To underscore the point that it matters what rules are tried, consider what would happen if neither the orthodox rule nor any other eligible rule were included in the set of possible capaci ty utilization rules . Then orthodox equilibrium with full util i­ zation would be impossible, for a price high enough to induce full utilization would be more than high enough to induce firms to ex­ pand capacity. There might, however, be a selection equilibrium, as the fol lowing proof sketch shows. Maintain all of the assumptions of the above analysis except the assumption that at least one capacity utilization rule is eligible. For every rule a, there is a lowest price consistent with breaking even when variable cost is e-that is, a lowest price consistent with Denote by P** the lowest such price over all possible rules a, and by aˆ the capacity utilization rate at which this minimum price is achieved . Adapting the earlier convenience assumption for dealing with the indivisibility of capital, we now assume that there is an integral value of capital, k**, that satisfies Call a capacity utilization rule “pseudo-eligible” if it yields capacity utilization rate aˆ when P**/cˆ is the prevailing price/cost ratio. Now the argument simply follows the path of the foregoing analysis, with “pseudo-eligi ble” replacing “eligible” and p **, k**, and aˆ k** re­-placing p*, k*, and q* respectively. The conclusioI n is that a selection equilibrium with capacity utilization rate a will ultimately be achieved. 4. Commentary Even under our original assumption that the orthodox rule is among those tried, a selection equilibrium does not correspond to an ortho­ dox market equilibrium. Since an issue of considerable generality and conceptual importance is involved, the point deserve s emphasis. The class of “eligible” capacity utilization rules does not consist merely of the orthodox optimal rule, but includes all rules whose ac­ tion implications agree with those of the orthodox rule at the equilib­ rium ul timately achieved. Nothing precludes the achievement of a selection equilibrium in which some or all firms display eligible but nonoptimal capacity utilization rules -a proposition that follows from the observation that nothing would disrupt such an equilib­rium if it happened to be achieved. Indeed, if the orthodox rule were not included in the feasible set, but other eligible rules were, neither the character of the equilibrium position nor the argument con-cerning its achievement would be affected. An example of an eligible but not optimal rule would be the capacity utilization counterpart of “full-cost pricing” a rule that would shut down entirely whenever p < cˆ + r and produce to capacity when P≥ cˆ + r. If interest attached only to the characteristics of an equilibrium achieved by a single once-and-for-all selection process, the fact that surviving rules might yield nonoptimal behavior out of equilibrium would be of no more consequence than the fact that the rules of po­tential entrants might be nonoptimal if actually employed. But ortho­ dox theory is m uch concerned, and properly so, with the analysis of displacements of equilibrium – the problem of what happens if some parameter of t he equilibrium position changes. There is also the question, less emphasized by orthodoxy, of the characteristics of ad­ justment paths between equilibria. For these purposes, it matters that nonoptim al rules may survive in selection equilibrium. A change in demand or cost conditions that shak es the system out of an orthodox-type selection equil ibrium does not necessarily initiate the sort of adjustment process contemplated by orthodoxy, for the process might well be dominated by rules that produce, under dise­ quilibrium condit ions, actions much different from the orthodox ones. And if the orthodox rules are not included among those actu­ ally tried, the fact that the system achieves an orthodox-type equilib­ rium at one set of parameter values does not assure that the orthodox result would also be mimicked for another set. For exam ple, the capacity utilization rule “Produce to capacity only if price is at least fifteen percent in excess of unit variable cost” is not eligible if r is less than . 15cˆ. The general issue here is this. A historical process of evolutionary change cannot be expected to “test” all possible behavioral implica­ tions of a given set of routines, much less test them all repeate dly. It is only against the environmental conditions that persist for ex­ tended periods (and in this loose sense are “equilibrium” conditions) that routines are t ho roughly tested. There is no reason to expect, therefo re, that the surviving patterns of behavior of a historical selec-tion process are well adapted for novel conditions not repeatedly en­ countered in that process. In fact, there is good reason to expect the opposite, since selection forces m ay be expected to be “sensible” and to trade off maladaptation under unusual or unencountered condi­ tions to achieve good adaptations to conditions frequently encoun­ tered. In a context of progressive change, therefore, one should not expect to observe ideal adaptation to current conditions by the prod­ ucts of evolutionary processes. Source: Nelson Richard R., Winter Sidney G. (1985), An Evolutionary Theory of Economic Change, Belknap Press: An Imprint of Harvard University Press.
{"url":"https://sciencetheory.net/a-particular-model-of-economic-selection/","timestamp":"2024-11-11T10:04:48Z","content_type":"text/html","content_length":"133926","record_id":"<urn:uuid:46677174-71e4-4d7e-a4e9-08ab1ecaf393>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00118.warc.gz"}
NCERT Solutions for Class 12 Physics Chapter 7 Alternating Current NCERT Solutions for Class 12 Physics Chapter 7 Alternating Current is a study material designed for students pursuing science in their senior secondary education. The NCERT Solutions for Class 12 Physics Chapter 7 Alternating Current covers various topics such as RMS and peak values, power in AC circuits, LC oscillations, LCR series and parallel circuits, and many more. These concepts are crucial for students who plan to pursue further studies in physics, as it provides a strong foundation in the subject. By the end of the course, students will have a thorough understanding of the principles and applications of alternating current, which will be useful in their future studies and career. You can read and download NCERT Book Solution to get a better understanding of all topics and concepts. This chapter covers following topics: 7.1 – INTRODUCTION 7.2 – AC VOLTAGE APPLIED TO A RESISTOR 7.3 – REPRESENTATION OF AC CURRENT AND VOLTAGE BY ROTATING VECTORS — PHASORS 7.4 – AC VOLTAGE APPLIED TO AN INDUCTOR 7.5 – AC VOLTAGE APPLIED TO A CAPACITOR 7.6 – AC VOLTAGE APPLIED TO A SERIES LCR CIRCUIT 7.6.1 – Phasor-diagram solution 7.6.2 – Analytical solution 7.6.3 – Resonance 7.8 – LC OSCILLATIONS 7.9 – TRANSFORMERS. NCERT Solutions for Class 12 Physics Chapter 7 Alternating Current Exercise : Solutions of Questions on Page Number : 266 Q1 :A 100 Ω resistor is connected to a 220 V, 50 Hz ac supply. (a) What is the rms value of current in the circuit? (b) What is the net power consumed over a full cycle? Answer : Resistance of the resistor, R = 100 Ω Supply voltage, V = 220 V Frequency,v= 50 Hz (a) The rms value of current in the circuit is given as: (b) The net power consumed over a full cycle is given as: P = VI = 220 × 2.2 = 484 W Q2 :(a) The peak voltage of an ac supply is 300 V. What is the rms voltage? (b) The rms value of current in an ac circuit is 10 A. What is the peak current? Answer : (a) Peak voltage of the ac supply, V[0] = 300 V Rms voltage is given as: (b) Therms value of current is given as: I = 10 A Now, peak current is given as: Q3 :A 44 mH inductor is connected to 220 V, 50 Hz ac supply. Determine the rms value of the current in the circuit. Answer : Inductance of inductor, L = 44 mH = 44 × 10^-3 H Supply voltage, V = 220 V Frequency, v = 50 Hz Angular frequency, ω= 2πv Inductive reactance, X[L] = ωL Rms value of current is given as: Hence, the rms value of current in the circuit is 15.92 A. Q4 :A 60 μF capacitor is connected to a 110 V, 60 Hz ac supply. Determine the rms value of the current in the circuit. Answer : Capacitance of capacitor, C = 60 μF = 60 × 10 – 6 F Supply voltage, V = 110 V Frequency, v = 60 Hz Angular frequency,ω=2πv Capacitive reactance Rms value of current is given as: Hence, the rms value of current is 2.49 A. Q5 :In Exercises 7.3 and 7.4, what is the net power absorbed by each circuit over a complete cycle. Explain your answer. Answer : In the inductive circuit, Rms value of current,I = 15.92 A Rms value of voltage, V = 220 V Hence, the net power absorbed can be obtained by the relation, P = VI cos Φ Φ = Phase difference between V and I For a pure inductive circuit, the phase difference between alternating voltage and current is 90° i.e., Φ= 90°. Hence, P = 0 i.e., the net power is zero. In the capacitive circuit, Rms value of current, I = 2.49 A Rms value of voltage, V = 110 V Hence, the net power absorbed can ve obtained as: P = VI Cos Φ For a pure capacitive circuit, the phase difference between alternating voltage and current is 90° i.e., Φ= 90°. Hence, P = 0 i.e., the net power is zero. Q6 :Obtain the resonant frequency ωr of a series LCR circuit with L = 2.0 H, C = 32 μF and R = 10 Ω. What is the Q-value of this circuit? Answer : Inductance, L = 2.0 H Capacitance, C = 32 μF = 32 × 10^-6 F Resistance, R = 10 Ω Resonant frequency is given by the relation, Now, Q-value of the circuit is given as: Hence, the Q-Value of this circuit is 25. Q7 :A charged 30 μF capacitor is connected to a 27 mH inductor. What is the angular frequency of free oscillations of the circuit? Answer : Capacitance, C = 30μF = 30×10^-6F Inductance, L = 27 mH = 27 × 10^-3 H Angular frequency is given as: Hence, the angular frequency of free oscillations of the circuit is 1.11 × 10^3 rad/s. Q8 :Suppose the initial charge on the capacitor in Exercise 7.7 is 6 mC. What is the total energy stored in the circuit initially? What is the total energy at later time? Answer : Capacitance of the capacitor, C = 30 μF = 30×10^-6 F Inductance of the inductor, L = 27 mH = 27 × 10^-3 H Charge on the capacitor, Q = 6 mC = 6 × 10^-3 C Total energy stored in the capacitor can be calculated by the relation, Total energy at a later time will remain the same because energy is shared between the capacitor and the inductor. Q9 :A series LCR circuit with R = 20 Ω, L = 1.5 H and C = 35 μF is connected to a variable-frequency 200 V ac supply. When the frequency of the supply equals the natural frequency of the circuit, what is the average power transferred to the circuit in one complete cycle? Answer : At resonance, the frequency of the supply power equals the natural frequency of the given LCR circuit. Resistance, R = 20 Ω Inductance, L = 1.5 H Capacitance, C = 35 μF = 30 × 10^-6 F AC supply voltage to the LCR circuit, V = 200 V Impedance of the circuit is given by the relation, At resonance, .’. Z =R = 20 Ω Current in the circuit can be calculated as: Hence, the average power transferred to the circuit in one complete cycle= VI = 200 × 10 = 2000 W. Q10 :A radio can tune over the frequency range of a portion of MW broadcast band: (800 kHz to 1200 kHz). If its LC circuit has an effective inductance of 200 μH, what must be the range of its variable capacitor? [Hint: For tuning, the natural frequency i.e., the frequency of free oscillations of the LC circuit should be equal to the frequency of the radiowave.] Answer : The range of frequency (v) of a radio is 800 kHz to 1200 kHz. Lower tuning frequency, v[1] = 800 kHz = 800 × 10^3 Hz Upper tuning frequency, v[2] = 1200 kHz = 1200× 10^3 Hz Effective inductance of circuit L = 200 μH = 200 × 10^-6 H Capacitance of variable capacitor for v[1] is given as: ω[1] = Angular frequency for capacitor C[1 Capacitance of variable capacitor for v[2], ω[2] = Angular frequency for capacitor C[2 Hence, the range of the variable capacitor is from 88.04 pF to 198.1 pF. Q11 :Figure 7.21 shows a series LCR circuit connected to a variable frequency 230 V source. L = 5.0 H, C = 80μF, R = 40Ω (a) Determine the source frequency which drives the circuit in resonance. (b) Obtain the impedance of the circuit and the amplitude of current at the resonating frequency. (c) Determine the rms potential drops across the three elements of the circuit. Show that the potential drop across the LC combination is zero at the resonating frequency. Answer : Inductance of the inductor, L = 5.0 H Capacitance of the capacitor, C = 80 μH = 80 × 10^-6 F Resistance of the resistor, R = 40 Ω Potential of the variable voltage source, V = 230 V (a) Resonance angular frequency is given as: Hence, the circuit will come in resonance for a source frequency of 50 rad/s. (b) Impedance of the circuit is given by the relation, At resonance, Amplitude of the current at the resonating frequency is given as: V[0] = Peak voltage Hence, at resonance, the impedance of the circuit is 40 Ω and the amplitude of the current is 8.13 A. (c) Rms potential drop across the inductor, (V[L])rms = I × ωRL I = rms current Potential drop across the capacitor, Potential drop across the resistor, (VR)rms = IR =230/40 × 40 = 230 V Potential drop across the LC combination, At resonance, ∴V[LC]= 0 Hence, it is proved that the potential drop across the LC combination is zero at resonating frequency. Q12 :An LC circuit contains a 20 mH inductor and a 50 μF capacitor with an initial charge of 10 mC. The resistance of the circuit is negligible. Let the instant the circuit is closed be t = 0. (a) What is the total energy stored initially? Is it conserved during LC oscillations? (b) What is the natural frequency of the circuit? (c) At what time is the energy stored (i) completely electrical (i.e., stored in the capacitor)? (ii) completely magnetic (i.e., stored in the inductor)? (d) At what times is the total energy shared equally between the inductor and the capacitor? (e) If a resistor is inserted in the circuit, how much energy is eventually dissipated as heat? Answer : Inductance of the inductor, L = 20 mH = 20 × 10^-3 H Capacitance of the capacitor, C = 50 μF = 50 × 10^-6 F Initial charge on the capacitor, Q = 10 mC = 10 × 10^-3 C (a) Total energy stored initially in the circuit is given as: Hence, the total energy stored in the LC circuit will be conserved because there is no resistor connected in the circuit. (b)Natural frequency of the circuit is given by the relation, Natural angular frequency, Hence, the natural frequency of the circuit is 10^3 rad/s. (c) (i) For time period (T For energy stored is electrical, we can write Q’ = Q. Hence, it can be inferred that the energy stored in the capacitor is completely electrical at time, t = (ii) Magnetic energy is the maximum when electrical energy, Q ‘ is equal to 0. Hence, it can be inferred that the energy stored in the capacitor is completely magnetic at time, (d) Q^1 = Charge on the capacitor when total energy is equally shared between the capacitor and the inductor at time t. When total energy is equally shared between the inductor and capacitor, the energy stored in the capacitor =1/2 (maximum energy). Hence, total energy is equally shared between the inductor and the capacity at time, (e) If a resistor is inserted in the circuit, then total initial energy is dissipated as heat energy in the circuit. The resistance damps out the LC oscillation. Q13 :A coil of inductance 0.50 H and resistance 100 Ω is connected to a 240 V, 50 Hz ac supply. (a) What is the maximum current in the coil? (b) What is the time lag between the voltage maximum and the current maximum? Answer : Inductance of the inductor, L = 0.50 H Resistance of the resistor, R = 100 Ω Potential of the supply voltage, V = 240 V Frequency of the supply, v= 50 Hz (a) Peak voltage is given as: Angular frequency of the supply, ω= 2 πv = 2π × 50 = 100 π rad/s Maximum current in the circuit is given as: (b) Equation for voltage is given as: V = V[0] cos ωt Equation for current is given as: I = I[0] cos (ωt – Φ) Φ = Phase difference between voltage and current At time, t = 0. V = V[0](voltage is maximum) For ωt – Φ = 0 i.e., at time, t = Φ/ω I = I[0] (current is maximum) Hence, the time lag between maximum voltage and maximum current is Φ/ω. Now, phase angle Φ is given by the relation, Hence, the time lag between maximum voltage and maximum current is 3.2 ms. Q14 :Obtain the answers (a) to (b) in Exercise 7.13 if the circuit is connected to a high frequency supply (240 V, 10 kHz). Hence, explain the statement that at very high frequency, an inductor in a circuit nearly amounts to an open circuit. How does an inductor behave in a dc circuit after the steady state? Answer : Inductance of the inductor, L = 0.5 Hz Resistance of the resistor, R = 100 Ω Potential of the supply voltages, V = 240 V Frequency of the supply,v= 10 kHz = 10^4 Hz Angular frequency, ω = 2πv= 2π × 10^4 rad/s (a) Peak voltage, Maximum current, (b) For phase differenceΦ, we have the relation: It can be observed that I[0] is very small in this case. Hence, at high frequencies, the inductor amounts to an open circuit. In a dc circuit, after a steady state is achieved, ω = 0. Hence, inductor L behaves like a pure conducting object. Q15 :A 100 μF capacitor in series with a 40 Ω resistance is connected to a 110 V, 60 Hz supply. (a) What is the maximum current in the circuit? (b) What is the time lag between the current maximum and the voltage maximum? Answer : Capacitance of the capacitor, C = 100 μF = 100 × 10^-6 F Resistance of the resistor, R = 40 Ω Supply voltage, V = 110 V (a) Frequency of oscillations, v= 60 Hz Angular frequency,ω = 2πv = 2π x 60 rad/s For a RC circuit, we have the relation for impedance as: Peak voltage, V[0] = Maximum current is given as: (b) In a capacitor circuit, the voltage lags behind the current by a phase angle of Φ. This angle is given by the relation: Hence, the time lag between maximum current and maximum voltage is 1.55 ms. Q16 :Obtain the answers to (a) and (b) in Exercise 7.15 if the circuit is connected to a 110 V, 12 kHz supply? Hence, explain the statement that a capacitor is a conductor at very high frequencies. Compare this behaviour with that of a capacitor in a dc circuit after the steady state. Answer : Capacitance of the capacitor, C = 100 μF = 100 × 10^-6 F Resistance of the resistor, R = 40 Ω Supply voltage, V = 110 V Frequency of the supply, v = 12 kHz = 12 × 10^3 Hz Angular Frequency,ω = 2 πv= 2 × π × 12 × 10^303 = 24π × 10^3 rad/s Peak voltage, Maximum current, For an RC circuit, the voltage lags behind the current by a phase angle of Φ given as: Hence, Φ tends to become zero at high frequencies. At a high frequency, capacitor C acts as a conductor. In a dc circuit, after the steady state is achieved, ω = 0. Hence, capacitor C amounts to an open circuit. Q17 :Keeping the source frequency equal to the resonating frequency of the series LCR circuit, if the three elements, L, C and R are arranged in parallel, show that the total current in the parallel LCR circuit is minimum at this frequency. Obtain the current rms value in each branch of the circuit for the elements and source specified in Exercise 7.11 for this frequency. Answer : An inductor (L), a capacitor (C), and a resistor (R) is connected in parallel with each other in a circuit where, L = 5.0 H C = 80 μF = 80 × 10^-6 F R = 40 Ω Potential of the voltage source, V = 230 V Impedance (Z) of the given parallel LCR circuit is given as: ω = Angular frequency At resonance, Hence, the magnitude of Z is the maximum at 50 rad/s. As a result, the total current is minimum. Rms current flowing through inductor L is given as: Rms current flowing through capacitor C is given as: Rms current flowing through resistor R is given as: Q18 :A circuit containing a 80 mH inductor and a 60 μF capacitor in series is connected to a 230 V, 50 Hz supply. The resistance of the circuit is negligible. (a) Obtain the current amplitude and rms values. (b) Obtain the rms values of potential drops across each element. (c) What is the average power transferred to the inductor? (d) What is the average power transferred to the capacitor? (e) What is the total average power absorbed by the circuit? [‘Average’ implies ‘averaged over one cycle’.] Answer : Inductance, L = 80 mH = 80 × 10 H Capacitance, C = 60 μF = 60 × 10^-6 F Supply voltage, V = 230 V Frequency, v= 50 Hz Angular frequency,ω = 2πv= 100 π rad/s Peak voltage, V[0]= (a) Maximum current is given as: The negative sign appears because ωL < 1/ωC Amplitude of maximum current, Hence, rms value of current, (b) Potential difference across the inductor, V[L]=I ×ωL = 8.22 × 100 π × 80 × 10^-3 = 206.61 V Potential difference across the capacitor, (c) Average power consumed by the inductor is zero as actual voltage leads the current by π/2. (d) Average power consumed by the capacitor is zero as voltage lags current by π/2. (e) The total power absorbed (averaged over one cycle) is zero. Q19 :Suppose the circuit in Exercise 7.18 has a resistance of 15 Ω. Obtain the average power transferred to each element of the circuit, and the total power absorbed. Answer : Average power transferred to the resistor = 788.44 W Average power transferred to the capacitor = 0 W Total power absorbed by the circuit = 788.44 W Inductance of inductor, L = 80 mH = 80 × 10^-3 H Capacitance of capacitor, C = 60 μF = 60 × 10^-6 F Resistance of resistor, R = 15 Ω Potential of voltage supply, V = 230 V Frequency of signal, v = 50 Hz Angular frequency of signal, ω = 2πv= 2π × (50) = 100π rad/s The elements are connected in series to each other. Hence, impedance of the circuit is given as: Current flowing in the circuit, Average power transferred to resistance is given as: PR= I^2R = (7.25)^2 × 15 = 788.44 W Average power transferred to capacitor, PC = Average power transferred to inductor, PL = 0 Total power absorbed by the circuit: = PR + PC + PL = 788.44 + 0 + 0 = 788.44 W Hence, the total power absorbed by the circuit is 788.44 W. Q20 :A series LCR circuit with L = 0.12 H, C = 480 nF, R = 23 Ω is connected to a 230 V variable frequency supply. (a) What is the source frequency for which current amplitude is maximum. Obtain this maximum value. (b) What is the source frequency for which average power absorbed by the circuit is maximum. Obtain the value of this maximum power. (c) For which frequencies of the source is the power transferred to the circuit half the power at resonant frequency? What is the current amplitude at these frequencies? (d) What is the Q-factor of the given circuit? Answer : Inductance, L = 0.12 H Capacitance, C = 480 nF = 480 × 10^-9 F Resistance, R = 23 Ω Supply voltage, V = 230 V Peak voltage is given as: V[0] = (a) Current flowing in the circuit is given by the relation, I[0] = maximum at resonance At resonance, we have ω[R] = Resonance angular frequency ∴Resonant frequency, And, maximum current (b) Maximum average power absorbed by the circuit is given as: Hence, resonant frequency (V[R]) is 663.48 Hz (c) The power transferred to the circuit is half the power at resonant frequency. Frequencies at which power transferred is half, = Hence, change in frequency, Hence, at 648.22 Hz and 678.74 Hz frequencies, the power transferred is half. At these frequencies, current amplitude can be given as: (d) Q-factor of the given circuit can be obtained using the relation, Hence, the Q-factor of the given circuit is 21.74. Q21 :Obtain the resonant frequency and Q-factor of a series LCR circuit with L = 3.0 H, C = 27 μF, and R = 7.4Ω. It is desired to improve the sharpness of the resonance of the circuit by reducing its ‘full width at half maximum’ by a factor of 2. Suggest a suitable way. Answer : Inductance, L = 3.0 H Capacitance, C = 27 μF = 27 × 10^-6 F Resistance, R = 7.4 Ω At resonance, angular frequency of the source for the given LCR series circuit is given as: Q-factor of the series: To improve the sharpness of the resonance by reducing its ‘full width at half maximum’ by a factor of 2 without changing, we need to reduce R to half i.e., Resistance = Q22 :Answer the following questions: (a) In any ac circuit, is the applied instantaneous voltage equal to the algebraic sum of the instantaneous voltages across the series elements of the circuit? Is the same true for rms voltage? (b) A capacitor is used in the primary circuit of an induction coil. (c) An applied voltage signal consists of a superposition of a dc voltage and an ac voltage of high frequency. The circuit consists of an inductor and a capacitor in series. Show that the dc signal will appear across C and the ac signal across L. (d) A choke coil in series with a lamp is connected to a dc line. The lamp is seen to shine brightly. Insertion of an iron core in the choke causes no change in the lamp’s brightness. Predict the corresponding observations if the connection is to an ac line. (e) Why is choke coil needed in the use of fluorescent tubes with ac mains? Why can we not use an ordinary resistor instead of the choke coil? Answer : (a) Yes; the statement is not true for rms voltage It is true that in any ac circuit, the applied voltage is equal to the average sum of the instantaneous voltages across the series elements of the circuit. However, this is not true for rms voltage because voltages across different elements may not be in phase. (b) High induced voltage is used to charge the capacitor. A capacitor is used in the primary circuit of an induction coil. This is because when the circuit is broken, a high induced voltage is used to charge the capacitor to avoid sparks. (c) The dc signal will appear across capacitor C because for dc signals, the impedance of an inductor (L) is negligible while the impedance of a capacitor (C) is very high (almost infinite). Hence, a dc signal appears across C. For an ac signal of high frequency, the impedance of L is high and that of C is very low. Hence, an ac signal of high frequency appears across L. (d) If an iron core is inserted in the choke coil (which is in series with a lamp connected to the ac line), then the lamp will glow dimly. This is because the choke coil and the iron core increase the impedance of the circuit. (e) A choke coil is needed in the use of fluorescent tubes with ac mains because it reduces the voltage across the tube without wasting much power. An ordinary resistor cannot be used instead of a choke coil for this purpose because it wastes power in the form of heat. Q23 :A power transmission line feeds input power at 2300 V to a stepdown transformer with its primary windings having 4000 turns. What should be the number of turns in the secondary in order to get output power at 230 V? Answer : Input voltage, V[1] = 2300 Number of turns in primary coil, n[1] = 4000 Output voltage, V[2] = 230 V Number of turns in secondary coil = n[2] Voltage is related to the number of turns as: Hence, there are 400 turns in the second winding. Q24 :At a hydroelectric power plant, the water pressure head is at a height of 300 m and the water flow available is 100 m3 s^-1. If the turbine generator efficiency is 60%, estimate the electric power available from the plant (g= 9.8 m s^-2). Answer : Height of water pressure head, h = 300 m Volume of water flow per second, V = 100 m^3/s Efficiency of turbine generator, n = 60% = 0.6 Acceleration due to gravity, g = 9.8 m/s^2 Density of water,ρ= 10^3 kg/m^3 Electric power available from the plant = η x hρgV = 0.6 x 300 x 10^3 x 9.8 x 100 = 176.4 x 10^6 W = 176.4 MW Q25 :A small town with a demand of 800 kW of electric power at 220 V is situated 15 km away from an electric plant generating power at 440 V. The resistance of the two wire line carrying power is 0.5 Ω per km. The town gets power from the line through a 4000-220 V step-down transformer at a sub-station in the town. (a) Estimate the line power loss in the form of heat. (b) How much power must the plant supply, assuming there is negligible power loss due to leakage? (c) Characterise the step up transformer at the plant. Answer : Total electric power required, P = 800 kW = 800 × 10^3 W Supply voltage, V = 220 V Voltage at which electric plant is generating power, V’ = 440 V Distance between the town and power generating station, d = 15 km Resistance of the two wire lines carrying power = 0.5 Ω/km Total resistance of the wires, R = (15 + 15)0.5 = 15 Ω A step-down transformer of rating 4000 – 220 V is used in the sub-station. Input voltage, V[1] = 4000 V Output voltage, V[2] = 220 V Rms current in the wire lines is given as: (a) Line power loss = I[2]R = (200)^2 × 15 = 600 × 10^3 W = 600 kW (b) Assuming that the power loss is negligible due to the leakage of the current: Total power supplied by the plant = 800 kW + 600 kW = 1400 kW (c) Voltage drop in the power line = IR = 200 × 15 = 3000 V Hence, total voltage transmitted from the plant = 3000 + 4000 = 7000 V Also, the power generated is 440 V. Hence, the rating of the step-up transformer situated at the power plant is 440 V – 7000 V. Q26 :Do the same exercise as above with the replacement of the earlier transformer by a 40,000-220 V step-down transformer (Neglect, as before, leakage losses though this may not be a good assumption any longer because of the very high voltage transmission involved). Hence, explain why high voltage transmission is preferred? Answer : The rating of a step-down transformer is 40000 V – 220 V. Input voltage, V[1] = 40000 V Output voltage, V[2] = 220 V Total electric power required, P = 800 kW = 800 × 103 W Source potential, V = 220 V Voltage at which the electric plant generates power, V’ = 440 V Distance between the town and power generating station, d = 15 km Resistance of the two wire lines carrying power = 0.5 Ω/km Total resistance of the wire lines, R = (15 + 15)0.5 = 15 Ω P = V[1]I Rms current in the wire line is given as: (a) Line power loss = I^2R = (20)^2 × 15 = 6 kW (b) Assuming that the power loss is negligible due to the leakage of current. Hence, power supplied by the plant = 800 kW + 6kW = 806 kW (c) Voltage drop in the power line = IR = 20 × 15 = 300 V Hence, voltage that is transmitted by the power plant = 300 + 40000 = 40300 V The power is being generated in the plant at 440 V. Hence, the rating of the step-up transformer needed at the plant is 440 V – 40300 V. Hence, power loss during transmission = In the previous exercise, the power loss due to the same reason is
{"url":"https://www.toppersbulletin.com/ncert-solutions-for-class-12-chapter-7-alternating-current/","timestamp":"2024-11-10T16:05:55Z","content_type":"text/html","content_length":"228347","record_id":"<urn:uuid:bf44d92b-a0b3-4eca-b076-7ed98c956734>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00046.warc.gz"}
< Back 1218. Longest Arithmetic Subsequence of Given Difference We're asked to find the longest subsequence of numbers that all have a difference of difference from left to right. We solve this using dynamic programming, but not with a memoization table like usual. Instead, we maintain dictionary of numbers seen, because, for a subsequence, we're going to have to look backwards an arbitrary number of spaces. The dictionary contains, for a given number, num, the length of the subsequence prior to reaching that number wherein all numbers have a difference of difference. So for example, if we're looking at 2 and the difference is 2, we the compliment would be 4. If we've seen 4 before, it will return from a dictionary lookup, and then we append 1 to the subsequence and store dp[2]. Otherwise, the subsequence for dp[2] will be 1. Across this process, we maximize the answer for the highest length subsequence we've seen so far. The solution is as follows: from collections import defaultdict class Solution: def longestSubsequence(self, arr: List[int], difference: int) -> int: dp = defaultdict(int) ans = 1 for num in arr: dp[num] = dp[num - difference] + 1 ans = max(ans, dp[num]) return ans _ Time Complexity: O(n) - Where n is the length of arr, we iterate through the number list once. _ Space Complexity: O(n) - We maintain a memoization dictionary of size n.
{"url":"https://one2bla.me/dsa/dynamic-programming/1218.html","timestamp":"2024-11-04T05:13:01Z","content_type":"text/html","content_length":"2577","record_id":"<urn:uuid:bcebcfd2-bfcf-498c-840f-ff4719054289>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00636.warc.gz"}
414 usteaspoon to imperialpint - How much is 414 US customary teaspoons in imperial pints? [CONVERT] ✔ 414 US customary teaspoons in imperial pints Conversion formula How to convert 414 US customary teaspoons to imperial pints? We know (by definition) that: $1&InvisibleTimes;usteaspoon ≈ 0.00867368942321863&InvisibleTimes;imperialpint$ We can set up a proportion to solve for the number of imperial pints. $1 &InvisibleTimes; usteaspoon 414 &InvisibleTimes; usteaspoon ≈ 0.00867368942321863 &InvisibleTimes; imperialpint x &InvisibleTimes; imperialpint$ Now, we cross multiply to solve for our unknown $x$: $x &InvisibleTimes; imperialpint ≈ 414 &InvisibleTimes; usteaspoon 1 &InvisibleTimes; usteaspoon * 0.00867368942321863 &InvisibleTimes; imperialpint → x &InvisibleTimes; imperialpint ≈ 3.590907421212513 &InvisibleTimes; imperialpint$ Conclusion: $414 &InvisibleTimes; usteaspoon ≈ 3.590907421212513 &InvisibleTimes; imperialpint$ Conversion in the opposite direction The inverse of the conversion factor is that 1 imperial pint is equal to 0.278481142146053 times 414 US customary teaspoons. It can also be expressed as: 414 US customary teaspoons is equal to $1 0.278481142146053$ imperial pints. An approximate numerical result would be: four hundred and fourteen US customary teaspoons is about three point five eight imperial pints, or alternatively, a imperial pint is about zero point two eight times four hundred and fourteen US customary teaspoons. [1] The precision is 15 significant digits (fourteen digits to the right of the decimal point). Results may contain small errors due to the use of floating point arithmetic.
{"url":"https://converter.ninja/volume/us-customary-teaspoons-to-imperial-pints/414-usteaspoon-to-imperialpint/","timestamp":"2024-11-08T23:25:49Z","content_type":"text/html","content_length":"21034","record_id":"<urn:uuid:15ff50ed-7c54-4dbe-941b-e6541e9acd3b>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00842.warc.gz"}
MATH and CS Club Math/CS Club talks/activities are held weekly during the academic year and open to all. • Talks and activities • The focus is on mathematics, mathematics education, and computer science topics. • Events are open to all. You do NOT have to be a math/cs major/minor to attend. • Snacks are provided. • Tuesdays 12-1PM • Department of Mathematics and Computer Science (AC/2C07) Conference Room or via Zoom • are accessible to a general audience. • assume little or no previous knowledge. • cover a wide selection of topics. • encourage audience participation. • are not boring or stuffy. Recent Math/CS talks include • The mathematics of billiards. • Secret Sharing. • Proofs Without Words. • A history of infinity. • Graduate School in the Mathematical Sciences. For more information and/or to be added to the Math/CS Club e-mail list, please contact Dr. Lidia Gonzalez at Lgonzalez@york.cuny.edu
{"url":"https://www.york.cuny.edu/mathematics-computer-science/resources/math-club","timestamp":"2024-11-04T01:35:56Z","content_type":"text/html","content_length":"216785","record_id":"<urn:uuid:69140d14-9217-49a0-b832-0d476805a0e2>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00525.warc.gz"}
A quick ggplot2 hack (multiple dataframes) | R-bloggersA quick ggplot2 hack (multiple dataframes) A quick ggplot2 hack (multiple dataframes) [This article was first published on Timothée Poisot » R , and kindly contributed to ]. (You can report issue about the content on this page ) Want to share your content on R-bloggers? if you have a blog, or if you don't. I’m starting to get familiar with ggplot2, and I really like it. I just found a very quick way to use several dataframes within the same plot, provided that the dataframes share columns names. One obvious application is the production of graphs with the mean (obtained by aggregate) superposed to the original raw data. You can do this in ggplot2 simply with something along the lines of dat <- read. base <- ggplot + geom_line By changing the data argument, you will recycle the aes settings. This is really handy. The result is Now, of course, is someone knows a simpler way to do it, I’d like to know!
{"url":"https://www.r-bloggers.com/2010/09/a-quick-ggplot2-hack-multiple-dataframes/","timestamp":"2024-11-09T22:35:15Z","content_type":"text/html","content_length":"85198","record_id":"<urn:uuid:abd3367d-6207-4b6b-ab1d-671b37d16ad9>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00881.warc.gz"}
Solving systems of equations by Gaussian Elimination method Gaussian elimination method On this post you will learn what the Gaussian elimination method is and how to solve a system of equations by the Gaussian elimination method. What is the Gaussian elimination method? The Gaussian elimination method, also called row reduction method, is an algorithm used to solve a system of linear equations with a matrix. The Gaussian elimination method consists of expressing a linear system in matrix form and applying elementary row operations to the matrix in order to find the value of the unknowns. However, to understand how Gaussian elimination works, we must first know how to express a system of linear equations in matrix form and what row operations can be computed. So, we will explain these two things first, and then we will see how to apply Gaussian elimination method. Augmented matrix of a system of linear equations In linear algebra, a system of equations can be expressed in matrix form: the coefficients of the unknown x correspond to the first column of the matrix, the coefficients of the unknown y to the second column, the coefficients of the unknown z to the third column, and the constants to the fourth column. For example: The matrix that represents the system of equations is called augmented matrix. The objective of the Gaussian method is to convert the initial system of equations into a echelon system, that is, a system in which each equation has one less unknown than the previous one: In other words, we have to transform the augmented matrix into a matrix in row echelon form: To do this, we have to apply elementary operations on the rows of the matrix. So, let’s see what operations can be done in the Gaussian elimination method. Elementary row operations To convert the augmented matrix to a matrix in row echelon form, you can perform any of the following elementary operations: • Interchange two rows of the matrix. For example, we can interchange rows 2 and 3 of a matrix: • Multiply or divide all the terms in a row by a nonzero number. For example, we can multiply the first row by 4, and divide the third row by 2: • Add to a row another row multiplied by a scalar. For example, in the following matrix we add row 3 multiplied by 1 to row 2: How to do Gaussian elimination Now we are going to see with a solved example how to solve a system of linear equations using the Gaussian elimination method: First of all, we find the augmented matrix of the system of equations: As we will see later, it is better if the first number of the first row is 1. Therefore, we are going to change the order of rows 1 and 2: The goal of the Gaussian elimination method is to make the numbers below the main diagonal 0. So, we have to transform the red numbers to 0: To eliminate these numbers we must do the appropriate elementary rows operations. For example, the number -1, which is the first element in the second row, is the negative of 1, the first element in the first row. So if we add the first row to the second row, the -1 will be By adding the rows we obtain the following matrix: In this way we have transformed the -1 into a 0. Now we are going to zero the number 2 out. To do so, we add the first row multiplied by -2 to the third row: And the following matrix is obtained: Now we have to convert the -8 to 0. To do this, we multiply the third row by 3 and add the second row multiplied by 8: Therefore, we obtain the following matrix: As you can see, with all these transformations we have made all the numbers below the main diagonal 0. That is, the matrix is in row echelon form (or triangular form). Now we must transform the matrix into a system of equations with unknowns. For this, remember that the first column of the matrix corresponds to the unknown x, the second column to the unknown y, the third column to the unknown z, and the last column are the constants of the equations: And finally, to solve the system we have to find the values of the unknowns from the bottom up. Since the last equation only has one unknown, and therefore, we can easily find its value: Now we back-substitute into the second row to evaluate the unknown y: And we do the same with the first equation of the system: we back-substitute the values of the other unknowns and solve for x: Thus, the solution of the system of equations is: We have solved the system of equations by transforming the augmented matrix into a matrix in echelon form. However, there are authors who prefer to continue applying row operations until the matrix is in reduced row echelon form (zeros above and below the main diagonal), in this case, the process is called Gauss-Jordan elimination. 2 thoughts on “Gaussian elimination method” 👍👍 UT really helped me Leave a Comment
{"url":"https://www.algebrapracticeproblems.com/gaussian-elimination-method/","timestamp":"2024-11-06T17:09:24Z","content_type":"text/html","content_length":"198469","record_id":"<urn:uuid:84261fb2-1de6-4d25-b475-86838182d7a3>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00780.warc.gz"}
Obtained 2530 - math word problem (2530) Obtained 2530 The weight of the veal obtained is 64% of the total weight of live calves. What was the weight of the live calves, from which 16,830 kg of veal were obtained? Correct answer: Did you find an error or inaccuracy? Feel free to write us . Thank you! Tips for related online calculators percentage calculator will help you quickly calculate various typical tasks with percentages. Do you have a linear equation or system of equations and are looking for its ? Or do you have a quadratic equation You need to know the following knowledge to solve this word math problem: Units of physical quantities: Grade of the word problem: Related math problems and questions:
{"url":"https://www.hackmath.net/en/math-problem/2530","timestamp":"2024-11-12T07:09:04Z","content_type":"text/html","content_length":"53253","record_id":"<urn:uuid:872fdd78-2f68-4693-ab39-d63cbb3d2c22>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00771.warc.gz"}
New Terminology: Scoring v. Grading After studying Assessment FOR Learning pretty intensely for the past few school years, I am now beginning to think that we might do ourselves a favor if we would change some of our terminology. Specifically, I think it's time to stop using the words "grading" or "grade" as often as we do and replace them - at times - with "scoring" or "score". You don't have to go very far down the AFL road to realize that traditional grading practices often get in the way of our attempts to use AFL strategies. Traditional grade books and grading strategies typically average together all of a student's grades for the grading period to determine a final grade. Therefore, practice assignments such as homework and classwork will have an impact on the student's grade. Since the concept of assigning lots of practice so that students and teachers can receive the feedback necessary to increase learning is central to AFL (see Heart of AFL), averaging practice grades into a student's overall grade becomes obviously problematic. What if the additional practice helps a student learn but also lowers the student's grade? The natural reaction to this problem is for teachers to feel that they should not grade practice assignments. For more on this topic see: So the philosophy of AFL naturally leads to teachers feeling as though they should not grade practice assignments. This is where Newton's third law of motion comes into play: "To every action there is always an equal and opposite reaction." When students realize that some things are graded and some things are not, they react by asking before most assignments, "Is this going to be graded?" Implied in their question is the idea that if the answer is "Yes" then they will work harder than if the answer is "No". As a result, teachers are reluctant to not grade assignments - even if they agree with the philosophy of practice assignments not lowering a grade - for fear that students won't work hard and, therefore, won't learn as much. So we're left with a quandary. We don't want to let practice impact the student's final grade but we want students to work on each assignment as though their final grade depended on it. Part of this quandary is of our own making. As explored previously in What we WANT students to do v. What we TRAIN students to do, we wish that students worked for the love of learning but we then use points and grades as a Sea World trainer uses a fish. It's difficult to argue that students should not be motivated by grades when we, in turn, use grades as motivators. We have to find a new way. Perhaps our new AFL philosophy requires some new terminology. What would happen if we started "scoring" all assignments and "grading" only a few? The term "grading" implies the following: 1. The teacher will assess how well the student did on the assignment. 2. The student will receive feedback on well they have mastered the content. 3. The grade will go into the grade book to be used to help determine the student's final grade. In most classrooms, "grading" is the only tool the teacher has - or uses - for providing feedback. There is an old adage that describes this problem: "When the only tool you have is a hammer, every problem looks like a nail." "Scoring" could be the new tool needed to help us out of our quandary. The difference between scoring and grading is in implication #3 from the list above. Both scoring and grading provide the teacher with feedback and both provide the student with feedback. However, a score on an assignment may or may not be used by the teacher to determine the final grade. Here's how I envision scoring working in a typical AFL classroom: 1. The teacher assigns practice everyday. 2. The teacher provides feedback on all practice. While this feedback is often provided very informally, the majority of feedback given formally is in the form of a score. 3. The score looks very similar to a grade. 4. The score goes into the grade book. 5. The students understand up front that the teacher will be looking over all of a student's scores - and grades - to determine what the appropriate final grade is for the student. While graded assignments are the few that will definitely count toward the final grade, they will be much fewer in number than the scored assignments. Rather than being tied down to averaging all graded assignments, the teacher who uses scoring will now be able to study the evidence and arrive at the most appropriate final grade. The point here is that every score counts toward helping the teacher determine a grade. When students ask, "Is this graded," what they really means is, "Does this count?" With scoring, the answer to that question is: "Yes, it counts. Everything counts. As the teacher, I will be analyzing ALL the evidence - just like a good detective - before arriving at a conclusion (your grade). How it counts could be different for each of you, depending on how you perform, but ALL assignments count." Scoring satisfies our desire to be AFL-ish: • teachers receive feedback • students receive feedback • practice doesn't have to lower - or overly inflate - the final grade At the same time, scoring doesn't entice students to fall into the trap of only working "when it counts." What do you think? You need to be a member of The Assessment Network to add comments! Scott - Ok, I posted the article as a blog. Thanks for the encouragement. Great reply, Justin. That would make a wonderful blog post all by itself. I think our members could benefit from reading your article. Thanks for sharing this, Scott. You and I are definitely on the same page. I wrote an article for Online Classroom journal a few months back that reflects almost exactly what you explore in this post. I call marks what you are calling scoring because I think that it is easier to distinguish between marks (formative) and grades (summative). Click here to see the article. I have been using this system for about 5 years now and I am very satisfied with it. I have asked students to evaluate this component of my classes too and in general they appreciate it very much. Every class there is at least one student who dislikes it, always because everybody can see everybody else's marks on the Google spreadsheet I publish. I would like to find a way to get around this, have a private link for each student, but that seems like too much work. I believe that scoring/grading should reflect learning, mastery of lab skills and expanded thought processes..........it is important that the whole grading process in one's classroom be designed to indicate how much a student can do to improve and grow: teachers should not use anytype of grading procedures that will discourage a student but will serve as benchmarks and methods of positive reinforcements.....thus teachers must provide students with different methods of assessment that will meet needs of the diverse student populations..... we need to be creative I have tried this idea a few times this year, but I know I need to utilize this idea a lot more. It has helped me to see the "score" on practice and compare it with a grade on the quiz or test. I can look at the positive changes that occur when the students engage in more practice which usually leads to better understanding and then a better grade. Unfortunately, I need to figure out how to get more of my students, some of the most unmotivated, to complete the practice!!! I understand the concept of scoring vs. grading and allowing for growth, which should be reflected in the grade. I also appreciate the school board allowing us flexibility in assigning grades. The problem I see is that a lot of subjectivity can enter into this process, and while we as professionals are qualified to make these subjective/professional judgments, I also see that this will open us up to increased scrutiny and criticisms from parents if they do not agree with their child's final grade and we cannot quantify it in the way that parents are used to. If you have clear and consistent expectations of your students to complete all classwork, they don't ask if it's a grade because they know everything has importance. Don't forget that in PowerSchool you can make things not count into the final grade - that way they aren't affecting the average. But you could always tell kids that you'll be using everything in PowerSchool to guide what you end up putting as their final grade. I tell my students that I'm checking for understanding. It seems to get most students involved. I put a check, check +, or - and sometimes a point value, no more than 5. It doesn't go into my gradebook, but it has worked. Debbie, Hilda and Rachel, Wondering how this will be reflected in Powershcool? We like feedback the scoring can give. In favor of putting in Powerschool but worth few points. • 1 of 2 Next This reply was deleted.
{"url":"https://salemafl.ning.com/profiles/blogs/scoring","timestamp":"2024-11-09T14:22:34Z","content_type":"text/html","content_length":"115974","record_id":"<urn:uuid:c2af4f3d-75ea-4698-b840-ec41286c17f0>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00119.warc.gz"}
Computer Graphics: Fractal Rendering Usually, a fractal is described as a visual object with some self-similarity. It is difficult to give a concrete definition because there are many different types of fractals and people may understand the term fractal differently depending on the context. However, we can usually emphasize the self-similarity property of fractals. There is some small-scale part of the fractal that looks like some large-scale part. This could be exactly the same-looking part or could just be similar enough that we see the resemblance. Although that resemblance might be difficult to explicitly point out, we somehow intuitively feel it if something is a fractal. Fractals can be categorized by different methods that generate them: • Iterated function systems – These are fractals generated via some geometric rewrite rules. For example, the Koch snowflake is constructed by drawing new triangles in the middle of the outer edges of existing triangles, thus extending the shape. Other commonly known examples are the Sierpiński triangle and the Menger sponge. • Lindenmayer-systems – As you might have guessed, the L-systems we learned in the Computer Graphics course are also fractals. You can think of a simple tree generated by an L-system. The whole process of iterating the systems inherently creates a fractal structure as the same rules are applied at each iteration to grow the tree. Ie, branches of a tree have a similar structure as the tree itself. • Strange attractors – Systems of differential equations that evolve in time can create fractal-like shapes. • Escape-time fractals – These are systems that recursively iterate over the input and try to find if the result stays within some bound (does not escape in some certain number of iterations). The famous Mandelbrot fractal is an escape-time fractal. These will be the fractals whose construction and rendering we will focus on in this topic. Popular software for fractal rendering includes Mandelbulber, Mandelbulb 3D, Fragmentarium. The best place for everything fractal is Fractalforums (historical site). If you really want to get a retro feel, then you can go to Sprott's Fractal Gallery, where a new fractal is automatically generated every day! • Fractal – Some kind of object, which exhibits self-similarity at different scales. • Escape-time fractal – Fractals that arise by iterating a function on some input, usually a complex number, and considering it escaped if its magnitude exceeds some threshold after a fixed iteration count. • Orbit – Sequence of numbers arising from iterating an escape-time fractal process on a given input (the path this number takes). • Complex numbers – Number system extending real numbers by adding an imaginary part. • Imaginary number – The number $i$ such that $i^2 = -1$. • Dual numbers –Number system extending real numbers by adding an abstract part. • Abstract number – The number $\epsilon$ such that $\epsilon^2 = 0$ and $\epsilon \neq 0$. • Distance field – Some mapping $\mathbb{R}^n \Rightarrow \mathbb{R}$, where for any given input the output is the distance to the nearest surface (where the field value is $0$). • Partial derivative – Derivative showing the change of a multivariate function with respect to a change in one coordinate/axis of the input. • Gradient – Vector of all the partial derivatives of a multivariate function $\mathbb{R}^n \Rightarrow \mathbb{R}$. Shows the direction of the greatest increase of the function at any given point. • Jacobian – Matrix of all the partial derivatives of a multivariate function $\mathbb{R}^n \Rightarrow \mathbb{R}^m$. Has the gradients for each output coordinate/axis in its rows. • Ray marching – Technique to render a distance field by stepping (marching) through it with rays, each the ray length being the value in the distance field at the ray's origin. • Level surface – Surface that is defined in some specific constant $c$ level, $f(x) = c$. Often $c = 0$. Julia and Mandelbrot You probably know quite well how a Mandelbrot fractal sorta looks like. But the Mandelbrot fractal is actually made up of different seeds for Julia sets. It is also called a map or a dictionary of all Julia sets. Julia sets are also a bit simpler than the Mandelbrot, so we will start with these. Julia Fractal Julia sets are created from complex numbers. These numbers have 2 dimensions and form a 2D plane, on which we can render the fractal. For each rendered pixel, its coordinate on the complex plane will be the input to a recursive function. As the Julia fractal is an escape-time fractal, we are interested in seeing if the complex number will grow to infinity in that recursive process or not. That recursive function for Julia sets is: $z_{n+1} = z_n^2 + c$ Here $z_n$ is the result of the process at step $n$. We will start at the pixel coordinate we are rendering. The constant $c$ will be another complex number, which defines a Julia set. Different values for $c$ give different Julia sets. For example, if we are rendering a pixel at coordinates $(5, 10)$, our initial $z$ will be $z_0 = 5 + 10i$. We also have to define some $c$ and then we iterate the process. If at some point determine that the process grows to infinity, we can color the pixel white. If we determine that it will never grow to infinity, we can color the pixel black and that coordinate will be in the Julia set. Sometimes will be unclear if the point actually tends towards infinity or not. Commonly a condition is used which checks if the length of the complex number is greater than 2, then we consider it tending to infinity. This produces some false positives into the Julia set, but that is usually okay. When creating fractals we are often interested in the visual look, not that much mathematical Mandelbrot Fractal If you understand how the Julia fractals are created, then the Mandelbrot fractal is also easy to grasp. The process of the Mandelbrot fractal is the same, but instead of some configurable $c$ we use the pixel coordinate. The pixel coordinate is also the initial $z$ value ($z_0$), so the formula for the Mandelbrot is: $z_{n+1} = z_n^2 + z_0$ The escape-time condition is usually the same. If the length of $z$ reaches over 2 within some number of iterations, then we exclude the point from the set, else we include it. On the example to the right, you can change the number of iterations and see the shape become more accurate as the iteration count gets larger. You can also see the different Julia sets and set the complex number $c = a + bi$ to the specific Julia set you wish to check out. Radial Space Folding One way to think about how Julia fractals are formed is to see what complex number multiplication does to the space. For that, we should check out our complex numbers in their polar form. A complex number $z = a + bi$ would be in polar form $z = r \cdot (\cos(\theta) + \sin(\theta)i)$. When multiplying two complex numbers in polar form, the result has the angles added and radii multiplied together. Generally speaking: $z_1 \cdot z_2 = r_1 \cdot r_2 \cdot (\cos(\theta_1 + \theta_2) + \sin(\theta_1 + \theta_2)i)$ For Julia sets we are multiplying the complex number with itself, ie, squaring it. Thus: $z^2 = r^2 \cdot (\cos(2 \cdot \theta) + \sin(2 \cdot \theta)i)$ Generally speaking, we can use any power instead of 2 there. $z^p = r^p \cdot (\cos(p \cdot \theta) + \sin(p \cdot \theta)i)$ Now we know that the length of the location vector of the complex number (the radius) is raised to the power. If our power is over 1 and the length is also over 1, the length will increase. This means that regardless of the angle when we square complex numbers, the numbers with lengths over 1 will get pushed away from the origin. Similarly, the numbers with lengths less than 1 will get pulled into the origin. We also now know that the angle will be multiplied by the same power. This means that the complex numbers will sorta rotate on the complex plane. It is not a rotation, though, as numbers with small angles stay more similar than ones with large angles. For example, the square of a number with an angle of 20° will have an angle of 40°, but a square of a number with an angle of 120° will have an angle of 240°. This is illustrated in the image below: Here the complex numbers are chosen to be of unit length so they would stay on the unit circle. In the Julia fractal, we have a process that keeps raising the numbers to the power 2 many times. We can think about this as a process that in some sense generates more space or, more intuitively perhaps, radially folds the space unto itself. Notice that the same thing happens in the opposite direction with negative angles too. If this would be all the Julia fractal is doing, we would have a pretty boring fractal – just a unit circle (or a disc, depending on the implementation). The points outside the unit circle would escape quite fast, the points inside would be pulled towards the origin. Luckily, the Julia process also includes an additive term. The constant complex number $c$ added every iteration shifts the space by some amount. This means that some numbers outside the unit circle may end up inside in the next iteration and vice versa. This is what creates the Julia fractal an interesting fractaly look. We can visualize this space folding in time as is done in the example on right. We are translating the fractal only in the shift step ($+c$). The folding step ($z^2$) kinda centers the fractal back. Try to focus on one spot on the fractal and see how it moves. Note that while in the visualization it seems that space is generated out of nothing, in the actual Julia process the new values come from complex numbers that have gone over 360°. The red areas in this and the previous example depict the complex numbers for which the process resulted in a complex number with length around 0.5. There are two other explanations for this: one by Karl Sims and another by Steven Wittens. You have probably seen some really awesomely colored fractals before. It is not immediately obvious how to actually color Julia or Mandelbrot sets. The examples above had binary values – the pixel coordinate either belonged to the set or not. We did draw some stripes to keep track of part of the structure inside the set, but that was about all we did. Furthermore, people often color the outside of the fractal rather than the inside. To use more than just 2 colors for that, we would need more values to map these colors against. One value we do have for the outside points is the escape time of a point. Ie, the number of iterations a point went through before escaping the set. Remember that escaping the set meant that its length became larger than some fixed value. We will call this value the bailout value (B). This is one of the easiest colorings to do, just take the escape time value (iteration count) and map it against some colors. We could normalize it via the maximum iteration count and use it to mix between different colors. This solution, as you can guess, will create visible banding. Because our escape time values will be integers (1, 2, 3, ...) representing on which iteration our point escaped, the color mix we do, will also have these discrete bands. This could be the desired look, of course, but if we could make a smooth coloring instead, then that would perhaps make the fractal pop out more and thus be aesthetically nicer. It is possible to derive a smooth iteration count and thus make a smooth coloring. To do that, we will consider a single direction from the surface of the fractal. The value B will be our bailout value (after what length do we consider the point to be escaped). We also simplify the case to 2 bands. The points in Band 1 need one more iteration to escape. The points in Band 2 will need 2 iterations. We have two example points here too: $q_i$ from Band 1 and $p_i$ from Band 2. The index $i$ shows that these points have already passed through $i$ iterations themselves. They are calculated from some original pixel coordinate points $q_0$ and $p_0$. The chain of $p_0, p_1, p_2, ..., p_i, ..., p_n$ is called an orbit. In this example, when we move forward along an orbit, ie, iterate the Julia function $z_{i+1} = z_i^2 + c$ on a point, the point always moves directly to the right. We will also make a simplification to the Julia function by omitting $+c$. For larger numbers, the $+c$ term becomes unimportant compared to the $z^2$ term. So, when we calculate the next step in the orbits, we just square the lengths. Our point $p$ has now escaped (crossed the length threshold $B$) at escape time $i+1$. So, wherever the point $p$ came from, we will color it the color corresponding to index $i+1$. The second point $q$ has not escaped yet, but obviously will in the next iteration, thus its escape time will be $i+2$ and we will color its origin some other color. If it is the case that $i=0$ and thus $p_i$ and $q_i$ are the origin points in this example, then we will color them and other pixels in Band 1 and Band 2 different colors. Now we would like to make the coloring smooth. Basically, we would need one value from $[0, 1]$, which tells us how far along in some band our point is. Notice that because we are only squaring our points, their proportional place in every band will be exactly the same. The point $p_i$ was in about the center of Band 1 and will be in the center in the band from $B$ to $B^2$. The point $q_i$ was roughly $2/3$ through Band 2 and $q_{i+1}$ is at the same proportion in Band 1. So, if we can find that smoothly changing value from $[0, 1]$ for one band, we can directly apply it on the origin point and the result should be a smooth coloring over origin points. Let us take the band $[B, B^2]$. This is the band where all the escapees will end up first in when escaping. They have to cross $B$ to escape and they will not be crossing $B^2$ in a single iteration. This is because one iteration only squares the number $z^2$. The points near $B$ should get the values from $0$ (because they are at the beginning of a band) and the points near $B^2$ should be getting values near $1$ (as they are reaching the end of the band). Thus we want to map $[B, B^2]$ to $[0, 1]$. If we just take the proportion of the distance traveled on the escape band, then we would surprisingly not get a smooth coloring. Because our process always squares the numbers. If we start with a uniform pixel grid, then each iteration the numbers with higher initial values will get more spread apart than lower numbers. Our process is not linear, so just a linear range conversion in the escape band based on the distance there will not work out. For example, we can consider two pairs of points in Band 2. Both points in both pairs are close to each other. One pair, $p_i$ and $q_i$, is at the end of the band and another pair, $r_i$ and $s_i$, is at the beginning of the band. Next iteration all the points will escape, but points $p_{i+1}$ and $q_{i+1}$ will be more apart than the points $r_{i+1}$ and $s_{i+1}$. So, instead of splitting the escape band space equally, we rather want to find how much the exponent of $B$ needs to increase from $1$ to be at that distance. In the picture above you can see that the distance in terms of the exponent increase of $B$ will now be about $1.5$ for both pairs of points. To find the exponent increase, we can first find the exponent itself via: $\log_B(|p_{i+1}|) = \log(|p_{i+1}|) / \log(B)$ Then we could just subtract $1$ from that to get the $B$ exponent offset. This we subtract from the discrete integer iteration count to get a smooth continuous iteration count. We can then use it to mix between some colors or sample from a continuous color palette. Unfortunately, we have again jumped the gun a bit. By subtracting $1$ we are doing a linear mapping. Generally that transformation would be $(\log_B(|p_{i+1}|) - 1) / (d - 1)$, where $d$ is the exponent we use to iterate our process (in our case with Julia and Mandelbrot fractals it's just $2$). What we actually found with $\log_B(|p_{i+1}|) = e$ was some exponent $e$ such that $B^e = |p_ {i+1}|$. Now, this $B^e = p_{i+1} = {p^2}^{i+1}$, we have raised our point $p$ to the power of $2$ some $i+1$ times for it to escape. If we want to find out what is the offset of the iteration count for $B^e = p_{i+1}$, we need to find the fraction of the iteration we did to get from $B^1$ to $B^e$. Our Julia process was $p^2$ and this also applies to $B$. So, what we actually have is $B = {B^2} ^0$ and then some $B^e = {B^2}^\hat{e}$. We need to find that $\hat{e}$. For that, we just take: $\log_2(B^e) = \log_2(log_B(|p_{i+1}|))$. That will be our final, correct, and commonly used smooth iteration count fractional offset. In code it would be like: float len = length(z); if (len > bailout) { smoothI = float(i) - log2(log(len) / log(bailout)); break; // Exit the Julia process This we can then use to smoothly change the colors of the outer area of Julia or Mandelbrot fractals. We can also color the interior with a single color. To get some artistic coloring going, we can use a smooth palette as well. Here are a few, which are used in the example to the right. Notice how the smooth coloring "$-1$" produces also a smooth coloring compared to the "$\log_2$" version, but the latter is more uniformly spread out and matches the discrete bands better. Distance Fields So far we have rendered 2D escape-time fractals. Our viewport pixels with their coordinates have served as inputs to the algorithm and we also saw an algorithm for coloring the output. To render fractals in 3D we need to cover the topics of distance fields and ray marching. Distance fields are a relatively simple concept. We have an object in 3D space that has some surface. This could be your everyday mesh or mathematically defined. Then we can imagine the whole 3D space as an infinite field of real numbers. Every point will have a real number attached to it. You could also think about it as a function from $\mathbb{R}^3 \Rightarrow \mathbb{R}$. Or you could think of the space as consisting of voxels. Whatever way you think about it, the bottom line is that at every 3D coordinate there is a specific real number. That number will be the distance to the nearest surface of the object. The interior of the object could also have distances to the object's surface but with a negative sign. In that case we have a signed distance field (SDF). There are many applications for the distance field. For example: anti-aliasing, improved shadow mapping, obstacle avoidance. Given some mesh or a mesh-based scene, the difficult part is actually calculating the distance field. In the case of static scenes, this could be done in advance. In the example on the right you can see such a pre-calculated distance field of an object. It is a 3D field and the example shows it one slice at a time. The interior of the mesh is colored blue. You can try to guess how that object would look like in 3D. We will see it later on in an actual render. There is not too much to say about distance fields by themselves in the context of fractals. We will be using distance fields, or more precisely, distance estimator functions in 3D fractal rendering with ray marching. Ray Marching To render 3D escape time fractals we will use a rendering technique called ray marching. Note that we do not have a polygonal surface here, so the regular standard graphics pipeline rendering will not work. We also cannot do the regular ray tracing solution as our surfaces are complicated – they are not standard shapes like spheres and triangles that have known formulae for calculating intersection points with a ray. Instead, we have an arbitrary surface and a distance field that tells us the closest distance to it. The ray marching technique does consist of shooting a ray into the scene, so the initial setup is the same as with ray tracing (we set up an origin and a direction for each camera ray). However, each ray then takes a number of steps while traversing the distance field. The size of the step will be the value read from the distance field. The ray marches along the field. We will either march the ray some maximum number of times or until we reach the surface. Reaching the surface usually means the next step would be smaller than some specific length. In the picture above the space is divided into quite big voxels, so there will be some noticeable errors and we could check for exactly $0$. In usual cases, the smallest possible ray march distance could be a configurable small value. The bigger we make it, the more surface detail we will lose, but the faster the rendering. When we hit a surface, we can color the pixel some single color. You can imagine that this will result in a pretty 2D-ish image. We will look at how to get a surface normal a bit later. One easy thing we can do to make the result look more 3D is to add ambient occlusion (AO). This is very easy to do with this rendering technique: We just add some black depending on how many steps the ray made. The more steps it makes, the more likely it entered some crevasse or hits the surface at a grazing angle. When we color such areas darker, we get a more 3D look. We can ray march a pre-existing distance field, like the one from the Distance Field material before, or we might have a mathematical distance estimation (DE) function. One easy way to render a sphere would be to create a distance field $DE(p) = length(center - p) - radius$. The further along we are from the surface, the larger our values will be, the surface is defined at some distance $radius$ from the point $center$ and the function will be $0$ there. We could create many shapes in a similar fashion and combine them together with smooth operators as shown by Inigo Quilez. In the example to the right, we have the distance field from the Distance Field material, which is now visualized via ray marching. It is a Doll! You can switch to the Balls demo to see the result of evaluating a mathematical distance estimation function during the march. You can turn the AO on and off. When you visualize the iteration count instead of rendering the shape, you can see that rays that barely pass the shape also have larger counts (the darker areas around the shapes). • Raymarching Distance Fields (2008) – Inigo Quilez Great site of things related to the topic. There is also a site index with these and other topics. • Ray Marching – Michael Walczyk Overview of ray marching with code examples, good illustrations, and surface normal estimation. Box and Sphere Folding Now that we can render arbitrary 3D objects as long we know a distance estimation function, we can start thinking about 3D fractals. There is a famous Mandelbrot analogue for 3D called the Mandelbulb . In this material, however, we will be creating a fractal called the Mandelbox. It will take a bit of doing to create the Mandelbox process function and then a distance estimation function. We will start with the pieces of the former – the box and sphere fold operations. Box Folding This operation starts kind of like folding a piece of paper into a box shape. We do not end at a 3D box shape but keep folding on until we get back to a 2D piece of paper. You can already imagine that this would be impossible in the real world. And we will be doing it in 3D! We first define some folding distance $d$. Some points in the space will be less than $-d$ (blue), some will be greater than $+d$ (green), and some will be between $-d$ and $+d$ (red). The points we have drawn here are just some reference points. It is the whole axis below $-d$ or above $+d$ that will fold. Note that after this operation the area $[-d, d]$ will have more points mapped to it than before. Mathematically you could also think of this as two conditional reflections. All the points less than $-d$ will reflect over $-d$ and all the points greater than $+d$ will reflect over $+d$. This is also the idea behind how you would implement it. void boxFold(inout vec3 z, float d) { z.x = abs(z.x) > d ? sign(z.x) * 2.0 -z.x : z.x; z.y = abs(z.y) > d ? sign(z.y) * 2.0 -z.y : z.y; z.z = abs(z.z) > d ? sign(z.z) * 2.0 -z.z : z.z; Or you could write the if statements out: void boxFold(inout vec3 z, float d) { if (abs(z.x) > d) { z.x = sign(z.x) * 2.0 - z.x; } if (abs(z.y) > d) { z.y = sign(z.y) * 2.0 - z.y; } if (abs(z.z) > d) { z.z = sign(z.z) * 2.0 - z.z; } If you do not like conditions, then you could write the same thing very shortly as just: void boxFold(inout vec3 z, float d) { z = clamp(z, -1.0, 1.0) * 2.0 - z; Feel free to check that all these are mathematically equivalent and produce the correct result. Sphere Folding The next useful operation will be sphere folding. Assume we start with a grid with some points on it for reference. We use a grid to better see how the entire space transforms. For sphere folding, there are two circles centered at the origin: a smaller circle with radius $r$ and a larger circle with radius $R$. These radii will be parameters to the operation just like the folding distance was a parameter of the box folding. First, we are going to fill the smaller circle. We fill it with the downscaled version of our space. The scale factor will be $R^2 / r^2$. In our current example $R = 2$ and $r = 1$, so the inner circle will be filled with our space downscaled $2^2 / 1^2 = 4$ times. You can notice that our reference points also appear in the downscaled space. This is a very obvious source of a fractal-like structure – a larger and a smaller version of the same space occur simultaneously Second, we want to fill in the band between the smaller and larger circles. Mathematically what will happen there will be a circle inversion (in 3D sphere inversion) of the outer space against the larger circle. All the points will be reflected over the circle in a specific way. Because our point is in the origin, the condition will be $|p'| = r^2 / |p|$, where $p$ is our initial point and $p'$ is the point projected over the circle. If we want to operate with coordinates, then the formula to transform a point would be $p' = p * r^2 / (x^2 + y^2) = p * r^2 / p \cdot p $. In the following picture, we have found an inversion for the yellow reference point and the vertical line that passes it. Note that all the other points on the yellow vertical line will be further away from the circle's border (if we move up or down on the line). Thus their inversions will also be further away. The result will be (a part of a) circle. One of the properties of circle inversion is that all lines become circles in the inversion. Let us invert all the vertical lines. Because we previously chose the scale factor for space in the middle circle to be $R^2 / r^2$, then we get a pretty cool image, where the lines are connected with their counterparts in both circles. For example, take the line with the blue reference point on it. If you start from the top, you follow the vertical line to the blue dot in the outer space. Then the line goes to the circle inversion band where we meet our blue dot again. Following the line, we end up in the inner circle with the downscaled space. After meeting a third copy of the blue point. The line will then lead us out of the inner circle, through the circle inversion band, and then to the bottom of the picture. For the complete picture, we also add the inversions of the horizontal lines. This is the complete sphere fold operation. It creates an interesting locally repeating (folded) effect. Note that the points further away actually will not be visible inside the circles. If a point is further than roughly the purple point is, it will be outside the small circle radius in the downscaled copy. In the circle inversion part, it would also go outside the band. If we would only have the circle inversion (and no small circle for the downscaled space), then all the outer space would be projected into the circle. But because it is a band, then all the space further away will be cut off in the inversion by the small circle. On the right, there is an example, where you can play around with the radii. It might be useful to make it fullscreen so the (generated) grid pattern would not get too much Moire aliasing. The last slider can be used to quickly hide the sphere folding for seeing the underlying texture. Notice that already with this we can create some really cool fractal images. If you change $r$ to $0$ (and $R \neq 0$), then all of the infinite space will get packed into the only remaining circle. Code-wise the implementation could be something like this: void sphereFold(inout vec2 p, float r, float R) { float len2 = dot(p, p); float r2 = r * r; float R2 = R * R; if (len2 < r2) p *= R2 / r2; else if (len2 < R2) p *= R2 / len2; // You could also calculate the sphere-folded coordinates this way: //else if (len2 < R2) p = normalize(p) * R2 / length(p); Of course, instead of the inout keyword, you could also just return the new coordinates here and in the box folding code before. Later we will be using box folding and sphere folding in 3D for defining the Mandelbox fractal. Mathematically these operations are also called conditional scaling. Automatic Differentiation When we want to ray march a surface, we will need its distance estimation function. Remember from the Distance Fields chapter before that, at each location we need to know the distance to the nearest surface point of the shape we want to render. When we have functions $\mathbb{R}^3 \Rightarrow \mathbb{R}$, we can define a surface at $f(p) = 0$ for example. Such surfaces are called level surfaces (there is a surface when the value of a function is at some level, at level $0$ in our example). So, we need to find a distance between our current point $p$ and the closest point on the surface Note that we know $f$, it is the function whose values we use to define the level surface, but it is not the distance function. We do not know how the function $f$ behaves. But we can evaluate $f$ at $p$ or $p_c$ and get a number. What we are interested in is that is the shortest distance to the surface, ie, $|p_c - p|$, but we do not know where $p_c$ is. All we know about $p_c$ is that it is approximately the closest point to $p$, where $f(p_c) = 0$. To get to $p_c$ we need to know how the function $f$ changes around $p$. Which way does the function get closer to the surface? Which way would we be moving away from the surface? More specifically we want to know, in what direction would we move fastest towards the surface and thus to $p_c$? That will be the gradient of $f$ at $p$. Written out it is: $\nabla f(p) = \begin{pmatrix} \nabla f(p)_x \\ \nabla f(p)_y \\ \nabla f(p)_z \end{pmatrix} = \begin{pmatrix} \delta f(p) / \delta x \\ \delta f(p) / \delta y \\ \delta f(p) / \delta z \end{pmatrix} All this taunting notation says is that it is a vector, where each coordinate shows how much the function changes when we change the corresponding ($x$, $y$, or $z$) coordinate of the input. We have found such gradients already during the Computer Graphics course in the Bump Mapping homework. There we used a finite difference method to find it, but here we will soon be using another approach to find it. Assuming we have the gradient, we can approximate the closest distance to the surface by taking inspiration from a technique called the Newton's method. It is an iterative method that estimates the distance to the root of a function (where the value is $0$) via $f(p) / f'(p)$. One property of that technique is that closer we are to the actual root, the more accurate the estimation. So, it will be ok if we have a not-so-good estimation if our ray is still far away. We may make more steps, but when we are close to the surface, just one iteration of the Newton's method should be accurate enough. There is a small problem. The Newton's method is meant for $\mathbb{R} \Rightarrow \mathbb{R}$ functions. We have a $\mathbb{R}^3 \Rightarrow \mathbb{R}$ function. Thus, we need to use the Directional Newton's Method by Levin and Ben-Israel. Intuitively the problem with an $\mathbb{R}^3 \Rightarrow \mathbb{R}$ function is that which way do we estimate the distance to $0$? We need to specify that we want the closest one, $f(p_c) = 0$, whose direction is given by the gradient. Generally, the formula for the distance estimation in direction $d$ with the Directional Newton's Method is: $ \displaystyle{\frac{f(p)}{\nabla f(p) \cdot d} \cdot d} $ When we use the direction given by the gradient, we get: $ \displaystyle{\frac{f(p)}{\nabla f(p) \cdot \nabla f(p)} \cdot \nabla f(p) = \frac{f(p)}{| \nabla f(p)| ^2} \cdot \nabla f(p)} $ Because we are only interested in the distance, ie, length of the approximation, we can do: $ \displaystyle{\left | \frac{f(p)}{\nabla f(p) \cdot \nabla f(p)} \cdot \nabla f(p) \right | = \frac{|f(p)|}{| \nabla f(p)| ^2} \cdot |\nabla f(p)| = \frac{|f(p)|}{| \nabla f(p)| }} $ Note that $f(x)$ is a scalar here. The only vector (apart from $p$) is the gradient, which is why we could move the length calculation inside (bring the scalar out). We need one more piece of the puzzle. Right now we only thought about the fact that our function outputs a single number in the end for a given 3D point. For escape-time fractals (like Julia and Mandelbrot, but also Mandelbox later on) our recursive process function is (most of the time) $\mathbb{R}^3 \Rightarrow \mathbb{R}^3$. This means that change in every one of the input coordinates could result in changes in any one of the output coordinates. We do not just have one number, but actually, three different numbers that change in the output. We will need a matrix to hold together the effects every input coordinate can have on every output coordinate (until we collapse them to a single scalar value). This is called the Jacobian matrix: $J = \begin{pmatrix} \delta f_x(p) / \delta x && \delta f_x(p) / \delta y && \delta f_x(p) / \delta z \\ \delta f_y(p) / \delta x && \delta f_y(p) / \delta y &&\delta f_y(p) / \delta z \\ \delta f_z (p) / \delta x && \delta f_z(p) / \delta y && \delta f_z(p) / \delta z \end{pmatrix}$ It looks very scary, but you can just look at the different columns. The first column tells us what happens with our output point when $x$ of the input changes. The second column shows the output change when $y$ of the input changes. The last column does the same for input's $z$. You can also look at the rows. Each row shows the change of our function in only one coordinate. The rows are basically $\mathbb{R}^3 \Rightarrow \mathbb{R}$ gradients separately for output's $x$, $y$, and $z$ coordinates. Fortunately, our escape-time fractals give out a length in the end. The length we use to determine if a point has escaped or not. So at one point, we will move back from $\mathbb{R}^3 \Rightarrow \ mathbb{R}$. When we do that we will get again a gradient from our Jacobian. The way to transform the Jacobian back to a gradient, when the function takes the length of the point in the end, turns out to be: $\nabla f(p) = \displaystyle{\frac{f(p) \cdot J}{|f(p)|}}$ If we now apply the Directional Newton's Method on the distance of the function, we get: $ \displaystyle{\frac{|f(p)|}{|\nabla f(p)|}} = \displaystyle{\frac{|f(p)|}{ \displaystyle{\left | \frac{f(p) \cdot J}{|f(p)|}\right |} }} = \displaystyle{\frac{|f(p)|}{ \frac{1}{|f(p)|} \cdot |f(p) \cdot J|}} = \displaystyle{\frac{|f(p)| \cdot |f(p)|}{|f(p) \cdot J|}} = \displaystyle{\frac{|f(p)|^2}{|f(p) \cdot J|}}$ So, to find the distance to the nearest surface for ray marching a fractal, we will need to either find the Jacobian or at least account for the factors of the Jacobian that affect the output's length. In the next chapter, we will see how both of these solutions work with the Mandelbox. Before that, there is still one piece missing. We need a way to find the Jacobian (the three gradients). Dual Numbers The common ways to find a derivative (or the gradient, or the Jacobian – depends on the function we have) are: • Analytical derivation. We work it out on paper. Very difficult. • Finite Difference (FD) methods. The forward, backward, and central difference methods. We sample the neighborhood and find the differences in each direction. This is what we did in the Bump Mapping task. The issue is that we need to sample the function several times, resulting in the algorithm being run again as many times as we need to sample enough of the neighborhood (depends on the method used and dimensionality of the function). Also has an arbitrary step size, the choice of which could cause inaccurate results. • Automatic Differentiation (AD). We will keep track of the running derivative and change it according to how the function changes. We can use dual numbers for this to make it quite simple to do. Dual numbers are quite an easy concept by themselves. They are 2-dimensional numbers, like complex numbers, but instead of the imaginary unit $i$ they have an abstract unit $\epsilon$. The property of $\epsilon$ is that $\epsilon^2 = 0$. Of course $\epsilon \neq 0$. This makes $\epsilon$ a nilpotent. Here are a few examples of some dual numbers: $(a + b\epsilon) ~~~~ a, b \in \mathbb{R}$ $(1 + 2\epsilon) + (5 + 2 \epsilon) = (6 + 4 \epsilon)$ $(1 + 2\epsilon) \cdot (5 + 2 \epsilon) = 5 + 2 \epsilon + 2 \epsilon \cdot 5 + 2 \epsilon \cdot 2 \epsilon = (5 + 12 \epsilon)$ Notice that with the last multiplication the $2\epsilon \cdot 2\epsilon$ term disappears because $\epsilon^2 = 0$. You can see that the real component has an effect on the abstract component, but the opposite is not true. The abstract component does not have an effect on the real component. This is different from complex numbers. There are calculation rules for doing AD with dual numbers. It turns out to be quite easy to now use dual numbers to calculate the running first derivative. If we have a function we want to find the derivative for at certain values, we can just replace all our inputs and constants with dual numbers and the abstract part will store the first derivative. For example, if we have a simple function $f(x) = 3x^2$ we can get the value of the function and its first derivative at $4$ by using $(4 + 1 \cdot \epsilon)$ instead of $4$. It would follow: $f(4 + \epsilon) = 3 \cdot (4 + \epsilon)^2 = 3 \cdot (16 + 2 \cdot 4 \epsilon) = (48 + 24 \epsilon)$ So $f(4) = 48$ and $f'(4) = 24$. No sweat. Note that the abstract part of the input should be $1$ if it is the input whose effect on the function's change (derivative with respect to it) we want to calculate. In the case of constants, the abstract part should be zero. In the case of multivariate functions (functions with multiple inputs) $\mathbb{R}^n \Rightarrow \mathbb{R}$ we can find the gradient. For that, we use vectors in the abstract part that have $1$ in the place where we want to store the partial derivative of a specific variable for and $0$ elsewhere. For example, take a function $f(x, y) = 2 x + xy$. If we want to evaluate it and its gradient at $(5, 2)$, then we use $(5 + [1, 0]\epsilon, 2 + [0, 1]\epsilon)$. It follows that: $f(5 + [1, 0]\epsilon, 2 + [0, 1]\epsilon) = 2 \cdot (5 + [1, 0]\epsilon) + (5 + [1, 0]\epsilon) \cdot (2 + [0, 1]\epsilon) = (10 + [2, 0]\epsilon) + (10 + [0, 5]\epsilon + [2, 0]\epsilon + 0) = (20 + [4, 5]\epsilon)$ So, the values are $f(5, 2) = 20$ and $\nabla f(5, 2) = [4, 5]$. Because this is a linear function, we can tell that if we are at $(5, 2)$ and we increase $x$ by $1$, then we increase the function's value by $4$, if we increase $y$ by $1$, we increase the function's value by $5$. For completeness, we can now try the Directional Newton's method we described before. According to that, approximated vector to the closest point on the $0$-level surface would be $|f(5, 2)| / |\ nabla f(5, 2)| = 20 / \sqrt{4^2 + 5^2} \approx 6.4$ units away. We can use WolframAlpha to find the actual root, which turns out to be $[0, -2]$, which is $\sqrt{(5-0)^2 + (2+2)^2} \approx 6.4$ units away like we estimated. It will not always give the correct answer in the first iteration like that. Here are two other functions and results from several iteration steps: │ │$x + y^2$ │$2x + xy$ │ │1│$(3, 4)$ │$19+ [1, 8]\epsilon$ │$(5, 2)$ │$20 + [4, 5]\epsilon$ │ │2│$(2.71, 1.66)$ │$5.47 + [1, 3.32]\epsilon$ │$(3.05, -0.44)$│$4.76 + [1.56, 3.05]\epsilon$ │ │3│$(2.25, 0.15)$ │$2.28 + [1, 0.31]\epsilon$ │$(2.42, -1.68)$│$0.78 + [0.32, 2.42]\epsilon$ │ │4│$(0.17, -0.48)$ │$0.40 + [1, -0.97]\epsilon$│$(2.37, -1.99)$│$0.01 + [~0, 2.37]\epsilon$ │ │5│$(-0.04, -0.28)$│$0.04 + [1, -0.56]\epsilon$│$(2.37, -2)$ │$~0 + [~0, 2.37]\epsilon$ │ │6│$(-0.07, -0.26)$│$ ~0 + [1, -0.57]\epsilon$ │ │ │ We can see that it takes several steps from these chosen starting points to reach $0$. However, as we get closer, we get better steps. For example with $2x + xy$ from iteration 1 to 2 we have a decrease in the value by about 75%. But from iteration 3 to 4 it is about 99%. With ray marching it is not a big problem, we will just make a few more steps. Finally, we can see what happens in the $\mathbb{R}^3 \Rightarrow \mathbb{R}^3$ case. We start similarly in that we use vectors for the abstract part of our inputs: $f(x' + [1, 0, 0]\epsilon, y' + [0, 1, 0]\epsilon, z' + [0, 0, 1]\epsilon)$. After calculating the function, we get out a 3D point that also has some abstract parts $f(p) = (x' + [a, b, c]\epsilon, y' + [d, e, f]\epsilon, z' + [g, h, i]\epsilon)$. The numbers, and thus the abstract parts, do not become one as they did before. Instead we have three gradients here. The gradient $[a, b, c]$ shows how the input $x$, $y$, and $z$ affect the output's $x$ coordinate. The other gradients are similar but with respect to $y$ and $z$ coordinates. You can probably already tell that these form the Jacobian: $J = \begin{pmatrix} a && b && c \\ d && e && f \\ g && h && i \end{pmatrix} = \begin{pmatrix} \delta f_x(p) / \delta x && \delta f_x(p) / \delta y && \delta f_x(p) / \delta z \\ \delta f_y(p) / \ delta x && \delta f_y(p) / \delta y &&\delta f_y(p) / \delta z \\ \delta f_z(p) / \delta x && \delta f_z(p) / \delta y && \delta f_z(p) / \delta z \end{pmatrix}$ Great, so all we have to do is calculate every part of our algorithm with dual numbers and we get out a Jacobian. From the Jacobian, we can get the gradient. Then we use that gradient in the Directional Newton's Method to estimate the distance to the nearest surface (in the direction of the gradient). Implementation-wise we can pass along a vec3 for the real part and a mat3 for the Jacobian in the abstract part. function fractal(vec3 p) { vec3 dp = mat3(1.0); for (int i = 0; i < maxI; i++) { doStuff(p, dp); return dot(p, p) / length(p * dp); // The distance estimation via Directional Newton's Method. This is what we found just before starting the Dual Numbers subchapter. The Mandelbox Now we are ready to construct the Mandelbox. It is an escape-time fractal, just like Julia and Mandelbrot were, but in 3D and with a different formula: $z_{n+1} = sphereFold(boxFold(z_n)) \cdot scale + z_0 \cdot offset$ To render it, we need to estimate its distance function. Using dual numbers for this, we already know how to add and multiply different dual numbers. However, the box folding and sphere folding are worth looking into. For the box folding, we reflect an axis over a constant when the absolute value of the coordinate is larger than some constant $d$. We can use conditionals with dual numbers. Thus, if we have a $p$, where we need to flip, we just multiply the whole dual number with $-1$. If gradients are in the rows of the Jacobian, then the code would be: void boxFold(inout vec3 p, inout mat3 dp) { if (abs(p.x) > d) { p.x *= -1.0; dz[0].x *= -1.0; dz[1].x *= -1.0; dz[2].x *= -1.0; if (abs(p.y) > d) { p.y *= -1.0; dz[0].y *= -1.0; dz[1].y *= -1.0; dz[2].y *= -1.0; if (abs(p.z) > d) { p.z *= -1.0; dz[0].z *= -1.0; dz[1].z *= -1.0; dz[2].z *= -1.0; Parts of this could be optimized, written in vector form, etc, but mathematically we need to do for each axis $a \in [x, y, z]$: $-1 \cdot (p_a + \nabla f(p)_a \cdot \epsilon)$, if $|p_a| > d$. Or when write the gradient out: $-1 \cdot (p_a + \left [\frac{\delta f_a(p)}{\delta x}, \frac{\delta f_a(p)}{\delta y}, \frac{\delta f_a(p)}{\delta z} \right]\epsilon)$, if $|p_a| > d$. This makes sense as we change the direction of one axis, we change the direction of the output's corresponding coordinate. With sphere folding it is more complicated. There are two branches, in the first we just multiply by a constant $R^2 / r^2$. That goes like regular scalar constant multiplication. However, in the other branch, we divide with $R^2 / |p|^2$. The proposed operation to perform on the all the columns of the Jacobian $J = |c_{fx}, c_{fy}, c_{fz}|$ in that branch is: $\displaystyle{\frac{R^4}{|p|^4} \cdot \left ( c_i - p \cdot \frac{2 (p \cdot c_i)}{|p|^2} \right )}$. The $p \cdot c_i$ is the dot product. The real number part we multiply with just $R^2 / |p|^2$. Honestly, I have no idea where this comes from. It makes some sense as at sphere folding all the output coordinates depend on all the input coordinates. So it is logical to change all the elements of the Jacobian and account for the interplay. When using that code it seemed to give a better result by just multiplying the matrix with $R^2 / |p|^2$ instead of $R^4 / |p|^4$. Code-wise sphere folding (with my replacement) would be: void sphereFold(inout vec3 p, inout mat3 dp) { float len2 = dot(z, z); if (len2 < r2) { float s = R2 / r2; multiplyScalar(p, dp, s); } else if (len2 < R2) { float s = R2 / len2; dp[0] = dp[0] - p * 2.0 * dot(p, dp[0]) / len2; dp[1] = dp[1] - p * 2.0 * dot(p, dp[1]) / len2; dp[2] = dp[2] - p * 2.0 * dot(p, dp[2]) / len2; multiplyScalar(p, dp, s); When these functions are good, then all that is left is the main iteration: function mandelbox(vec3 p) { vec3 dp = mat3(1.0); mat3 dc = dz; vec3 c = z; for (int i = 0; i < maxI; i++) { boxFold(p, dp); sphereFold(p, dp); multiplyScalar(p, dp, scale); add(p, dp, c * offset, mat3(0.0)); // Instead of mat3(0.0) there should be "dc" here, but that caused a lot of aliasing and a lumpy fractal return dot(p, p) / length(p * dp); // The distance estimation via Directional Newton's Method. This is what we found just before starting the Dual Numbers subchapter. This will render you a Mandelbox, but it will be quite flat-looking. You could add ambient occlusion, but what we really want here are the surface normals. We could get the surface normals with a finite difference method, which would need us to sample the mandelbox function more times. However, we can observe that when we hit the surface of a fractal, we always do it in the same direction as the gradient. So we could just take the gradient to be our surface normal. In principle, this should be the negated gradient, but for some reason, it is the positive gradient that works for me. $\displaystyle{normal = \frac{f(p) \cdot J}{|f(p) \cdot J|}}$ In code we would thus have: function mandelbox(vec3 p) { vec3 dp = mat3(1.0); mat3 dc = dz; vec3 c = z; for (int i = 0; i < maxI; i++) { boxFold(p, dp); sphereFold(p, dp); multiplyScalar(p, dp, scale); add(p, dp, c * offset, mat3(0.0)); return deResult( // This could be a struct dot(p, p) / length(p * dp), normalize(p * dp) There is an alternative solution, which would just keep track of the Jacobian changes the magnitude of the distance to the closest surface. For that, we will first analyze the final distance calculation from the previous approach: $\displaystyle{\frac{|f(p)|^2}{|f(p) \cdot J|}}$ Keep in mind that we are only interested in the distance and assume that the Jacobian changes the distance of a vector by some specific value $a$ when multiplied from the left. Then we would have: $\displaystyle{\frac{|f(p)|^2}{|f(p) \cdot a|} = \frac{|f(p)|^2}{|f(p)| \cdot |a|} = \frac{|f(p)|}{|a|}}$ Now, all we need is to check if our assumption is correct and then know the value of $a$. Our Mandelbox process starts with a unit Jacobian. You can think of it as a scale matrix with the scale factors being all $1$. Then we are doing box folding. That operation only changes the sign of one row, so does not make the Jacobian scale anything bigger or smaller. Next, we have sphere folding. For that we need to analyze what is happening when we do: $\displaystyle{\frac{R^2}{|p|^2} \cdot \left ( c_i - p \cdot \frac{2 (p \cdot c_i)}{|p|^2} \right )}$. Note that here we use $R^2 / |p|^2$ as this is also what I use in the code. $\displaystyle{\frac{R^2}{|p|^2} \cdot \left ( c_i - p \cdot \frac{2 \cdot |p| \cdot |c_i| \cdot \cos(\angle{p, c_i})}{|p|^2} \right ) = \frac{R^2}{|p|^2} \cdot \left ( c_i - \frac{p}{|p|} \cdot 2 \ cdot |c_i| \cdot \cos(\angle{p, c_i}) \right ) = \frac{R^2}{|p|^2} \cdot \left ( c_i - 2 \cdot \hat{p} \cdot (\hat{p} \cdot c_i) \right ) }$. Here $\hat{p}$ is normalized $p$ and $\hat{p} \cdot c_i$ is the dot product. What we find is that this is reflecting the column of the Jacobian over our vector $p$. As it is a reflection, it does not change the length of $c_i$. Meaning that the Jacobian here will become a rotation matrix, but all the axes still have the scalar factor just $R^2 / |p|^2$. In the other sphere folding branch, there is only scaling with the scalar factor $R^2 / r^2$. We then multiply with $scale$, which also would multiply the Jacobian. Lastly, we add the $offset$, which with our mat(0.0) does not add anything to the Jacobian. Thus, if we are only interested in the closest distance, then we could substitute the Jacobian matrix with just a running scalar value $a$: void add(vec3 p, float a, vec3 q, float dq) { // Add dq to a // (in our case dq is 0, so we do nothing) void multiplyScalar(vec3 p, float a, float s) { // Multiply a with s void sphereFold(vec3 p, float a) { // Multiply a with either // R^2 / r^2 // or // R^2 / |p|^2 void boxFold(vec3 p, float a) { // do nothing with a function mandelbox(vec3 p) { vec3 c = z; vec3 dp = mat3(1.0); mat3 dc = dz; float a = 1.0; for (int i = 0; i < maxI; i++) { boxFold(p, dp a); sphereFold(p, dp a); multiplyScalar(p, dp a, scale); add(p, dp a, c * offset, 0.0); return deResult( dot(p, p) / length(p * dp), normalize(p * dp) length(p) / abs(a), This is how the code from many Mandelbox implementations like the one in this Shadertoy comes from. The only issue here is that we do not have the information to assess the normal anymore. Thus solutions like that usually use a finite difference method to assess the normal, resampling the distance estimation function several times at nearby coordinates. On the right, there is an example with the Mandelbox. The example is split into two. The left part of the screen shows the simplified version, where normals are estimated by a finite-difference (FD) method. The right part is the implementation with the Jacobian matrix, where the normal is calculated from the gradient vector directly. The first slider (FD/AD cutoff) controls the split. The second slider (Cam Z) controls. The next two sliders (Hit Min/Max Dist.) control the distance from where a ray is considered to be on the surface. Then come the sphere fold sliders (Min/Max Radius), the scale parameter (MB Scale), and the box fold parameter (Box Fold Dist.). Notice that the FD and AD solutions do not produce completely identical pictures. There are weird circular artefacts in the AD solution, which are especially obvious if you increase the Hit Min Dist. to around 0.29 and zoom out. At the bottom of the example, there are a few saved scenes (parameter configurations) to check out. If you want to explore the Mandelbox further, then both the AD and FD versions are available in Fragmentarium and this site has Mandelbox parameters for Mandelbulber 2.
{"url":"https://cglearn.eu/pub/advanced-computer-graphics/fractal-rendering","timestamp":"2024-11-06T04:33:51Z","content_type":"text/html","content_length":"86929","record_id":"<urn:uuid:da65ca5a-ae53-4d1b-972d-cbd677cdc902>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00290.warc.gz"}
High Precision for Hard Processes (HP2 2024) Fragmentation of heavy quarks into heavy-flavoured hadrons receives both perturbative and non-perturbative contributions. We consider perturbative QCD corrections to heavy quark production in $e^+e^ -$ collisions to next-to-next-to-leading order accuracy in QCD with next-to-next-to-leading-logarithmic resummation of quasi-collinear and soft emissions. We study multiple matching schemes, and multiple regularisations of the soft resummation, and observe a significant dependence of the perturbative results on these ingredients, suggesting that NNLO+NNLL perturbative accuracy may not lead to real gains unless the interface with non-perturbative physics is properly analysed. We confirm previous evidence that $D^{*+}$ experimental data from CLEO/BELLE and from LEP are not reconcilable with perturbative predictions employing standard DGLAP evolution. We extract non-perturbative contributions from $e^+e^-$ experimental data for both $D$ and $B$ meson fragmentation. Such contributions can be used to predict heavy-quark fragmentation in other processes, e.g. DIS and proton-proton collisions.
{"url":"https://agenda.infn.it/event/35067/timetable/?view=standard_numbered","timestamp":"2024-11-07T22:20:58Z","content_type":"text/html","content_length":"445424","record_id":"<urn:uuid:d359da32-49ba-4b51-ab21-e55f8b9e99ac>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00237.warc.gz"}
At least one... Imagine flipping a coin a number of times. Can you work out the probability you will get a head on at least one of the flips? At Least One... printable sheet Imagine flipping a coin three times. What's the probability you will get a head on at least one of the flips? Charlie drew a tree diagram to help him to work it out: He put a tick by all the outcomes that included at least one head. How could Charlie use his tree diagram to work out the probability of getting at least one head? How could he use it to work out the probability of getting no heads? What do you notice about these two probabilities? Devise a quick way of working out the probability of getting at least one head when you flip a coin 4, 5, 6... times. What is the probability of getting at least one head when you flip a coin ten times? Once you've worked out a neat strategy for the coins problem, take a look at these related questions which can be solved in a similar way: Imagine choosing a ball from this bag (which contains six red balls and four blue balls) and then replacing it. If you did this three times, what's the probability that you would pick at least one green ball? What if you didn't replace the ball each time? Imagine a class with 15 girls and 13 boys. Three children are chosen at random to represent the class at School Council. What is the probability that there will be at least one boy? Why not try the problem Same Number! next? Getting Started If there isn't at least one head, what must be true about every flip of the coin? Student Solutions We received a large number of responses of excellent quality. Ben from St Peter's followed the tree diagram and calculated out the answer: If you flip a coin three times the chance of getting at least one head is 87.5%. To get this outcome I used the provided tree diagram to establish how many outcomes used one head. Llewellyn from St Peter's and Diamor from Willington County Grammar School both observed an interesting pattern and expanded the answer to flipping ten coins: If you flip a coin 3 times the probability of getting at least one heads is 7 in 8 by reading the table. This table also works the opposite way, the chances of Charlie getting no heads is 1 in 8 because out of all the outcomes only one of them has only tails. I notice that if you add these probabilities together you get the total amount of outcomes (7+1=8). If you flip a coin 4 times the probability of you getting at least one heads is 15 in 16 because you times the amount of outcomes you can get by flipping 3 coins by 2, it results in 16 and then you minus 1 from it. With 5 coins to flip you just times 16 by 2 and then minus 1, so it would result with a 31 in 32 chance of getting at least one heads. With 6 coins you times by 2 and minus by 1 again resulting in a 63 in 64 chance. To find the chance of getting at least one heads if you flip ten coins you times 64 by 2 four times or by 16 once and then minus 1, this results in a 1063 in 1064 chance of getting at least one Neeraj from Wilson School developed a generalization for different numbers of possible outcomes: I noticed that when you add the probabilities together they make a whole. A quick way of figuring out how many times you get at least one head is, that it is always the no. Of possible outcomes minus one over the no. of possible outcomes So: if No of possible outcomes = n the equation would be: P= (n- 1)/n One student suggested how to calculate the number of desired outcomes: If the number of times flipped =p Then the number of outcomes that contain a head is$2^p-1$ So for flipping a coin $10$ times, the number of outcomes with at least one head is $2^{10}-1 = 1024 - 1 = 1023$ Luke from Maidstone Grammar School went further to investigate the next part of the question: When there are 4 green balls in the bag and there are 6 red balls the probability of randomly selecting a green ball is 0.4 ($\frac{2}{5}$) and the probability selecting a red ball is 0.6 ($\frac{3} If a ball is selected and then replaced the probability of picking a red ball or a green ball is the same every time. When 3 balls are picked with replacement the probability of getting at least one green is 1-(the probability of getting 3 reds) Because the probability is the same every time the chance of getting 3 reds is $0.6^3=0.216$ (or in fractions $(\frac{3}{5})^3 = \frac{27}{125}$). So the probability of getting at least one green is $1-0.216=0.784$ (or in fractions $1 - \frac{27}{125} = \frac{98}{125}$). When the balls are not replaced the probability of getting at least one green is still 1-(the probability of getting 3 reds). In each draw the probability of drawing a red ball is $\frac{\text{the number of red balls}}{\text{the total number of balls}}$ On the first draw there are 6 red balls out of 10 so the probability of picking a red is $\frac{6}{10}$. On the second draw there are 5 red balls out of 9 so the probability of picking a red is $\frac{5}{9}$. On the final draw there are 4 red balls out of 8 so the probability of picking a red is $\frac{4}{8}$. The probability of this sequence of draws happening is the probability of each draw multiplied together. i.e.: $\frac{6}{10}\times\frac{5}{9}\times\frac{4}{8}=\frac{1}{6}$ The probability of drawing all reds is $\frac{1}{6}$ and so the probability of drawing at least one green is $\frac{5}{6}$. Helen from Stroud finished up the problem: When children are selected for the school council they are not replaced. The children are selected one after another and each time the probability of a boy being selected is P(boy selected first) = $\frac{\text{the number of boys in the class}}{\text{the total number of children in the class}}$ Note: the class refers to students who have not already been made part of the council. To find the probability that there will be at least one boy, find the probability that all three are girls, and then P(at least one boy selected) = 1-P(all girls selected) to get the answer. The probability of picking a girl is P(girl selected first) = $\frac{\text{number of girls in class}}{\text{total number in class}}= \frac{15}{28}$ Then P(second selected also a girl) = $\frac{14}{27}$ And P(third selected also a girl) = $\frac{13}{26}$ So P(all girls selected) = $\frac{15}{28}\times\frac{14}{27}\times\frac{13}{26} = \frac{5}{36}$ Then the answer is P(at least one boy selected) = 1 - P(all girls selected) = 1 - $\frac{5}{36}$ = $\frac{31}{36}$ Well done to everyone. Teachers' Resources Why do this problem? An important strategy in answering probability questions requires us to consider whether it is easier to work out the probability of an event occurring or the probability of it NOT occurring. In this problem, learners are introduced to tree diagrams and the concept of mutually exclusive events whose probabilities sum to 1 Possible approach Ask the introductory question: "Imagine flipping a coin three times. What's the probability you will get a head on at least one of the flips?" Give the class time to explore on their own or in pairs, then share the different methods they used to work it out. If no-one has suggested a tree diagram, start building a tree diagram on the board and ask for suggestions of how to complete it. Then ask the class to identify which branches contain at least one head, and to use this to work out the probability of getting at least one head. With the class working in groups of three or four, challenge them to build on what they've done by asking them to work out the probability of getting at least one head in four flips. Wander around the class and ask groups to move on to five flips, six flips... as soon as they've finished the one they are working on. As each group discovers a neat way of working out these probabilities, first challenge them to work out the probability of getting at least one head in twenty flips, and then, assuming they can apply their method successfully, give them one of the related questions from the Once most of the groups have a successful method for the at least one head problems, bring the class together to discuss what they noticed when working on their tree diagrams, and to justify the methods they used to work out the probabilities. Finally, the remaining questions from the problem can be used with the class to consolidate these ideas. Key questions What is the probability of getting at least one head? What is the probability of getting no heads? How are the probabilities related? Possible support Spend some time working together as a class on listing probabilities, and then move to the tree diagram representation simply as an efficient way of listing systematically. You can read about some of the issues which might arise when teaching probability in this article. Possible extension Same Number! provides a natural extension to this problem.
{"url":"https://nrich.maths.org/problems/least-one","timestamp":"2024-11-07T19:28:14Z","content_type":"text/html","content_length":"48778","record_id":"<urn:uuid:3e429d54-6aec-4099-9b8b-84d05249cfd3>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00349.warc.gz"}
Celestial Bloodline Kyle, burdened by a perceived lack of talent compared to his peers, endured mockery and condescension from others, leading him to abandon his diligent efforts. Understanding that the world favored the strong in a perilous environment, he felt helpless, convinced that his limited potential left him with no viable path to success. However, a turning point emerged when his brother proposed an unexpected idea – taking the Royal Academy test to gain valuable experience. Kyle, initially incredulous at the suggestion, couldn't fathom attempting such a feat. He doubted the feasibility of passing a test for one of the kingdom's most prestigious academies, especially considering his lack of dedication even after awakening his Despite his skepticism and the apparent impossibility of success, Kyle, motivated by familial ties, resolved to undertake the challenge.
{"url":"https://lightnovelonline.com/novel/celestial-bloodline","timestamp":"2024-11-09T12:46:15Z","content_type":"text/html","content_length":"205397","record_id":"<urn:uuid:79dd549a-6ac6-47b1-88b4-481a37df8b81>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00311.warc.gz"}
True scale-free networks Network theory The underlying scale invariance properties of naturally occurring networks are often clouded by finite-size effects due to the sample data. True scale-free networks hidden by finite size effects We analyze about two hundred naturally occurring networks with distinct dynamical origins to formally test whether the commonly assumed hypothesis of an underlying scale-free structure is generally viable. This has recently been questioned on the basis of statistical testing of the validity of power-law distributions of network degrees. Specifically, we analyze by finite-size scaling analysis the datasets of real networks to check whether the purported departures from power-law behavior are due to the finiteness of sample size. We find that a large number of the networks follow a finite-size scaling hypothesis without any self-tuning. This is the case of biological protein interaction networks, technological computer and hyperlink networks, and informational networks in general. Marked deviations appear in other cases, especially involving infrastructure and transportation but also in social networks. We conclude that the underlying scale invariance properties of many naturally occurring networks are extant features often clouded by finite-size effects due to the nature of the sample data.
{"url":"https://lims.ac.uk/paper/true-scale-free-networks-hidden-by-finite-size-effects/","timestamp":"2024-11-03T23:18:07Z","content_type":"text/html","content_length":"87048","record_id":"<urn:uuid:b96508c1-e7f6-45cc-81ae-dd99d8acef42>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00556.warc.gz"}
Module Ssrmatching_plugin.Ssrmatching The type of context patterns, the patterns of the set tactic and : tactical. These are patterns that identify a precise subterm. val pr_cpattern : cpattern -> Pp.t The type of rewrite patterns, the patterns of the rewrite tactic. These patterns also include patterns that identify all the subterms of a context (i.e. "in" prefix) val pr_rpattern : rpattern -> Pp.t type ('ident, 'term) ssrpattern = | T of 'term | In_T of 'term | X_In_T of 'ident * 'term | In_X_In_T of 'ident * 'term | E_In_X_In_T of 'term * 'ident * 'term | E_As_X_In_T of 'term * 'ident * 'term AST for rpattern (and consequently cpattern) type pattern = Evd.evar_map * (Constr.constr, Constr.constr) ssrpattern val pp_pattern : Environ.env -> pattern -> Pp.t val redex_of_pattern : ?resolve_typeclasses:bool -> Environ.env -> pattern -> Constr.constr Evd.in_evar_universe_context Extracts the redex and applies to it the substitution part of the pattern. raises Anomaly if called on In_T or In_X_In_T val interp_rpattern : Goal.goal Evd.sigma -> rpattern -> pattern interp_rpattern ise gl rpat "internalizes" and "interprets" rpat in the current Ltac interpretation signature ise and tactic input gl val interp_cpattern : Goal.goal Evd.sigma -> cpattern -> (Genintern.glob_constr_and_expr * Geninterp.interp_sign) option -> pattern interp_cpattern ise gl cpat ty "internalizes" and "interprets" cpat in the current Ltac interpretation signature ise and tactic input gl. ty is an optional type for the redex of cpat The set of occurrences to be matched. The boolean is set to true * to signal the complement of this set (i.e. {-1 3}) type subst = Environ.env -> Constr.constr -> Constr.constr -> int -> Constr.constr subst e p t i. i is the number of binders traversed so far, p the term from the pattern, t the matched one val eval_pattern : ?raise_NoMatch:bool -> Environ.env -> Evd.evar_map -> Constr.constr -> pattern option -> occ -> subst -> Constr.constr eval_pattern b env sigma t pat occ subst maps t calling subst on every occ occurrence of pat. The int argument is the number of binders traversed. If pat is None then then subst is called on t. t must live in env and sigma, pat must have been interpreted in (an extension of) sigma. raises NoMatch if pat has no occurrence and b is true (default false) t where all occ occurrences of pat have been mapped using subst val fill_occ_pattern : ?raise_NoMatch:bool -> Environ.env -> Evd.evar_map -> Constr.constr -> pattern -> occ -> int -> Constr.constr Evd.in_evar_universe_context * Constr.constr fill_occ_pattern b env sigma t pat occ h is a simplified version of eval_pattern. It replaces all occ occurrences of pat in t with Rel h. t must live in env and sigma, pat must have been interpreted in (an extension of) sigma. raises NoMatch if pat has no occurrence and b is true (default false) the instance of the redex of pat that was matched and t transformed as described above. val pr_dir_side : ssrdir -> Pp.t val mk_tpattern : ?p_origin:(ssrdir * Constr.constr) -> Environ.env -> Evd.evar_map -> (Evd.evar_map * Constr.constr) -> (Constr.constr -> Evd.evar_map -> bool) -> ssrdir -> Constr.constr -> Evd.evar_map * tpattern mk_tpattern env sigma0 sigma_p ok p_origin dir t compiles a term t living in env sigma (an extension of sigma0) intro a tpattern. The tpattern can hold a (proof) term p and a diction dir. The ok callback is used to filter occurrences. the compiled tpattern and its evar_map raises UserEerror is the pattern is a wildcard type find_P = Environ.env -> Constr.constr -> int -> k:subst -> Constr.constr findP env t i k is a stateful function that finds the next occurrence of a tpattern and calls the callback k to map the subterm matched. The int argument passed to k is the number of binders traversed so far plus the initial value i. t where the subterms identified by the selected occurrences of the patter have been mapped using k raises NoMatch if the raise_NoMatch flag given to mk_tpattern_matcher is true and if the pattern did not match raises UserEerror if the raise_NoMatch flag given to mk_tpattern_matcher is false and if the pattern did not match type conclude = unit -> Constr.constr * ssrdir * (Evd.evar_map * UState.t * Constr.constr) conclude () asserts that all mentioned occurrences have been visited. the instance of the pattern, the evarmap after the pattern instantiation, the proof term and the ssrdit stored in the tpattern raises UserEerror if too many occurrences were specified val mk_tpattern_matcher : ?all_instances:bool -> ?raise_NoMatch:bool -> ?upats_origin:(ssrdir * Constr.constr) -> Evd.evar_map -> occ -> (Evd.evar_map * tpattern list) -> find_P * conclude mk_tpattern_matcher b o sigma0 occ sigma_tplist creates a pair a function find_P and conclude with the behaviour explained above. The flag b (default false) changes the error reporting behaviour of find_P if none of the tpattern matches. The argument o can be passed to tune the UserError eventually raised (useful if the pattern is coming from the LHS/RHS of an equation) val pf_fill_occ_term : Goal.goal Evd.sigma -> occ -> (Evd.evar_map * EConstr.t) -> EConstr.t * EConstr.t val cpattern_of_term : (char * Genintern.glob_constr_and_expr) -> Geninterp.interp_sign -> cpattern do_once r f calls f and updates the ref only once assert_done r return the content of r. raises Anomaly is r is None val unify_HO : Environ.env -> Evd.evar_map -> EConstr.constr -> EConstr.constr -> Evd.evar_map val pf_unify_HO : Goal.goal Evd.sigma -> EConstr.constr -> EConstr.constr -> Goal.goal Evd.sigma val tag_of_cpattern : cpattern -> char Some more low level functions needed to implement the full SSR language on top of the former APIs
{"url":"https://coq.inria.fr/doc/V8.11.2/api/coq/Ssrmatching_plugin/Ssrmatching/index.html","timestamp":"2024-11-03T00:41:40Z","content_type":"application/xhtml+xml","content_length":"28635","record_id":"<urn:uuid:38dddb55-fe6b-4e99-8d8a-f534242217c6>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00693.warc.gz"}
Trying to run a medication samples report w/ the same lot # but different check in dates LOT # Anoro Ellipta 62.5 mcg 1122334455 10/06/21 10 10/13/21 zzztest 25 -15 false LOT # Anoro Ellipta 62.5 mcg 1122334455 10/08/21 10 10/13/21 zzztest 25 -15 false zzztest 01/01/00 Anoro Ellipta 62.5 mcg 1122334455 25 10/07/21 zzztest true true zzztest I am trying to get an accurate count on samples but, if I change the received date the total adjusts for each individual check in date instead of the lot #?! How do I adjust to just the lot #? This is a copy of the report ran... Thank You in advanced. Best Answer • Hi @JJLewis If I'm understanding you correctly, you currently have two formulas which identify for each row the Total Received amount and the Total Distributed amount in order to give you an overall picture of what's On Hand. However this number is input as a total but on each individual line. This means that when you subtract Distributed@row from the Distribution Quantity, while the first row give you the correct answer, the second row does not take into account the first row's On Hand amount, and still only subtracts Distributed@row from the total Quantity, again. Instead of just subtracting Distributed@row, we'll want to SUM the Distributed column by the LOT Number using a SUMIF formula. It might still only return the current row if there's just one row for this Lot number, but when you have multiple rows with the same Lot number it will add them together before subtracting it from the Quantity. Try this in your On Hand column: =[Received Amount]@row - SUMIF([Lot #]:[Lot #], [Lot #]@row, Distributed:Distributed) Then in your Report you won't want to SUM the On Hand column, as each cell will give you the total eOn Hand. Does this make sense? Let me know if I've understood you question and if this works for you! Need more help? 👀 | Help and Learning Center こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions • This is the receiving sheet... Formulas used in the Distributed and On Hand columns are... =SUM(COLLECT({Distribution QTY}, {Distribution - Lot #}, [Lot #]@row)) =[Received Amount]@row - Distributed@row This is the distribution sheet... This is the report... Hope this helps... • Hi @JJLewis If I'm understanding you correctly, you currently have two formulas which identify for each row the Total Received amount and the Total Distributed amount in order to give you an overall picture of what's On Hand. However this number is input as a total but on each individual line. This means that when you subtract Distributed@row from the Distribution Quantity, while the first row give you the correct answer, the second row does not take into account the first row's On Hand amount, and still only subtracts Distributed@row from the total Quantity, again. Instead of just subtracting Distributed@row, we'll want to SUM the Distributed column by the LOT Number using a SUMIF formula. It might still only return the current row if there's just one row for this Lot number, but when you have multiple rows with the same Lot number it will add them together before subtracting it from the Quantity. Try this in your On Hand column: =[Received Amount]@row - SUMIF([Lot #]:[Lot #], [Lot #]@row, Distributed:Distributed) Then in your Report you won't want to SUM the On Hand column, as each cell will give you the total eOn Hand. Does this make sense? Let me know if I've understood you question and if this works for you! Need more help? 👀 | Help and Learning Center こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions Help Article Resources
{"url":"https://community.smartsheet.com/discussion/83873/trying-to-run-a-medication-samples-report-w-the-same-lot-but-different-check-in-dates","timestamp":"2024-11-13T21:45:50Z","content_type":"text/html","content_length":"409982","record_id":"<urn:uuid:f22bee69-a7ae-4c38-a4d2-e8d2e3d46755>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00602.warc.gz"}
Guide for Scalable Risk Assessment Methods for Pedestrians and Bicyclists < Previous Table of Contents Next > Step 6 in the scalable risk assessment process is to use the analytic method selected in Step 5 to estimate the desired exposure measure(s). All of the previous steps involve making scoping or planning decisions about how to estimate exposure. Step 6 in the process is when the detailed analysis for exposure estimation occurs, As a result, the Step 6 section in this guide is the largest and has the most content. Step 6 includes a section for each of the three primary methods that can be used to estimate exposure. • Site counts: Overview of the manual and automated counting procedures • Demand estimation models: Overview of the demand estimation models, with a particular focus on direct demand models, in estimating non-motorized exposure. • Travel surveys: Overview of the most commonly used travel surveys in estimating exposureThis section also introduces an online interactive tool that estimates exposure using national travel survey data from NHTS and ACS. Site Counts Site counts are direct measurements of the number of pedestrians or bicyclists at a defined location. The counts may be gathered automatically from various technology-based sensors or manually from human observers. Site counts are taken at a point and are typically used to represent two different scales: point and segmentFor the point scale, counts are most commonly used for intersection crosswalks. For the segment scale, the site count (taken at a point) is applied to the entire length of a street segment (where the counts are not expected to vary significantly along the segment Counts are more commonly used to estimate exposure when the desired facility coverage is limited, as count data collection for all facilities within a large network or region is cost-prohibitive (unless extensive sampling is used)In some cases, it is also a challenge to get representative pedestrian and bicyclist counts due to seasonal and day-of-week variation. This section of Step 6 provides an overview of counting procedures for pedestrians and bicyclistsIn particular, this section will highlight considerations and issues that are relevant to site counts used for exposure estimation. Analysts will find many more procedural details in the comprehensive guidance documents listed in Table 15. Table 15. Key Guidance Documents for Site Counts Guidance Document or Report Useful Resources or Application • Automatic counter technology and equipment • Systematic monitoring using combination of permanent and short-duration count sites FHWA 2016 Traffic Monitoring Guide, Chapter 4 Traffic Monitoring for Non-Motorized Traffic • Non-motorized traffic patterns • Standardized non-motorized traffic data format • Extensive guidance and interpretation on the standardized non-motorized traffic data Report FHWA-HEP-17-011, Coding Nonmotorized Station Location Information in the 2016 Traffic Monitoring format and attributes in the Traffic Monitoring Guide Guide Format, 2016 • Relevant for submitting non-motorized count data to FHWA's Travel Monitoring Analysis • Automatic counter technology and equipment Report FHWA-HEP-17-012, FHWA Bicycle-Pedestrian Count Technology Pilot Project - Summary Report, 2016 • Identifying suitable count locations • Practical lessons learned in collecting and using count data • Systematic monitoring of counts NCHRP Report 797, Guidebook on Pedestrian and Bicycle Volume Data Collection, 2014 • Automatic counter technology and equipment, calibration and validation, and technology NCHRP Web-Only Document 229, Methods and Technologies for Pedestrian and Bicycle Data Collection: Phase • Extensive and most up-to-date evaluation of automatic counter technology 2, 2016 • Automatic counter technology and equipment Report FHWA-HPL-16-026, Exploring Pedestrian Counting Procedures: A Review and Compilation of Existing • Automatic counter validation and calibration Procedures, Good Practices, and Recommendations, May 2016 • Recommended count practices for various facility types • Data management procedures (quality assurance, metadata, data analysis) Alta Planning + Design, Innovation in Bicycle and Pedestrian Counts: A Review of Emerging Technology, • Automatic counter technology and equipment 2016 • Innovative and emerging technology for non-motorized counts Typical Applications of Site Counts In some exposure analyses, site counts may serve as the only source of data for exposure estimationHowever, the collection of site counts on all street segments or at all signalized intersections within a city or region is cost-prohibitive. Therefore, the use of site counts only is most applicable to exposure analyses that are facility-specific (i.e., point or segment scale) and focus on a limited number of intersections or street segments. In most exposure analyses, site counts are collected at a small but representative sample of locations, and then an estimation model is developed and calibrated from these site counts and used to estimate pedestrian and bicyclist volumes at uncounted locations. The next major section in this chapter describes several different demand estimation models that use a sample of site counts and other street inventory and land use variables to predict pedestrian and bicyclist volumes at all locations citywide. Some cities, MPOs, and state DOTs have begun to collect pedestrian and bicyclist counts as part of a routine monitoring program, so it may not be necessary to start from the beginning for exposure estimationOne should inquire about existing count data not just with transportation agencies, but also with other groups and agencies that may share your interest in pedestrian and bicyclist • City or county parks and recreation department (e.g., on shared use paths). • National or state parks (e.g., on internal or connector paths). • Public health departments (e.g., monitoring physical activity). • Retail or business associations (e.g., on pedestrian malls or plazas). • Pedestrian and/or bicyclist advocacy groups. Even if other agencies' existing counts can be used, it is likely that additional counts will be desired for the purposes of exposure estimationIn some cities, the existing and ongoing counts are at a very limited number of locations, and may have been chosen because of high pedestrian and bicyclist usage or recent facility improvements. For purposes of exposure estimation, additional counts may be needed at high crash locations or at a broader range of locations that represent a mix of facility types and land uses (for purposes of estimation model development and calibration). Calculating AADT from Site Counts Site counts can be conducted for varying durations (e.g., 12 hours, 48 hours, 7 days, etc.) and at different times of the year, but the final count measure most commonly used in both motorized and non-motorized exposure estimation is AADT. In these cases, AADT is applied to a defined segment, and is then multiplied by the defined segment length to calculate average annual daily PMT or BMTThe same process can be used at intersection crossings, whereby the crossing length is used to calculate PMT or BMT at each crossingEarlier in this guide, Step 4 described a simple example of this calculation process. The estimation of AADT from a short duration site count is a recommended practice for pedestrian and bicyclist exposure estimation. AADT estimates are typically made using what are called factor adjustments (described in detail in Section 4.4 of FHWA's Traffic Monitoring Guide). Aside from being a widely accepted practice in motor vehicle analysis, there are several reasons why AADT values should be estimated and used in exposure analysis. Pedestrian and bicyclist counts can vary dramatically by month of the year, day of the week, and by prevailing weather conditionsSite counts that were collected for a single day during favorable weather are not a representative sample of all other days during the year. Therefore, in simple terms, the factor adjustment process is used to adjust the count samples to represent more accurately the true annual average pedestrian and bicyclist usage. The factor adjustment process for AADT estimation requires continuous count data from similar location(s) to scale accurately a short duration count to represent an annual average countHowever, some cities or regions may not have the required continuous count data for factor adjustment. At the time of this Guide development, many agencies are still working to implement the factor adjustment processSeveral organizations are currently working to develop default seasonal adjustment factors based on climate zones.^ If factor adjustment and AADT estimation is not feasible for your application, every effort should be made to collect site counts during seasons/months, days of the week, and weather conditions that most closely resemble typical conditions during the year. Collecting Counts Specifically for Exposure Estimation The following paragraphs describe key considerations when collecting counts specifically for exposure estimation. Many of these considerations are addressed in more detail in other data collection guides, but are highlighted here to emphasize their importance. • Use automated counter equipment as much as possible: Automated counter equipment allows counts to be conducted for longer periods (multiple days), which reduces error in AADT estimates. Automated counter equipment can also reduce the labor cost of data collection. The resources listed at the beginning of this Site Counts section provide detailed guidance on selecting the appropriate technologies for automated counting. • Avoid very short duration counts (i.e., two-hour counts): The FHWA Traffic Monitoring Guide recommends a minimum duration of seven days for short duration countsIf this duration is not cost-feasible at all of the desired locations, then at least one to two twelve-hour periods should be counted. Two-hour counts should be avoided as much as possible, even it if means reducing the number of count locations to allow for a six-hour or twelve-hour countSeveral research efforts have documented the high error rates that result from estimating AADT from two-hour counts. • Seek balance between number of count locations and duration: As implied in the previous bullet, it may be necessary to balance the number of count locations with the duration of each count. This may be necessary to avoid very short duration counts yet still collect data at a representative number of count locationsIn these cases, one is balancing the temporal error (from sampling short durations of time during the year) with the spatial error (from sampling very few locations of all locations to be analyzed). At this time, achieving a balance between number of count locations and count duration is considered more of an art form and not science. • Select representative months and days of week for your area and count location: Site counts should be collected during typical or normal seasons/days/times, especially if adjustment factors are not feasible to use for estimating AADT valuesEven if adjustment factors are planned for use, site counts should be collected during months/days/times that are considered typical or normal conditionsThis helps to reduce the magnitude of the factor adjustment, and ultimately, reduce the error associated with AADT estimates. • Focus on balance of high-priority yet representative locations: If site counts must be collected specifically for exposure estimation, the count collection effort should focus on a balance of high-priority yet representative locations. High priority locations for exposure analysis are likely to be those locations that have a high crash frequency. However, if an estimation model is to be developed, site counts at high-crash locations are likely to be a biased input for model developmentThe result is an estimation model that only predicts counts accurately at high-crash locations. Therefore, high-crash locations should be balanced with other locations that represent a range of facility types and land use patterns. Note that this balance is considered a more of an art form and not science at this time. Demand Estimation Models As described in Step 5, there are several estimation models to estimate pedestrian and bicyclist demand for input to an exposure measureIn this section of Step 6, we provide a detailed overview and key considerations in developing direct demand models since they are the most widely used tools in the literature for pedestrian and bicyclist volume estimation modeling. Specifically, detailed overview of direct demand models is presented first, and then step-by-step instructions are provided (with examples) to develop a direct demand model. Detailed Overview of Direct Demand Models Direct demand models are the most widely used tools in the literature for pedestrian and bicyclist volume estimation modeling (especially in supporting traffic safety studies). These models have been primarily used to develop facility-specific demand estimations for the local level of community, project, and facility planning and to evaluate and prioritize projects. The potential usages of direct demand models are listed as follows by Kuzmyak et. al.l (2014): • Answer questions about facility use or needs that could not be addressed with traditional trip-based regional models because of limitations related to scale and ad hoc treatment of non-motorized • Address the need for estimates of walk activity on links and at intersections for safety analysis and design. • Address the need for estimates of bicycle activity to support questions on bike network design and to support decisions on facility needs. • Provide a better connection between the context of the given built environment and non-motorized travel behavior and demand The FHWA has sponsored a Non-Motorized Travel Analysis Toolkit, which includes various applications to support non-motorized transportation planning and modeling. This Toolkit includes several direct demand models to estimate pedestrian and bicycle volumes (http://www.nmtk.pedbikeinfo.org/ui/#/). Direct demand models have also been identified as the primary tools to measure bicyclist and pedestrian exposure for safety analysis. Direct demand models are generally based on different versions of regression modeling to explain “demand levels as recorded in counts as a function of measured characteristics of the adjacent environment” (Kuzmyak et. al. 2014). As indicated by Munira and Sener (2017), “the concept of using a direct-demand model to estimate non-motorized activity is not newStudies dating back 50 years have forecast non-motorized traffic using count and spatial data.” Schmiedeskamp and Zhao (2016) explained such models as following “a similar approach of first proposing a set of explanatory variables, fitting some form of regression model, and then interpreting and justifying the results according to the guiding theory.” Direct demand models are based on variety of data sources such as activity counts, census population and employment characteristics, land use and topography and transportation network characteristics. Direct demand models are appealing due to their simplicity and convenience in development and application, and since they are generally based on available dataThese models are particularly useful for screening and preliminary analyses especially when the resources are limited and a more comprehensive (and relatively expensive) model is not available or not possible to developHowever, direct demand models are limited in terms of capturing the behavioral structureIn addition, they are usually not transferable due to their strong linkage to local context, activity levels and the characteristics that the models are built on Aoun et. a. l(2015) summarized the advantages and disadvantages of direct demand models as in Table 16. Table 16. Advantages and Disadvantages of Direct Demand Models Advantages Disadvantages • Software requirements are usually limited to spreadsheets or standard statistical software packages. • They do not take into account individual trip choices and factors. • Can be created largely using existing data. • Activity level (count) data is costly to collect, depending on geographic scale. • Most necessary data is typically publicly available and can be found at a variety of geographic levels. • They may inaccurately correlate activity levels with adjacent land uses. • Network connectivity can be estimated, but requires additional time/ resources to quantify. • Validity between datasets may not be satisfactory. • Datasets typically used (i.eU.SCensus Data) are not frequently updated. Source: (adapted from) Aoun et. al. 2015 Kuzmyak et. al. (2014) highlighted the need to be judicious in the development and application of direct demand models, and suggested the following guidelines: • The models need to be developed from scratch for each study, and well calibrated to existing conditions with the specific area and on the specific facilities under studyModels developed for a specific area cannot be construed as transferable. • Uncertainties developed due to unaccounted origin-destination, route choice, and trip purpose data may be narrowed down by developing counts and models focused on a specific time period. • After model calibration, the reliability of the models to predict volume in individual locations and the overall study area should be tested. • Need to be judicious in the types of applications or decisions to be supported by the models. The readers are referred to Munira and Sener (2017) for an in-depth review of the available literature associated with direct-demand modeling to estimate bicycle and pedestrian activity. Development of a Direct Demand Model This section provides step-by-step instructions to develop a direct demand model. The generalized approach to develop a direct demand model includes three primary phases as demonstrated in Figure 17 In addition, Figure 18 provides a flowchart of an algorithm to help the analysts walk through the tasks needed to complete the phases when developing a direct demand model Below, we provide detailed instructions on each phase, and the corresponding tasks involved in each phase as consistent with the flowchart provided in Figure 18Specifically, the main objective and primary tasks of each phase are presented first, followed by discussion of key considerations when processing the phase. Phase A: Study Identification Objective: The main objective of Phase A is to identify the study focus. Primary Tasks: Phase A includes three primary tasks: • Task 1: Identify study area. • Task 2: Determine facility locations in the identified study area. • Task 3: Determine variable of interest (i.edependent variable) of the direct demand model. Processing the Phase: Task 1: Phase A starts with identification of the overall study area (or target population) for which the direct demand model is desired to be developedWhen executing the task, it is of utmost importance to identify project objectives to avoid any unnecessary process, optimize the limited resources and limit the bias. Task 2: The next task is to determine the facility locations at which the exposure measure is desiredExample of facility locations include signalized and unsignalized intersections or midblock locations along the street segmentsStep 3 of this Guide provides information on how to determine desired geographic scale(s) for exposure. Task 3: Once the location-specific details of the model are identified, the analyst needs to determine the variable of interest, i.ethe model output or outcome whose variation is being examinedFor example, in the context of non-motorized direct demand models, the analyst might be interested in obtaining annual average pedestrian volume and peak hour bicycle traffic volumeIn statistical terms, the variable of interest is named as the dependent variable of the model. Phase B: Data Preparation Objective: The main objective of Phase B is to prepare the data needed for model development. Primary Tasks: Phase B includes five primary tasks: • Task 1: Identify and compile data needed for the dependent variable of the model. • Task 2: Process the data and perform data quality checks for the dependent variable. • Task 3: Identify and compile candidate explanatory (i.eindependent) variables of the model. • Task 4: Process the data and perform data quality checks for the independent variables • Task 5: Combine the datasets for dependent and independent variables. Processing the Phase: Task 1: The first task in this phase is to compile data needed for the dependent variable of the modelAs aforementioned in the Site Counts part of Step 6 of this Guide, site counts serve as the common source of data for exposure estimation especially when the desired facility coverage is limited and data collection for all facilities within a large network or region is cost-prohibitiveTherefore, site counts are the main ingredient used for creating the dependent variable of the model Sampling: In the ideal conditions, it is desired to have site counts at all facility locations across the study area, but this is not feasible given the limited amount of resources (i.ebudget, time, equipment and manpower constraints)Therefore, the analyst needs to develop a sampling strategy to select sites from which data will be collectedThere are two different types of sampling technique for data collection: non-probabilistic sampling technique and probabilistic sampling technique • Non-probabilistic sampling techniques are the most commonly used techniques due to their low cost, easy-to-implement methodologyIn this technique, the analyst identifies some subjective criteria (e.gconvenience, engineering judgment, local knowledge, quota, etc.) and collects data based on those criteriaThis technique does not allow the analyst to control for sampling error, and usually a small number of samples are collectedIf the analyst needs to draw conclusions about the entire population, this technique is not recommended since the sites collected might not be • Probabilistic sampling techniques are based on statistical approaches, involves random selection process, and therefore allows the analyst to compute a sample size and to draw conclusions about the entire populationIn general, this technique requires more resources because of the increased sample size needed to obtain a representative sampleThere are various different methodologies that can be selected in applying probabilistic sampling techniques including simple random sampling, strategic sampling, cluster analysis and multi-stage random sampling. Table 17 is adapted from Greene-Roesel et. al. 2010 and provides summary information on each of these sampling techniques Table 17Sampling Techniques Sampling Methodology Definition Example Advantage Disadvantage Obtaining a sample of people or units that are most Selecting intersections with available Low Cost; Easy method of sample No representative sample; Not Convenience convenient to study collision data design recommended for descriptive or casual Selecting a sample based on individual judgment about Selecting signalized intersections Low cost; Allow to draw some Judgement the desirable characteristics required of the sampling because of experience or intuition that conclusions about the Does not allow drawing general units. they have higher pedestrian flow. characteristics of the selected conclusions about the entire population. Non-probabilistic sample. method Making sure to select some signalized Low cost; Allow to draw some Does not allow drawing general Quota It is similar to the judgment sample, but requires that and some unsignalized intersections in conclusions about the conclusions about the entire population, the various subgroups in a population are represented. a sample. characteristics of the selected or sample subgroups. Additional survey respondents are obtained from Used when surveying individuals about Some characteristics about the Requires a lot of time and resources; Snowball information provided by the initial sample of their behaviors (e.ghow much they walk target population can be known Used only for surveys. respondents. in specific areas) Simple A sampling procedure that ensures each element in the Subgroups within the target population random population will have an equal chance of being included When there are enough resources; to may not be represented in the sample; in the sample inquire about the characteristics of Simple; Conclusions about the Larger samples are necessary. Systematic Samples are randomly selected from a list in order, but the entire population population can be drawn. The sample may not be representative random not everyone has an equal chance of being selected. because of the ordering of the original Sub-samples are drawn within different strataEach More efficient sample (variance May be difficult to determine stratum is composed of samples with similar When representation of all subgroups differs between the strata); characteristics of individuals to Stratified characteristics (e.gtaking into account similarity of within a particular sample is Small sampling error between appropriate classify them in specific Probabilistic intersection characteristics – signalized or necessary. strata; Smaller samples. strata. Entire groups, not individuals, are selected to Sample may not be as representative as participate in the data collection; Simple random Efficient for large numbersDo desired; Error may be greater than with Cluster sampling is applied to the representative “clusters” to When the population is too big or when not need to identify all other techniques; Pilot studies may be select the clusters in which all members will there is a lack of information about unitsSmaller samples; Less necessary to identify the clusters. participate. individual sampling units (e.gall expensive relative to the Multi Stage Stratification techniques within the clusters used to vehicle occupants in the United States) population size. Like cluster sampling but more Random refine and improve the sampleExamples of this kind of representative within clusters. sampling: National Safety Belt Survey. Source: (adapted from) Greene-Roesel et. al. 2010 Sampling size: Similar to the selection of sampling technique, there are various considerations in determining the sample sizeWhen selecting the sample size, it is recommended that the analyst first identifies what is available, and if the data can be obtained by adjusting/combining readily available data or modifying the existing data collection system. This will help to develop an effective and practical approach in sampling and determination of sample size neededIn addition, it is of utmost importance to continuously evaluate the study objectives to effectively use the existing The following provides resource examples in determining sample size, which might be helpful in making decisions about the sampling size and technique (adapted from Greene-Roesel et. al. 2010): • Evaluate change over time: The analyst might be interested in understanding the change in pedestrian traffic volume at a particular facility location over timeFor example, if the only focus of a study is to conduct a before and after evaluation of one particular intersection in the region, then there is no need to draw information about the general populationIn such cases, non-probabilistic sampling techniques (e.gprofessional judgement) are commonly usedThe sampling focus should be given to collect representative data at that particular intersection taking into account potential biases regarding the time of data collection (e.gseasonal changes) • Evaluate risk related to infrastructure type: The analyst might be interested in comparing the pedestrian safety between different facility locations, such as signalized intersections versus unsignalized intersections in a cityThis will require one to collect two random samples to determine the pedestrian exposure at signalized intersections and pedestrian exposure at unsignalized intersections, respectivelySimple random sampling technique might be appropriate and are easy to apply assuming that pedestrian exposure will be similar across similar sites (i.eminimal variance across the selected sample) and the complete list of targeted intersections are availableIn this case, the following formulae can be used to compute an approximate value of each sample size (Garder 2004): where n is the sample size, z is the z-value determined based on the desired level of confidence, CV is the coefficient of variation, and ME is the margin of error. For example, assuming a 95 percent confidence interval (z=1.96), with low variation (e.g., 10 percent), and with acceptable margin of error (e.g., 5 percent), the minimum sample size is computed as 16For the example described above, this will yield to about 32 intersections with 16 of them signalized and 16 of them unsignalizedIt is also important to reexamine the coefficient of variation during data collection, and re-compute the sample size as needed. • Sampling exposure in geographic area: The analyst might be interested in determining exposure across an area for example to assess pedestrian risk in a cityIn this case, a probabilistic sampling technique is needed since the analyst wants to draw information across the entire area and in need of representative sample of pedestrian volume at different facility types and locations across the area. The analyst can use different probabilistic approachesFor instance, the analyst can apply stratified sampling by choosing stratification variables and their corresponding categoriesLet's assume the analyst identified intersection type (with 2 categories - signalized versus unsignalized) and geographic area (with four categories - CBD, urban, suburban, and rural)Then, based on the above assumptions, the minimum sample size needed can be computed as 128 (16x2x4)This number may also need to be proportionally adjusted based on the shares of each stratum in the region. While stratified sampling is an effective method in obtaining observations with different levels for the variables used, the analyst may need to increase the number of strata for better representation, which will eventually increase the sample size neededIn that case, the analyst might consider other probabilistic sampling techniquesFor example, cluster sampling can be used by classifying all the intersections into different clusters with similar characteristicsCluster analysis helps control sample size while adding more variables, but might not be as representative as stratified samplingFinally, the analyst can choose to combine clustering and stratification to obtain more representative samples within the clusters as in the case of multi-stage random Task 2: Once the site count data are sampled and compiled, the second task of this phase includes processing the data (e.gfactor adjustment) and conducting data quality checks (for a representative sample) needed. Site counts can be conducted for various different durations and at different times of the yearWhile some studies have used the count data directly (i.e., for the specific collection period) as their dependent variable of the models, other studies have processed and expanded the count data to longer time periodsDirect demand models typically require high-quality volume count information that might be supplemented/validated with travel surveys (such as to account for demographics and trip generators)The analysts are referred to Site Counts part of Step 6 of this Guide, which provides detailed information on site counts and key considerations in processing and obtaining a representative sample of site counts to be used as the dependent variable of the modelThis process completes the preparation of the model's dependent variable. Task 3: Next, the analyst needs to focus on the preparation of explanatory variables of the modelExplanatory variables represent the cause or reason for the outcomeIn statistical terms, explanatory variables are also named as the independent variables of the model whose relationships with the dependent variable are being examinedAt this stage, the analyst needs to first identify candidate independent variables to be considered in the model, and then compile the data needed for the identified variablesIt is likely that the analyst might have an initial (desired) set of independent variables based on the local knowledge, professional judgement, data availability, practicality in usage, etcHowever, it is also likely that the analyst might not have any preference or knowledge on the candidate independent variables to be considered in the modelIt is important to always keep in mind the goals of developing the model when selecting variablesThe final explanatory variables of the model should be composed of variables that are intuitive, logical and relevant to the action items in the decision making process. Table 18 provides an overview of the key significant variables used across the studies (based on the extensive literature review conducted by Munira and Sener 2017)The analyst is recommended to review the variables in Table 18 before making a final selection of candidate independent variablesThe table provides information on frequency (i.e., use in the model) and impact (i.e., direction of the variable). The model variables showed some differences based on the mode (i.e., pedestrian model versus bicycle model) and the analysis methodWhile choosing model variables, it is important to consider the context specific nature of explanatory variables of the direct demand modelsAs indicated by Munira and Sener (2017), “choice of independent variables and their magnitude and direction of impact on non-motorized activity largely depend on community, people, and location”For example, while the availability of sidewalks and land use characteristics might be more influential in motivating walking trips, cycling trips might be more likely to be influenced by various factors across spatial areas beyond the trip origin (Munira and Sener 2017; Winters et. al. 2010). Table 18Key Explanatory Variables of Pedestrian and Bicyclist Direct Demand Models Category Variable Pedestrian Bicycle Frequency Impact Frequency Impact Population density + + Total population + + Demographic % of non-white residents + + % of black residents − − % residents with a college education + + % residents younger than 5 and older than 65 years + Household income − +/− Socioeconomic Total employment + + Employment density +/− + Number of lanes + +/− Speed limit − Network/ Arterial street (of count location) + + interaction with vehicle traffic % major arterials − Collector street (of count location) + Presence of four-way intersection + Presence of bike lane + + Presence of sidewalk + Footway pavement width + On-street bicycle facility length + Bicycle- or pedestrian-specific infrastructure Presence of a cycle track + Bicycle-trail access + Bike lane or curb lane width + Separated path + Presence of bicycle markings on any approach + Number of bus/transit stops + + Transit facilities Presence of subway station + + Bus frequency + Accessibility to an underground station + Distance from the central business district/downtown − − Major generators Proximity to a university campus + + Number of schools + + Precipitation − − Weather and environmental Temperature − + Very warm temperature (maxtemperature >32°C) − Residential land use +/− − Land-use mix (area of retail, office, and commercial space per housing unit) + + Retail area +/− + Office space area + Industrial area − − Cultural and entertainment space area + Job accessibility + Dwell count + Commercial space + + Maximum/mean slope − − Land use Traffic signal-controlled intersection + Patch richness density + Single-family residential areas − Average visibility within the street network + Tourist and downtown area + Job accessibility + Centrality + Low-density residential space + Institutional space + Presence of three approaches − Presence of parking entrance − Source: Based on the literature review of 22 studies conducted by Munira and Sener 2017. Task 4: Upon identification and compilation of independent variables, the next task includes processing the data and conducting data quality checks neededSeveral different specifications and alternative functional forms of independent variables might need to be considered to identify the best data fit during the development of direct demand modelsFor example, while some variables may need to be considered in categorical or binary forms, some other might need to be used as continuous variables (e.gdevelopment of income categories versus income as a continuous variable)Similarly, some independent variables might work best if they are transformed into other scales (e.gnatural logarithmic scale), which might be helpful for an easier interpretation of model variables as well as better data fit. In addition, all independent variables should be carefully assessed and statistical tests should be performed to ensure the database compiled for independent variables is logical and free of error (e.gidentification of missing values and outliers)This process completes the preparation of the model's candidate independent variables. Task 5: The final task of the data preparation phase is to combine datasets prepared for dependent and independent variables to obtain one final dataset to be used in model development. Phase C: Model Development Objective: The main objective of Phase C is to develop the model based on the data prepared. Primary Tasks: Phase C includes four key tasks: • Task 1: Identify statistical method to be adopted for model development. • Task 2: Perform statistical checks to identify independent variables to be tested in the model. • Task 3: Estimate the model. • Task 4: Perform model validation Processing the Phase: Task 1: The first task in this phase is to identify the statistical method to describe the relationship between dependent variable and independent variablesA wide variety of methods have been used in predicting non-motorized activity using direct-demand modelsLinear regression, Poisson regression, and negative binomial regression models are among the most commonly used statistical methods used in direct demand models for bicycle and pedestrian exposure estimationIn order to select the best model to the data, the analyst needs to examine the nature of the dataFor example, Poisson distribution assumes that the mean and variance are the same; however, we often found that count data exhibits over-dispersion with a variance greater than the meanIn that case, it has been shown by many studies that negative binomial provides a better data fit Task 2: In this second task, the analyst needs to screen the pre-identified model variables and the relationships between variablesThe following describes some of the key considerations in evaluating the variables. • Compute correlations across variables, and screen all variables for high and low correlation values • Identify independent variables that are relatively highly correlated with the dependent variable • Check for multicollinearity to avoid including highly correlated independent variables simultaneously in the same model. Task 3: Once the statistical method is identified and statistical relationships across variables are initially screened, the analysts can start working on the model estimationIn this task, the analyst may need to conduct several model iterations and different combinations of explanatory variables to find the best model structureIt is recommended that the final model is selected based on both statistical and intuitive considerations (e.gstatistical significance, goodness of fit, insights obtained from the literature, practicality, and engineering judgment)The model performance should be evaluated through statistical checks, such as by overall goodness of fit, residual plots, etcA 5 percent level of significance is recommended in general to include variables in the final model; however, the analyst is also recommended to check logical and intuitive variables that might not be very highly statistically significant to reduce any bias. Task 4: It is important to validate the model, which is defined as the application of the models and comparison of the results to observed data that was not used to estimate the model It is required that the observed data used for model validation are not the same data used for model estimationModel sensitivity tests will also be useful to determine if the model results are reasonable and sensitive to the changes in explanatory variables. The analyst is recommended to check the process from the selection of statistical method to the identification and examination of variables included in the model until a good model performance is obtainedAs/if needed, the model should be re-specifiedThis process helps develop more robust models with good fit and intuitive explanatory variables that would be useful for both evaluating risk and informing safety policy and investment decisions. The completion of this task concludes the development of the modelThe exposure output obtained from the model can then be used in risk analysisThe direct demand models can also be used in predicting volumes at locations where the count data are not available, extending the study to an areawide level in the application process. Examples of Direct Demand Models Table 19 provides an overview of example direct demand models from the recent literature. The table provides information on the coverage, data collection scale, analysis methods, and significant explanatory variables of the final estimated modelsNext, we provide example studies that have developed and applied direct demand models for exposure estimation in non-motorized safety analysis. Table 19Examples of Recent Direct-Demand Models to Estimate Pedestrian and Bicycle Volumes Author Coverage Data Collection Scale Analysis Significant Explanatory Variables (Buffer Size) Model Performance and (Date) Methods Pedestrian Bicyclist Validation Bicycle Model: Pedestrian and Stepwise Pedestrian Model: Hankey Blacksburg, bicyclist counts at linear Sidewalk length; off-street trail length; household income; Household income; centrality; population density; Adj-R^2=0.71 et. al. VA 101 locations on regression residential addresses count in buffer; population density; on-street facility length; major roads length (2017) different street and model bus stop count in buffer Validated by goodness of trail segments fit, internal validation, and a Monte Carlo–based 20% holdout analysis Bicycle Model: Pedestrian and Off-street trails (200 m); on-street facilities (100 Hankey bicyclist counts at Stepwise Major roads (200 m); off-street trails (3000 m); transit m); retail areas (100 m); industrial areas (1250 m); Pedestrian Model: and Minneapolis, 471 locations on linear stops (400 m); retail areas (100 m); industrial areas (1250 open space areas (200 m); job accessibility; Adj-R^2=0.53 Lindsey MN different street and regression m); open space areas (100 m); job accessibility; population population density (1250 m); precipitation; (2016) trail segments model density (750 m) temperature Internal validation and Monte Carlo–based 10% holdout analysis Fagnant Negative Employment density; bicycle-trail access; bridges; and Bicycle counts at 251 binomial number of lanes; curb-lane width; bike-lane width; Kockelman Seattle, WA intersections and Poisson Not reported separated paths; speed limit; residential areas; Not reported (2016) models morning period count; League of American Bicyclists gold (bicycle-friendly community listings) Multiple linear regression model Bicycle Model: Tabeshian Pedestrian and bicycle Multiple Number of bus stops (0.1 mi); street length (0.5 mi); total Hectares of commercial space (0.10 mi); hectares of and Calgary, counts at 34 linear and bus-km of bus routes (0.75 mi); total number of dwell count low-density residential space (0.10 mi); number of Pedestrian Model: Kattan Canada intersections located Poisson (0.5 mi); hectares of commercial space (0.25 mi); number of bus stops (0.25 mi); hectares of institutional space (2014) on major arterials models schools (0.5 mi); pathway length (0.25 mi) (0.50 mi); number of street lanes reaching Adj-R^2=0.92 (excluding downtown) intersection Validation based on prediction models of 18 intersections in southwest Calgary Island of Bicycle activity Number of employment (400 m); presence of schools Strauss Montreal, counts at 647 Bayesian (400 m); presence of subway stations (800 m); Not reported et. al. Quebec, signalized model Not reported land-use mix (800 m); length of bicycle facilities (2013) Canada intersections (800 m); commercial land-use area (50 m); presence of three approaches Adj-R^2 values between Number of households (0.25 mi); total employment (0.25 mi); 0.78 and 0.83 Schneider San Count of pedestrians intersection is in a high-activity zone; maximum slope on any et. al. Francisco, who crossed each leg Log-linear intersection approach leg; intersection is within 0.25 mi of Validated against 2002 (2012) CA of the 50 model a university campus; intersection is controlled by a traffic pedestrian volume at intersections signal other 49 four-way Bicycle Model: Adj-R^2=0.38 (OLS); Cox–Snell R^2=0.48 (NB) Pedestrian and Ordinary % of non-white residents; % residents with a college bicyclist counts at least education; distance from the central business district (CBD); % of non-white residents; % residents with a college Pedestrian Model: Hankey Minneapolis, 259 locations, squares and distance from nearest body of water; recorded precipitation; education; median household income; measure of mixing Adj-R^2=0.30 (OLS); et. al. MN midblock portion of negative principal arterial street (of count location); arterial of land uses; distance from the CBD; recorded Cox–Snell R^2=0.42 (NB) (2012) each street or binomial street (of count location); and collector street (of count precipitation; off-street trail (of count location); sidewalk segment models location) arterial street (of count location) and year Validates models based on predicted non-motorized traffic at 85 locations (46 new and 39 previously sampled locations) Source: Adapted from Munira and Sener (2017) Detailed Example This section provides an example study with details to help analysts follow the process described aboveThe example is based on a study conducted by Schneider et. al. (2012), and presents a typical example for direct demand model development for non-motorized exposure estimation. Development of Pedestrian Intersection Volume Model in San Francisco, California Phase A – Study Identification: Schneider et. al. (2012) developed a pedestrian intersection volume model in San Francisco, California, focusing on annual pedestrian crossing intersection volume Phase B – Data Preparation: The authors identified 50 intersections to collect a sample of counts for the San Francisco pedestrian intersection volume modelThe intersections were selected to represent the range or urban characteristics across the cityThe data collection was conducted at different time periods (2009 and 2010)The authors aimed at increasing the geographic representation of locations across the study area by collecting data at various locations, e.ghigh-crash locations, regional count locations, locations near planned or completed projects, locations near key transit hubs, etcNext, the authors applied automated counter, temporal, and weather adjustment factors to extrapolate an annual pedestrian volume estimate from the two-hour counts at the 50 study intersectionsThe logarithm of the annual pedestrian crossing volume constituted the dependent variableFor independent variables, they considered 16 explanatory variables (e.gtotal number of households within 0.25 mile of the intersection without a car, ratio of population to jobs within 0.25 mile of the intersection, intersection in a high-activity zone, etc.)The authors examined descriptive statistics for all variables (i.emean, standard deviation, minimum and maximum). Phase C – Model Development: Next, the authors developed a log-linear regression model to identify the relationship between annual pedestrian volume estimate and various different explanatory variables including land use, transportation system, local environment, and socioeconomic characteristics near each sampled intersectionAfter conducting various model runs, the authors identified 12 potential models of annual volumes of pedestrian intersection crossingsThe variables with high level of correlation, without precise estimates, and with counterintuitive relationships with pedestrian volume were excluded from the modelThe 12 potential models were indicated to have good fits (adjusted R2-values between .78 and .83) and were significantly better than a model based only on a constant with no independent variables (F-values between 28.4 and 34.4)The final recommended model was selected based on a combination of good overall fit and intuitive, logical and practical explanatory variablesThe model performance was evaluated by spatially reviewing the difference between predicted and observed countsSensitivity tests were conducted, and the model was validated against pedestrian volumes collected at 49 four-way intersections (different than the intersections used in the model estimation) in 2002, which showed that the model ranked intersections similarly to the previous counts in overall volumeThe authors indicated the model included only six significant factors because of the relatively small sample of intersections, and highlighted the importance of various other variables that might need to be considered in future studies. The model was then used to evaluate pedestrian crossing risk at each intersection based on the exposure measure of the number of pedestrian crashes per 10 million crossings Additional Examples This section provides additional examples that are briefly described to present different thought process during the development of direct demand models for non-motorized safety analysis • Molino et. al. (2009; 2012) developed a log-linear regression model (with Poisson distribution) to estimate pedestrian counts at signalized intersections in Washington, D.CWhile 15-min pedestrian counts served as the dependent variable, the independent variables of the model included land use variables (e.g., commercial, residential) and characteristics of the day (e.g., day of the week, time of the day)Using the parameter estimates of the model and follow-up adjustment procedures, a total number of miles traveled were estimated “…by multiplying the total number of pedestrians by the mean width of all the sampled signalized intersections.” This result was then used as an exposure measure in pedestrian crash rate computation. • Using a count database of 954 observations and 471 locations, Hankey and Lindsey (2016) employed a stepwise linear regression model that allowed for varying spatial scale of independent variables including land use and transportation network variablesRelying on the modeled values of bicycle traffic from this work, Wang et. al. (2016) then estimated peak-hour bicycle traffic volumes for many segments in Minneapolis. The model results were then converted to bicycling volume for intersections and used for computing bicycle crash rates by intersections and segments. • Strauss et. al. (2013; 2014) used a relatively improved version of modeling to estimate non-motorized demand. First, Strauss et. al. (2013) developed a bivariate Bayesian Poisson model to simultaneously estimate cyclists' injury occurrence and bicycle activity at 647 signalized intersections on the island of Montreal, Quebec, CanadaIn a follow-up study, Strauss et. al. (2014) applied their Bayesian modeling methodology as part of a multimodal approach aimed at examining the safety at intersections for both non-motorized and motorized traffic. After model calibration, the study compared injury and risk between modes and intersections by using the “expected number of injuries (obtained from the models) per million cyclists, pedestrians or motor-vehicle occupants per year” as the expected risk. Travel Surveys This section of the guide provides an overview of the most commonly used travel surveys in estimating exposure including the ACS, NHTS, and regional household travel surveys. This section includes information on the general purpose of the survey and applicability in estimating exposure for bicyclists and pedestrians, limitations and benefits, and data availability. American Community Survey (ACS) The ACS is an important tool for tracking non-motorized (bicycling and walking) travel patternsThis national ongoing survey of a sample of U.Shouseholds conducted by the U.SCensus Bureau gathers a wide variety of information in addition to their primary travel mode from home to work.^ The ACS does not have trip information for non-commute trips (whereas NHTS does but only conducted once a decade)Thus, the ACS can be used to estimate non-motorized exposure expressed as the daily person commute trips by walking or bicycling per the specified areawide geography. Benefits and Limitations In many cases, non-motorized trips are secondary modes of travel to the longest distance mode (driving or transit). However, it provides one of the most robust sources of information on non-motorized commuting by bicycle and walking in smaller spatial units like Census block groups Table 20 summarizes the strengths and limitations of the ACS in estimating non-motorized trips. All survey and census estimates incorporate errors to a certain extentSampling error is the quality measure that can adversely affect any survey resultSampling error usually occurs when data are based on a sample of a population rather than the full populationFor every ACS estimate, margins of error are provided and can be easily converted into confidence intervals.^ For different calculations associated with the ACS, it is important to consider sampling error. Table 20. Strengths and Limitations of the ACS in Estimating Pedestrian and Bicyclist Travel Strengths Limitations • Deliver useful, relevant data, similar to data from previous census long forms, and updated every • Only home-to-work commute trips. year instead of every 10 years. • Does not capture trips by children or most trips by older adults • ACS is more accurate than the decennial census long form sample. • Requires sound statistical knowledge to understand and use multi-year estimates. • ACS item allocation rates are lower, and non-sampling error is reduced • Relatively large confidence intervals associated with ACS data for smaller geographic areas and subgroups of the population. Data Availability The ACS provides estimates for different levels: a) 1-year estimates, b) 3-year estimates, and c) 5-year estimatesA key U.SCensus document^ lists the distinguishing features of 1-year, 3-year, and 5-year estimatesIt is important to note that using 3-year or 5-year ACS is beneficial due to large sample size relative to 1-year estimates, thus reducing margins of error of estimates for small subpopulationsFor analyzing areas with larger population (e.g., states, congressional districts), 1-year ACS is beneficial.^ For spatial units with smaller populations, the ACS samples may have insufficient numbers of households to provide reliable single-year estimatesFor these spatial units, multiple years (3 or 5) worth of data will be merged together to create more reliable estimatesThe multi-year estimates have advantages of statistical reliability for less populated areas and small population subgroups. The level of precision improves considerably with multi-year estimates. National Household Travel Survey (NHTS) FHWA conducts periodically the NHTS to gather detailed information on the travel behavior of the American public. The survey collects a wide array data from respondents, including household characteristics, demographics of each member in the households, vehicle details, and trip attributes (mode used, trip length, trip time, trip purpose). These data are stored in separate files: household, person, vehicle, and travel day (i.e., travel diary). The basic concept of using travel surveys to calculate the amount travel for an area is based on developing estimated statistics developed from the survey sample and expanding those estimates to the population by using weights (for example, see Schneider et. al. .^). Analysts can use the NHTS to estimate the following exposure measures for pedestrian and bicyclist travel: • Total estimated annual trips • Total estimated annual miles traveled • Total estimated annual hours traveled Benefits and Limitations The 2009 NHTS national sample estimates are statistically valid down to the state level. However, if additional add-on samples were purchases by a particular state or MPO, then estimates in those areas may be valid at a smaller geography depending on the methodology used by the analyst. Keep in mind that the NHTS documentation warns that standard errors or margins or error should generally be used when looking at estimates at geographies smaller than the national level. While providing a rich national sample, the NHTS sample sizes might have sparse coverage at small geographic scales. Transferability of the NHTS results to small geographic area (e.g., census tracts) is limited to estimates of average weekday household person trips, vehicle trips, person-miles traveled, and VMT. Though these estimates could serve as exposure measures for non-motorized travel risk, they are not the best choice since specific mode of travel is not offered or is vehicle-based. Unfortunately, since the NHTS is not conducted on a more frequent and regular basis, it cannot be used to directly track short-term trends in non-motorized travel exposure. It can, however, be used in sketch planning or travel demand modeling efforts to estimate or predict non-motorized exposure based on more current census demographic information. For example, the generalized daily person trips per person by mode, generated from the NHTS, can be used to estimate the total non-motorized trips produced in subsequent years by using current ACS population estimates. These trip rates need to be updated periodically, and if possible, supplemented by more localized travel data to better reflect local nuances and unique characteristics of the transportation infrastructure and traveling In terms of sketch planning, it is possible to apply generalized person trip rates that were produced to represent the statewide population to local areas. This is done by multiplying the person type trip rate by the total number of corresponding population within the local area. This operation has statistical drawbacks since the generalized person trips rates were produced by using a statewide sample, which is likely to not be statistically representative of the local area population. It is always best to use local data for local purposes; however, the NHTS provides an opportunity to estimate local exposure when local data do not exist. Data Availability A NHTS was conducted for 1969, 1977, 1983, 1990, 1995, 2001, and 2009. Most recently, a 2017 NHTS that began in March 2016 was released in early 2018It is comprised of a 26,000 national household sample representing all U.SStates and the District and Columbia along with an additional 103,112 add-on sample households. Additional add-on samples are made available to the states and regional/MPOs for purchase. These add-on samples provide the opportunity to populate different exposure measures at a finer geographic level and to develop more robust safety analyses. The 2017 NHTS add-on sample sizes for the state DOTs and MPOs are listed in Table 21. Table 212017 NHTS State DOT & MPO Add-on Household Sample Study Area Sample Size National 26,000 Arizona DOT 2,444 California DOT 24,000 Des Moines Area MPO 1,200 Georgia DOT 8,000 Indian Nations Council of Governments 1,000 Iowa Northland Regional Council of Governments 1,200 Maryland DOT 1,000 New York State DOT 15,851 North Carolina DOT 8,000 South Carolina DOT 6,500 Wisconsin DOT 11,000 Texas DOT 20,000 North Central Texas Council of Governments 2,917 TOTAL 129,112 Source: NHTS Task C: Sample Design, Dec31, 2015, page 5 The NHTS data can be used to compute or statistically model several different exposure measure estimates (e.g., population, miles traveled, number of trips) nationally, by census region/division, state, and urban/rural area types, depending on the survey year. It is possible to produce these same measures at smaller census geographies like Core Based Statistical Area (CBSA) if the particular location(s) participated in the add-on program and were specified in the sampling design. Regional Household Travel Survey A regional household travel survey is typically conducted by an MPO to develop a regional travel demand modelThe frequency of these surveys varies from city-to-city, with some planning agencies conducting household travel surveys every eight-to-ten years or longer. Just like the NHTS, regional household travel surveys collect data from respondents on the household characteristics, demographics of each member in the households, vehicle details, and trip attributes via a travel diary. Figure 19 depicts the relationship between the four separate data components. Exposure measures (e.g., miles traveled or number of trips) can be estimated for household and person types and expanded to the population to provide statistically valid areawide estimates. Benefits and Limitations Household travel surveys can be used to measure the population proportion, distance traveled, duration traveled, and number of trips by a specific mode for the survey region. Survey respondents typically fill out a travel diary indicating origins and destinations with the start and end times of trips along with the mode that was used. Since the survey represents only a stratified sample of the population, weights must be applied to expand the survey sample so that it represents the entire population of the study area (see Figure 20). Survey weights indicate how many households each survey observation represents of the total population of households – these weights are typically provided along with the survey data. For example, the regional household travel survey for Austin, Texas could be used to estimate the total amount of time traveled by walking for the five-county region. The survey sample is comprised of 3,000 households and 8,100 persons and can be expanded to represent the population of the study area by applying the survey weights. To do so, the total duration of trips by mode must be enumerated per household type, as defined in the survey stratification. The totals then must be multiplied by their corresponding survey weight to equal the total daily duration of trips by mode for the entire study area. The result is an estimated total of 189,256 daily walk trips with an average trip duration of 16 minutes equaling approximately 50,437 hours of walking per weekday. The main limitations of region household travel surveys include their high cost and expertise required to process and analyze the survey data. The data may not be publically available due to survey respondent privacy concerns. Areawide Non-Motorized Exposure Tool The Areawide Non-Motorized Exposure Tool described here makes it easier for practitioners to obtain and summarize nationwide travel survey data to estimate pedestrian and bicyclist exposure to risk at statewide and MPO area scales. The first part of the tool titled Statewide Exposure Estimates fills that gap between years when the NHTS is conducted by using the more current ACS data to estimate non-motorized exposure at the state-level. The second part of the tool titled MPO Area Exposure Estimates also uses NHTS and ACS data to estimate non-motorize exposure but for individual MPOs throughout the nationBoth parts of the tool produce annual non-motorized exposure estimates by mode for years 2009 to 2016 in terms of trips, miles of travel, and hours of travel for their respective areawide geography. The results are offered in tabular form along with graphics like the examples shown in Figure 21The following sections describe the tool's capabilities, as well as instructions on how to use the tool. Statewide Exposure Estimates The FHWA's Safety Performance Management (Safety PM) Final Rule currently requires each State DOT to report the number of non-motorized fatalities and serious injuries (without considering exposure). To understand the relationship between these crashes and non-motorized risk, exposure is desirable to help measure the magnitude of bicyclist and pedestrian vulnerabilityHowever, users should note that, at this time, the Safety PM Final Rule does not require non-motorized exposure to be reported or considered. The Statewide Exposure Estimates component offers a method for practitioners to estimate statewide non-motorized exposure in order to calculate non-motorized risk. The tool provides the following exposure measure estimates for both bike and walk travel modes per state for the individual years 2009-2016: • Total estimated annual trips • Total estimated annual miles traveled • Total estimated annual hours traveled The estimates are based on a combination of the 2009 NHTS and the U.SCensus Bureau's ACS data for each respective year. The 2009 NHTS total annualized trips per state are adjusted to better represent the selected year for analysis by using the more current ACS population and daily commute trip estimates (tables B01003 and B08301, respectively). The adjustment factors account for change in both population and the number of commute trips per mode over time. The population adjustment factor (AF[pop,i]) is based on the 2009 ACS population estimate since the NHTS data represent 2009It can be written as: AF[pop,i] = Population adjustment factor in i^th year (i = 2009 to 2016) for state POP[i] = ACS population estimate in i^th year for state POP[2009] = ACS population estimate in 2009 for state In order to expand daily person commute biking and walking trips, the relationship between commute and total trips is required. The commute trip adjustment factor is based on the 2009 NHTS annualized person trips per person by mode (bike and walk) and the annualized ACS daily persons commuting by mode (bike or walk) for the selected year (i.e., ith year) for analysis. The equation is as follows: AF[CT] = Commute trip adjustment factor by mode for state PT[2009] = NHTS annualized person trips by mode in 2009 for state PC[2009] = ACS daily persons commuting by mode in 2009 for state The adjustment factors (AF) are applied to the selected year ACS commute trips per person by mode to provide estimated annual person trips: PT[i] = Estimated annual person trips by mode (biking or walking) in ith year for state PC[i] = ACS daily persons commuting by mode in ith year for state AF[pop,i] = Population adjustment factor in i^th year for state AF[CT,i] = Commute trip adjustment factor by mode for state Finally, to calculate the estimated total annual miles and hours traveled, the 2009 NHTS average trip durations and trip lengths per state were then applied to the total trips to estimate the amount of hours and miles traveled annually per mode for each state. HT[i] = Estimated annual hours traveled by mode (biking or walking) in i^th year for state PT[i] = Estimated annual person trips by mode in i^th year for state TD[2009] = 2009 NHTS average trip duration (in hours) by mode for state MT[i] = Estimated annual miles traveled by mode in i^th year for state TL[2009] = 2009 NHTS average trip length (in miles) by mode for state Data sources for the above variables are as follows: Variable Data Source POP[i] & POP[2009] ACS 1-year estimate, table B01003 - Total Population PC[i] & PC[2009] ACS 1-year estimate, table B08301 - Means Of Transportation To Work PT[2009] 2009 NHTS TD[2009] 2009 NHTS TL[2009] 2009 NHTS This method assumes that the average trip durations and lengths remain constant between year 2009 and 2016 due to the lack of more current data. However, the tool does provide the user the option to input their own values if available In addition, the tool should be updated with the newly published 2017 NHTS data to produce the 2017 estimates based on current travel behavior data. The NHTSA Fatality Analysis Reporting System (FARS) person data were used to calculate the total annual non-motorized fatalities per state from 2009 to 2016. The totals are provided in the spreadsheet tool along with total annual risk per state that is based on the total annual non-motorized fatalities per million hours of travel. The total annual non-motorized fatalities are defined as individuals classified as a bicyclist or pedestrian that sustained a fatal injury in a motor-vehicle crash.  As of May 2018, the 2016 FARS data were incomplete. The data can be found online: The interface of the Statewide Exposure Estimates component is shown in Figure 22. Step-by-Step Instructions • 1: Open the spreadsheet containing the tools. • 2: Read the “Introduction” page for an overview of the available tools. • 3: Click on the “Statewide Exposure Estimates” tab to select the tool. • 4: Select the state of interest from the drop down menu titled “State” (also marked by • 5: Select the source (default or user input) of the required inputs, “Average Trip Length (Miles)” and “Average Trip Duration (minutes)”, from the dropdown menus highlighted in green for each year (also marked by • 6: For those required inputs where the default option is chosen, no further action is required. • 7: For those required inputs where the user input option is chosen, enter the desired user input value into the appropriate cell (no color highlights) below the source dropdown menu. MPO Area Exposure Estimates The MPO Area Exposure Estimates component offers a method for practitioners to estimate MPO-wide non-motorized exposure for calculating non-motorized risk. The tool provides the following exposure measure estimates for both bike and walk travel modes per MPO for the individual years 2009-2016: • Total estimated annual trips • Total estimated annual miles traveled • Total estimated annual hours traveled Non-motorized exposure estimates at the MPO level are derived from a combination of ACS and the 2009 NHTS. The Census data offers estimates at relatively small geographies that can be interpolated to the MPO leve. lThe 2009 NHTS data provides information on travel behavior for a sample of the travel public from around the nation and can be used to calculate average person trip rate, average trip length, and average trip duration per mode. Total person trips by bike and walk can be estimated with a generalized 2009 person trip rate per mode applied to the total population of the year of interest and then annualized (365 days). The product is then adjusted to account for any change in the mode-specific commuting population between 2009 and the year of interest. However, this adjustment does not capture any change in non-motorized recreational travel that may be induced from communities investing in bicycle and pedestrian infrastructure. Total person trips are then be applied to the average trip length and average trip duration to equal total miles and total hours traveled per mode, respectively. It also important to note that any error in the 2009 NHTS estimates of walking or bicycling is carried through to the subsequent years. The MPO-level estimated annual person trips by mode equation is as follows: PT[i] = Estimated annual person trips by mode (biking or walking) in i^th year for MPO PTR[2009] = 2009 NHTS average daily person trip rate by mode for CBSA peer group POP[i] = ACS 5-year population estimate in i^th year for MPO PC[i] = ACS 5-year daily persons commuting by mode estimate in i^th year for MPO PC[2009] = ACS daily persons commuting by mode 2009 for MPO To calculate the estimated total annual miles and hours traveled, the 2009 NHTS average trip durations and trip lengths per MPO are applied to the total trips to estimate the amount of hours and miles traveled annually per mode for each MPO. HT[i] = Estimated annual hours traveled by mode (biking or walking) in i^th year for MPO PT[i] = Estimated annual person trips by mode in i^th year for MPO TD[2009] = 2009 NHTS average daily person trip duration (in hours) by mode for CBSA peer group MT[i] = Estimated annual miles traveled by mode in i^th year for MPO TL[2009] = 2009 NHTS average daily person trip length (in miles) by mode for CBSA peer group Data sources for the above variables are as follows: Variable Data Source PTR[2009] 2009 NHTS POP[i] ACS 5-year estimate, table B01003 - Total Population PC[i] & PC[2009] ACS 5-year estimate, table B08301 - Means Of Transportation To Work TD[2009] 2009 NHTS TL[2009] 2009 NHTS Several caveats do apply to this methodThe end user should keep in mind that the MPO-level estimates for their area are based on an estimated average for their CBSA (Core Based Statistical Area) peer group. Also, like the statewide tool, the MPO-level method assumes that the average trip durations and lengths remain constant between year 2009 and 2016 due to the lack of more current data. However, the tool does provide the user the option to input their own values if available. The NHTSA FARS person data were used to calculate the total annual non-motorized fatalities per MPO from 2009 to 2016. The crashes were plotted in a GIS based on the coordinated provided and spatially joined the underlying MPO layerThe totals along with total annual risk per MPO that is based on the total annual non-motorized fatalities per million hours of travel are provided in the spreadsheet tool. The total annual non-motorized fatalities are defined as individuals classified as a bicyclist or pedestrian that sustained a fatal injury in a motor-vehicle crash. As of May 2018, the 2016 FARS data were incompleteThe data can be found online: https://www.nhtsa.gov/research-data/fatality-analysis-reporting-system-fars. The interface of the MPO Area Exposure Estimates component is shown in Figure 23. Step-by-Step Instructions • 1: Open the spreadsheet containing the tools. • 2: Read the “Introduction” page for an overview of the available tools. • 3: Click on the “MPO Area Exposure Estimates” tab to select the tool. • 4: Select the state of interest from the drop down menu titled “State” (also marked by • 5: Select the state of interest from the drop down menu titled “State” (also marked by • 6: Select the source (default or user input) of the required inputs, “Person Trip Rate”, “MPO Population Estimate”, “Population Adjustment Factor”, “Average Trip Length (Miles)”, and “Average Trip Duration (minutes)”, from the dropdown menus highlighted in green for each year (also marked by • 7: For those required inputs where the default option is chosen, no further action is required. • 8: For those required inputs where the user input option is chosen, enter the desired user input value into the appropriate cell (no color highlights) below the source dropdown menu. Census Data for MPOs The Census does not offer data specific to MPO geographies; therefore, tract-level ACS Census data are used to provide the finest resolution to areal interpolate the population and commuter population for the MPOs. Only ACS 5-year estimates are available at the tract level; therefore, the estimates represent a given year within the five-year period as opposed to any individual year. 1- and 3-year estimates are unavailable due to inadequate ACS sample sizes at small geographies (i.e., tracts and counties). Figure 24 offers a visual comparison example of the Census Core-base Statistical Areas (CBSA), MPO and tract geographies for Memphis, TN. Variables that require ACS data: POP[i] = MPO population in i^th year (derived through areal interpolation of tract-level ACS data) PC[i] = MPO commuter population in i^th year (derived through areal interpolation of tract-level ACS data) PC[2009] = MPO commuter population in 2009 (derived through areal interpolation of tract-level ACS data) Developing Person Travel Estimates from the 2009 NHTS CBSA Peer Grouping Methodology: The 2009 NHTS survey data represent only a sample of the traveling public from both rural and urban areas. A portion of the 2009 NHTS data are labeled as being located within Census Core-base Statistical Areas (CBSA) that represent metropolitan areas. The CBSA geography is the smallest geography in the 2009 NHTS data and the only way to locate a portion of the survey sample. The survey sample data are grouped by their CBSA as indicated in the original 2009 NHTS data to represent metropolitan areas around the nation. The CBSA metropolitan areas serve as proxies for MPOs in terms of developing of travel estimates from the 2009 NHTS; however, sample sizes vary between CBSAs and are possibly not statistically representative of the local populations. CBSAs are then grouped together based on 2009 ACS 1-year estimates for bicycle and walk commute percentages to increase samples sizes. Tables 22 (bike) and 23 (walk) list the ACS commute percentage ranges for the initial CBSA peer groupings along with their corresponding 2009 NHTS trip sample and the generalized travel estimates. Generalized Travel Estimates Applied to MPOs: The 2009 NHTS survey data are used to generate an average person trip rate, average trip length (miles), and average trip duration (minutes) for bicycling and walking per CBSA grouping. These generalized travel estimates are applied to the MPOs that possess similar bicycle and walk commute percentages as their peer 2009 NHTS CBSAsThe MPOs area assigned a CBSAs peer grouping for every year between 2009 and 2016 based on the annual release of ACS 5-year estimates for bicycling and walking commute percentages. Refer to the Appendix for a list of the MPOs with their corresponding ACS population and commuter information along with their CBSA bike and walk grouping assignments. In developing the generalized travel estimates, the NHTS survey weights are not applied because: • Estimates are for CBSAs with unrepresentative samples • Unrepresentative samples do not include all segments of the population • NHTS weights are to replicate the national population (meaning, each person is weighted to be represented within the national population) Table 22Bike - CBSA Peer Groupings CBSA Name 2009 ACS Bike Commute Quintile 2009 NHTS Bike Average Person Trip Average Person Trip Length Average Person Trip Duration Percentage Grouping Trips Rate (miles (minutes) Memphis, TN-MS-AR 0.02% Nashville-Davidson--Murfreesboro--Franklin, 0.09% Charlotte-Gastonia-Concord, NC-SC 0.11% Birmingham-Hoover, AL 0.12% Dallas-Fort Worth-Arlington, TX 0.13% 1 531 0.00598 2.43 18.29 Cincinnati-Middletown, OH-KY-IN 0.18% San Antonio, TX 0.18% Oklahoma City, OK 0.20% Atlanta-Sandy Springs-Marietta, GA 0.20% Kansas City, MO-KS 0.21% Cleveland-Elyria-Mentor, OH 0.22% Pittsburgh, PA 0.24% Hartford-West Hartford-East Hartford, CT 0.24% Louisville-Jefferson County, KY-IN 0.26% Houston-Sugar Land-Baytown, TX 0.27% Riverside-San Bernardino-Ontario, CA 0.27% 2 609 0.00779 2.47 21.15 Providence-New Bedford-Fall River, RI-MA 0.28% StLouis, MO-IL 0.30% Richmond, VA 0.31% Indianapolis-Carmel, IN 0.32% Detroit-Warren-Livonia, MI 0.32% Baltimore-Towson, MD 0.33% Las Vegas-Paradise, NV 0.34% Buffalo-Niagara Falls, NY 0.35% Raleigh-Cary, NC 0.36% New York-Northern New Jersey-Long Island, 0.40% NY-NJ-PA 3 662 0.00719 2.77 21.45 Virginia Beach-Norfolk-Newport News, VA-NC 0.41% Columbus, OH 0.42% Milwaukee-Waukesha-West Allis, WI 0.43% Orlando-Kissimmee, FL 0.45% Rochester, NY 0.50% Chicago-Naperville-Joliet, IL-IN-WI 0.57% Washington-Arlington-Alexandria, DC-VA-MD-WV 0.57% Miami-Fort Lauderdale-Pompano Beach, FL 0.61% San Diego-Carlsbad-San Marcos, CA 0.62% Jacksonville, FL 0.64% 4 1,406 0.00970 2.82 22.07 Tampa-St Petersburg-Clearwater, FL 0.70% Denver-Aurora-Broomfield, CO 0.72% Austin-Round Rock, TX 0.72% Philadelphia-Camden-Wilmington, PA-NJ-DE-MD 0.73% Minneapolis-St Paul-Bloomington, MN-WI 0.86% Los Angeles-Long Beach-Santa Ana, CA 0.86% Salt Lake City, UT 0.87% Phoenix-Mesa-Scottsdale, AZ 0.91% Seattle-Tacoma-Bellevue, WA 0.92% New Orleans-Metairie-Kenner, LA 0.96% 5 1,430 0.01227 3.08 22.70 Boston-Cambridge-Quincy, MA-NH 1.03% San Jose-Sunnyvale-Santa Clara, CA 1.43% San Francisco-Oakland-Fremont, CA 1.54% Sacramento--Arden-Arcade--Roseville, CA 1.62% Portland-Vancouver-Beaverton, OR-WA 2.13% Table 23. Walk - CBSA Peer Groupings CBSA Name 2009 ACS Walk Commute Quintile 2009 NHTS Walk Average Person Trip Average Person Trip Length Average Person Trip Duration Percentage Grouping Trips Rate (miles) (minutes) Orlando-Kissimmee, FL 0.97% Nashville-Davidson--Murfreesboro--Franklin, 1.10% Birmingham-Hoover, AL 1.30% Memphis, TN-MS-AR 1.33% Richmond, VA 1.34% 1 10,692 0.07503 0.70 14.35 Dallas-Fort Worth-Arlington, TX 1.40% Atlanta-Sandy Springs-Marietta, GA 1.41% Tampa-StPetersburg-Clearwater, FL 1.43% Kansas City, MO-KS 1.48% Raleigh-Cary, NC 1.51% Houston-Sugar Land-Baytown, TX 1.55% Indianapolis-Carmel, IN 1.57% Jacksonville, FL 1.58% Charlotte-Gastonia-Concord, NC-SC 1.58% StLouis, MO-IL 1.64% Detroit-Warren-Livonia, MI 1.65% 2 6,023 0.09039 0.67 14.55 Louisville-Jefferson County, KY-IN 1.66% Oklahoma City, OK 1.66% Austin-Round Rock, TX 1.77% Miami-Fort Lauderdale-Pompano Beach, FL 1.77% Las Vegas-Paradise, NV 1.78% Phoenix-Mesa-Scottsdale, AZ 1.80% Sacramento--Arden-Arcade--Roseville, CA 1.84% San Antonio, TX 2.02% Riverside-San Bernardino-Ontario, CA 2.03% San Jose-Sunnyvale-Santa Clara, CA 2.13% 3 7,854 0.08853 0.73 15.93 Columbus, OH 2.14% Denver-Aurora-Broomfield, CO 2.15% Cincinnati-Middletown, OH-KY-IN 2.16% Hartford-West Hartford-East Hartford, CT 2.19% Cleveland-Elyria-Mentor, OH 2.25% Minneapolis-St Paul-Bloomington, MN-WI 2.26% Salt Lake City, UT 2.27% Virginia Beach-Norfolk-Newport News, VA-NC 2.40% New Orleans-Metairie-Kenner, LA 2.59% Los Angeles-Long Beach-Santa Ana, CA 2.63% 4 13,552 0.10889 0.72 15.96 Providence-New Bedford-Fall River, RI-MA 2.79% San Diego-Carlsbad-San Marcos, CA 2.80% Baltimore-Towson, MD 2.85% Milwaukee-Waukesha-West Allis, WI 2.88% Buffalo-Niagara Falls, NY 3.10% Chicago-Naperville-Joliet, IL-IN-WI 3.17% Portland-Vancouver-Beaverton, OR-WA 3.17% Washington-Arlington-Alexandria, DC-VA-MD-WV 3.21% Rochester, NY 3.37% Seattle-Tacoma-Bellevue, WA 3.57% Pittsburgh, PA 3.71% 5 14,211 0.14297 0.68 14.50 Philadelphia-Camden-Wilmington, PA-NJ-DE-MD 3.75% San Francisco-Oakland-Fremont, CA 4.40% Boston-Cambridge-Quincy, MA-NH 5.12% New York-Northern New Jersey-Long Island, 6.28% Manual Data Extraction The following section offers details on how to the manually extract data from the ACS and NHTS sources. These data sources offer a variety of additional demographic and travel behavior information that may be of value to safety analysis projects or outreach efforts. ACS Data Extraction ACS provides pedestrian and bicycle commuting estimates in geodatabase and CSV formats.^ Data are available for different measures related to pedestrian and bicycle commutingUsers can download data based on their requirementsTo get estimate the number of workers that commute to work by walking or bicycling, four major variables can be used (see Table 24). Table 24. ACS Data Attributes for Pedestrian and Bicycle Commute Estimates from Table B08301 Census Code Variable Name B08301e18 Means of Transportation to Work: Bicycle: Workers 16 years and over -- (Estimate) B08301m18 Means of Transportation to Work: Bicycle: Workers 16 years and over -- (Margin of Error) B08301e19 Means of Transportation to Work: Walk: Workers 16 years and over -- (Estimate) B08301m19 Means of Transportation to Work: Walk: Workers 16 years and over -- (Margin of Error) To create maps, users can use ESRI ArcGIS software to join TIGER/Line (Topologically Integrated Geographic Encoding and Referencing) shapefile with the ACS data tables. Example Problem: Determine recent bicycle commuting estimates for Census Tracts in Texas With the following steps, users can determine bicycle-commuting estimates for census tracts in TexasIt is important to note that manual data extraction requires expertise in ArcGIS tool. 1: Download 2011-2015 ACS Data from U.SCensus^ (see Figure 25). 2: By using ArcMap 10.3, users can join ACS shapefile with ACS commuting data provided in the geodatabaseShapefile's variable 'GEOID' is required to be joined with 'GEOID' variable in ACS commuting data (see Figure 26). 3: Export data to text file format from the joined ACS shapefile attribute table. 4: To generate a choropleth map (see Figure 27), users need to follow the following steps: • Right click on the generated shapefile to select properties • Under the 'Symbology' tab, select Quantiles and Graduated Maps • Select variable 'B08301e18' from the drop-down lists in values. NHTS Data Extraction The following section highlights NHTS online tools that provide relevant pedestrian and bicyclist travel summaries at the state and CBSA geographies based on the NHTS data files as presented in Table Since these data are collected via a survey, and not a census, they must be weighted according to ACS demographic information to represent the entire population and produce valid estimates. Survey weights for households and persons are available for all useable households in the NHTS databases. The 2009 and 2017 NHTS national sample datasets (i.e., adjusted for oversampling due to add-ons) offer the ability to estimate non-motorized travel exposure by person trips at the state level. Figure 28 shows the online analysis tool with the total annual person trips by mode per state from the 2009 NHTS. The tool provides choropleth maps of 2009 NHTS person trips per state with drill-down capability to state statistics on trips, mode, and purpose. For example, Colorado produced approximately 7 billion person trips in 2009 with 9.7% (approximately 675 million) being walking trips. Table 25. Structure of 2009 and 2007 NHTS Data Files Data Files Information Included Record Level ID Variables Weight Data unique to a householdExample questions from interview sections: • Number of vehicles • Person Data Household • Type of Residence One record per household unit HOUSEID WTHHFIN • Location of Home • Household Income • Education Data determined once for each completed person interview Example questions from interview sections: • Age • Driver Status HOUSEID Person • Race & Ethnicity One record per person PERSONID WTPERFIN • Travel to Work • Miles driven • Education Data relating to each of the household's vehiclesExample questions from interview section: • Vehicle Data Vehicle • Type of Residence One record per vehicle, if present HOUSEID WTHHFIN • Verified Vehicle Data VEHID • Annualized Vehicle Miles • Household Income Data about each trip the person made on the household's randomly-assigned travel dayExample questions from interview Travel Day section One record per travel day person HOUSEID Trip trip PERSONID WTTRDFIN • Person Data TDTRPNUM • Travel Day Data Sources: 2009 NHTS Users Guide V2, page 6-2 and 2017 NHTS Users Guide, page 53 Another online analysis tool based on the 2009 and 2017 NHTS national sample datasets is the table designer shown in Figure 29 and Figure 30. The table designer tools allow users to build customized data tabulations quickly and easilyThe tabulation outputs are offered in either HTML, Excel, or CSV formats. The NHTS attributes can also be tabulated for various census-based geographies, such as state, CBSA, and rural/urban (http://nhts.ornl.gov/2009/pub/UsersGuideClaritas.pdf)For example, the exposure measure of annual person miles of travel can be calculated by mode per state with these toolsTable 26 shows that in 2017 Arizona generated approximately 114 billion person miles of travel, of which 0.5% (approximately 585 million) were by walking. < Previous Table of Contents Next >
{"url":"https://safety.fhwa.dot.gov/ped_bike/tools_solve/fhwasa18032/step6.cfm","timestamp":"2024-11-04T05:40:40Z","content_type":"text/html","content_length":"186667","record_id":"<urn:uuid:7dccecce-13e2-4462-bef1-c3093e374a58>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00755.warc.gz"}
LibGuides: Math Resources: Linear Functions A function is essentially a slope-intercept equation that has been rewritten to replace y with f(x). Here’s an example of how this is done: The purpose of a function is to show how an input (the x-value) impacts an output (the y-value). Functions are governed by specific rules: • Every x-value can only correspond to one y-value • Every y-value can but will not always correspond to multiple x-values Sometimes, you won’t have a graph or an equation for your function. In cases like these, you would be given a function and a table like this example: To construct a function from a table, you would first find the slope (see this resource for more information). Then, you would identify the y-intercept (see this resource for more information). Then, you would create the function using the slope-intercept form (see this resource for more information).
{"url":"https://resources.nu.edu/MathResources/linearfunctions","timestamp":"2024-11-07T03:05:38Z","content_type":"text/html","content_length":"335696","record_id":"<urn:uuid:39394164-9b16-44c8-9599-34d96612931f>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00541.warc.gz"}
Engineering functions Category: Engineering functions The Excel Engineering Functions perform the most commonly used engineering calculations, many of which relate to Bessel Functions, Complex Numbers or converting between different bases. Engineers may also find the Excel Math and Trig functions useful. In the tables below, the Excel Engineering functions have been grouped into categories, to help you to find the function you need. Selecting a function name will take you to a full description of the function with examples of use. The Excel DEC2OCT function converts a decimal number to its octal equivalent. Syntax: =DEC2OCT (number, ) The DEC2OCT function syntax …
{"url":"https://sophuc.com/category/engineering-functions/page/4/","timestamp":"2024-11-06T07:38:28Z","content_type":"application/xhtml+xml","content_length":"49909","record_id":"<urn:uuid:26ef57d5-0cd6-4fc6-98a5-06c8a5d64d86>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00234.warc.gz"}
One mole of the gas Ar expands through a reversible adiabatic process, from a volume of... One mole of the gas Ar expands through a reversible adiabatic process, from a volume of... One mole of the gas Ar expands through a reversible adiabatic process, from a volume of 1 L and a temperature of 300 K, to a volume of 5 L. A) what is the final temperature of the gas? B) How much work has the expansion carried out? C) What is the change in heat? Assume this is a mono-atomic ideal gas. Note by asker: as the gas is mono-atomic, c[p]=(5/2)*R, c[v]=(3/2)*R a)Adiabatic coefficient k= c[p]/c[v] = 5/3 In adiabatic process , T V^K-1 remains constant (where T is temperature and V is volume) So, 300*1^2/3 = T*5^2/3 where T is the final temperature So,final temperature T= 300/5^2/3 =102.6 K b)initial volume=1 L= 0.001 m^3 , initial temperature = 300 K Also, number of moles= 1 So, using ideal gas equation, PV=nRT=>P=nRT/V=8.314*300/0.001=2494.2 *10^3 pa So, initial pressure= 2494.2 Kpa Similarly,final pressure=8.314*102.6/0.005=170.6 *10^3 pa So, work out= (P[2]V[2]-P[1]V[1])/(1-k)=(170.6*5-2494.2*1)/(1-5/3)=2461.8 J c)No heat is supplied in adiabatic process, So, heat=0 J However, change in internal energy= - work out= -2461.8 J
{"url":"https://justaaa.com/physics/1290171-one-mole-of-the-gas-ar-expands-through-a","timestamp":"2024-11-04T11:22:16Z","content_type":"text/html","content_length":"41876","record_id":"<urn:uuid:162ce1b8-c293-4bcc-a0c1-f70cc4477b07>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00877.warc.gz"}
Project Plan Template Formula Issue I have a question on a formula used in a project plan template currently being used. Many times, the % complete (expected) is greater than 100%. The formula currently being used is Expected Effort Complete/Cumulative Effort. The Cumulative Effort is the duration of the task, in days, and the expected effort complete formula is IF(TODAY() > [Planned Finish]@row, Duration@row, IF(TODAY() >= [Planned Start]@row, TODAY() - [Planned Start]@row, 0))) I cannot figure out where the formula error is that is giving me a greater than 100% expected complete figure. • Your calculation of the % complete (expected) will be greater than 100% if the expected effort complete formula outputs a number greater than the duration of the task. I believe this is happening since your formula is simply subtracting Today from the Start Date without taking into consideration work days. For example, if your Start Date was on a Friday and Today was Monday, then TODAY() - [Planned Start]@row would equal 4 days, but the Duration would actually be 2 working days. Does that make To adjust this, you'll want to use the NETWORKDAYS function in your formula, like so: =IF(TODAY() > [Planned Finish]@row, Duration@row, IF(TODAY() >= [Planned Start]@row, NETWORKDAYS([Planned Start]@row, TODAY()), 0)) Let me know if this resolves your issue! Need more help? 👀 | Help and Learning Center こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions • This is great! Thank you @Genevieve P. Your guidance is much appreciated! Help Article Resources
{"url":"https://community.smartsheet.com/discussion/83617/project-plan-template-formula-issue","timestamp":"2024-11-05T00:59:01Z","content_type":"text/html","content_length":"413352","record_id":"<urn:uuid:966e3651-4890-4dfd-98d7-124eac8e8f68>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00815.warc.gz"}
3 Impossible Question types in CAT & How to Handle them - 2IIM CAT Blog Impossible Question Types This post discusses 3 Impossible question types that appear in Mocks (more than CAT) and is slightly tangential to CAT preparation. Quite a few students get thrown off by some different/ambiguous questions presented in different forums. CAT has transformed into an exam that tests sound temperament more than anything else, in the recent editions. Which is why it is very important to recognise and hold fort when facing these questions. The Impossible question types can be categorized into three. EOK: Examiner-only-knows Let me give you an example. A bell tolls once every 20 seconds, another tolls once every 30 seconds. If both of them ring at the same time, how many time will they ring together in the first hour? Now, they will ring together once every minute. But the cheap-tricks-of-examiners book says that the answer for this question can be 61 instead of 60. You can answer this question only if you know the examiner. Sometimes you cannot answer it even if you know the examiner. These are the Examiner-Only-Knows questions Examiner Only Knows GOK: God-only-knows Let me give you a sample. The difference between the lengths of the diagonals of a parallelogram inscribed in a circle is 2 cms, find the area of the parallelogram. Now, a parallelogram inscribed in a circle has to be a rectangle. And the diagonals of a rectangle are equal. So, it is clear that even the examiner does not know the answer to this one. God Only Knows – GOK God Only Knows NYK: Now-you-know The third category is interesting. And this is probably the most useful as well. This is the Now-you-know category. There was/is a legendary professor of organic Chemistry named Govindarajan in Chennai who used to train students for the JEE in the 80’s, 90’s and 00’s. He was an elderly gentleman even in the late 90’s, and wonderful as he was in teaching, he was also laidback about exams, scores, records, performance-trackers and the like. After one of his famous exams – bunch of students had some issues with the paper because it had some questions being beyond what he had taught in class. (Imagine 17-year olds anxious to tell themselves they messed up only because they hadn’t been taught that bit). He looked at said questions and exclaimed I did not teach you this?! You did not know about this? with an incredulous look on his face. This slowly gave way to a wry smile and he said “Well, well, well. Now You Know.” Now You Know There will always be Now-you-know questions in exams. Stuff that you did not know before, but is probably an important tidbit. Try this Example: Try this one – A six-digit number N of the form ‘abcabc’ where a, b, and c are digits from 0 to 9 has exactly 16 factors, how many values can N take? If you know that a number ‘abcabc’ is ‘abc’ * 1001 and that 1001 = 7 * 11 * 13, this question becomes easy. If you do not know this and you see this in a mock CAT paper, it is probably a good time to say “Now I know” (after you review the paper) 🙂 Key Takeaway There will be multiple types of questions that you do not comprehend the first time you face them. However, how you learn from that experience matters the most in this journey towards your Dream B-school. This is why it is important to aggressively review your mock CATs and not just fixate on the percentiles. Rajesh Balasubramanian takes the CAT every year and is a 4-time CAT 100 percentiler. He likes few things more than teaching Math and insists to this day that he is a better teacher than exam-taker.
{"url":"https://online.2iim.com/cat-exam/blogs/cat-preparation-strategy/3-impossible-question-types-in-cat-how-to-handle-them/","timestamp":"2024-11-04T18:52:54Z","content_type":"text/html","content_length":"61334","record_id":"<urn:uuid:7e83e5f5-ef93-4fa4-988a-e208535b09b2>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00051.warc.gz"}
Quantum Aspects of Light Propagation - PDF Free Download Quantum Aspects of Light Propagation Anton´ın Lukˇs · Vlasta Peˇrinov´a Quantum Aspects of Light Propagation Vlasta Peˇrinov´a Joint Laboratory of Optics Palack´y University and Institute of Physics of the Czech Academy of Sciences 772 07, Olomouc Czech Republic [email protected] Anton´ın Lukˇs Joint Laboratory of Optics Palack´y University and Institute of Physics of the Czech Academy of Sciences 772 07, Olomouc Czech Republic [email protected] Consulting Editor D.R. Vij Kurukshetra University E-5 University Campus Kurukshetra 136119 India ISBN 978-0-387-85589-9 DOI 10.1007/b101766 e-ISBN 978-0-387-85590-5 Springer Dordrecht Heidelberg London New York Library of Congress Control Number: 2009930842 c Springer Science+Business Media, LLC 2009 All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights. Printed on acid-free paper Springer is part of Springer Science+Business Media (www.springer.com) Quantum descriptions of light propagation frequently exhibit a replacement of time by propagation distance. It seems to be natural since a propagation lasts some amount of time. The primary intention was to inform more fundamentally inclined, open-minded readers on this approach by this book. We have included also spatiotemporal descriptions of the electromagnetic field in linear and nonlinear optical media. We call some of these formalisms one dimensional (more exactly 1 + 1dimensional), even though they comprise the time variable along with the position coordinate. These descriptions, however, are 3 + 1-dimensional in principle. The rapid development of applications of photonic band-gap structures and experiments on lasing in a disordered medium has directed us to pay attention even to these topics, which has influenced the style of the book, which becomes a very review of these streams. This book has the following features. It reviews both macroscopic and microscopic theories of the electromagnetic field in dielectrics. It takes into account parametric down-conversion experiments. It covers results on nonlinear optical couplers. It includes optical imaging with nonclassical light. It expounds basics of quasimode theory. It respects success of the Green-function approach in describing optical field at dielectric devices, left-handed materials and the Casimir effect for some geometries. It refers to quantization in waveguides, photonic crystals, disordered media, and propagation in strongly scattering media, incoherent and coherent random lasers, and important problems in optical resonators including chaotic cavities. In our opinion it is appropriate to do something more than only formal comparison of various approaches in the future, even though the reader will already have formed an idea of their scope. The simplest approach with one variable (time or propagation distance) and with several frequencies has proven its vitality in the development of the quantum information theory and the quantum computation. At present there exist even books devoted to these fields: Alber, G., Beth, T., Horodecki, M., Horodecki, P., Horodecki, R., R¨otteler, M., Weinfurter, H., Werner, R., and Zeilinger, A. (2001), Quantum Information: An Introduction to Basic Theoretical Concepts and Experiments, Springer-Verlag, Berlin; Nielsen, Michael A. and Chuang, Isaac L. (2000), Quantum Computation and Quantum Information, Cambridge University Press, Cambridge. v The fundamental problem of light propagation in dielectric media is connected with the role of nonclassical light in applications and has been pursued intensively in quantum optics since about 1984. In the present book we review spatio-temporal descriptions of the electromagnetic field in linear and nonlinear dielectric media applying macroscopic and microscopic theories. We mainly pay attention to canonical quantum descriptions of light propagation in a nonlinear dispersionless dielectric medium and linear and nonlinear dispersive dielectric media. These descriptions are regularly simplified by a transition to the one-dimensional propagation, which is illustrated also by descriptions of some optical processes. Quantum theories of light propagation in optical media are generalized from dielectric media to magnetodielectrics. Classical and nonclassical properties of radiation propagating through left-handed media will be presented. The theory is utilized for the quantum electrodynamical effects to be determined in periodic dielectric structures which are known to be a basis of new schemes for lasing and a control of light field state. Quantum descriptions of random lasers are provided. It is an interesting question, to what extent the topic of this book overlaps with the condensed-matter theory. Restricting ourselves to optical devices, we cannot exclude such overlap in principle, because many of them are made of condensed matters. The condensed-matter theory, however, is devoted mainly to problems of conductors and semi-conductors. Photonic crystals can be studied similarly as ordinary electronic crystals, even though for instance the conductivity is replaced by the transmissivity. This does not mean any thematic overlap. Texts on quantum optics have so far based the spatio-temporal description on the quantization of the electromagnetic field in a free space in the hope that differences from the field in a medium are negligible or can be easily included in other ways. A rare exception was for instance the text Vogel, W. and Welsch, D.-G. (1994), Lectures on Quantum Optics, Akademie Verlag, Berlin, where a choice of a suitable approach, albeit a selection of one of possibilities, was declared. The book will be useful to research workers in the field of general optics, quantum optics and electronics, optoelectronics, and nonlinear optics, as well as to students of physics, optics, optoelectronics, photonics, and optical engineering. Olomouc Olomouc Vlasta Peˇrinov´a Anton´ın Lukˇs We have pleasure in thanking Dr. J. Peˇrina, Jr., Ph.D., for communicating files to the publisher, graphics, and word processing and Ing. J. Kˇrepelka, Ph.D., for the careful preparation of figures. This book has arisen under the financial support by the Ministry of Education of the Czech Republic in the framework of the project No. 1M06002 “Optical structures, detection systems, and related technologies for few-photon applications”. 1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 Origin of Macroscopic Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1 Lossless Nonlinear Dielectric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 Nondispersive Lossless Linear Dielectric . . . . . . . . . . . . . . . . . . . . . . 2.2.1 Quantization in Terms of a Dual Potential . . . . . . . . . . . . . . . 2.2.2 Momentum Operator as Translation Operator . . . . . . . . . . . . 2.2.3 Wave Functional Description of Gaussian States . . . . . . . . . 2.2.4 Source-Field Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2.5 Continuum Frequency-Space Description . . . . . . . . . . . . . . . 2.3 Quantum Description of Experiments with Stationary Fields . . . . . . 2.3.1 Spatio-temporal Descriptions of Parametric Down-Conversion Experiments . . . . . . . . . . . . . . . . . . . . . . . 2.3.2 From Coupled Quantum Harmonic Oscillators Back to Interacting Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 Macroscopic Theories and Their Applications . . . . . . . . . . . . . . . . . . . . . 85 3.1 Momentum-Operator Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85 3.1.1 Temporal Modes and Their Application . . . . . . . . . . . . . . . . 86 3.1.2 Slowly Varying Amplitude Momentum Operator . . . . . . . . . 88 3.1.3 Space–Time Displacement Operators . . . . . . . . . . . . . . . . . . 102 3.1.4 Generator of Spatial Progression . . . . . . . . . . . . . . . . . . . . . . 104 3.1.5 Nonlinear Optical Couplers . . . . . . . . . . . . . . . . . . . . . . . . . . . 110 3.2 Dispersive Nonlinear Dielectric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 3.2.1 Lagrangian of Narrow-Band Fields . . . . . . . . . . . . . . . . . . . . 117 3.2.2 Propagation in One Dimension and Applications . . . . . . . . . 126 3.3 Modes of Universe and Paraxial Quantum Propagation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133 3.3.1 Quasimode Description of Spectrum of Squeezing . . . . . . . 133 3.3.2 Steady-State Propagation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137 3.3.3 Approximation of Slowly Varying Envelope . . . . . . . . . . . . . 143 3.3.4 Optical Imaging with Nonclassical Light . . . . . . . . . . . . . . . 152 ix 3.4 3.5 Optical Nonlinearity and Renormalization . . . . . . . . . . . . . . . . . . . . . 173 Quasimode Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186 3.5.1 Relation to Quantum Scattering Theory . . . . . . . . . . . . . . . . 193 3.5.2 Mode Functions for Fabry–Perot Cavity . . . . . . . . . . . . . . . . 200 3.5.3 Atom–Field Interaction Within Cavity . . . . . . . . . . . . . . . . . . 207 3.5.4 Several Sets of Quasimodes . . . . . . . . . . . . . . . . . . . . . . . . . . 213 4 Microscopic Theories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223 4.1 Method of Continua of Harmonic Oscillators . . . . . . . . . . . . . . . . . . . 223 4.1.1 Dispersive Lossy Homogeneous Linear Dielectric . . . . . . . . 224 4.1.2 Correlation of Ground-State Fluctuations . . . . . . . . . . . . . . . 235 4.2 Green-Function Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238 4.2.1 Dispersive Lossy Linear Inhomogeneous Dielectric . . . . . . 239 4.2.2 Dispersive Lossy Nonlinear Inhomogeneous Dielectric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242 4.2.3 Elaboration of Linear Theory . . . . . . . . . . . . . . . . . . . . . . . . . 245 4.2.4 Optical Field at Dielectric Devices . . . . . . . . . . . . . . . . . . . . . 253 4.2.5 Modification of Spontaneous Emission by Dielectric Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258 4.2.6 Left-Handed Materials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263 4.2.7 Application to Casimir Effect . . . . . . . . . . . . . . . . . . . . . . . . . 280 5 Microscopic Models as Related to Macroscopic Descriptions . . . . . . . . 303 5.1 Quantum Optics in Oscillator Media . . . . . . . . . . . . . . . . . . . . . . . . . . 303 5.2 Problem of Macroscopic Averages . . . . . . . . . . . . . . . . . . . . . . . . . . . 305 5.2.1 Conservative Oscillator Medium . . . . . . . . . . . . . . . . . . . . . . 305 5.2.2 Kramers–Kronig Dielectric . . . . . . . . . . . . . . . . . . . . . . . . . . . 311 5.2.3 Dissipative Oscillator Medium . . . . . . . . . . . . . . . . . . . . . . . . 312 5.3 Single-Photon Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317 6 Periodic and Disordered Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321 6.1 Quantization in Periodic Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321 6.1.1 Classical Description of Electromagnetic Field . . . . . . . . . . 322 6.1.2 Modal Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323 6.1.3 Method of Coupled Modes . . . . . . . . . . . . . . . . . . . . . . . . . . . 331 6.1.4 Normalized Modes of the Electromagnetic Field . . . . . . . . . 334 6.1.5 Quantization in Linear Nonhomogeneous Nonconducting Medium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342 6.2 Corrugated Waveguides . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349 6.2.1 Lossless Propagation in a Waveguide Structure . . . . . . . . . . 351 6.2.2 Coupled-Mode Theory Including Gain or Losses . . . . . . . . . 359 6.3 Photonic Crystals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378 6.4 Quantization in Disordered Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393 6.4.1 Quantization in Chaotic Cavity . . . . . . . . . . . . . . . . . . . . . . . . 394 6.4.2 Open Systems Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398 6.4.3 Semiclassical Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404 Propagation in Amplifying Random Media . . . . . . . . . . . . . . . . . . . . 408 6.5.1 Strongly Scattering Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408 6.5.2 Incoherent and Coherent Random Lasers . . . . . . . . . . . . . . . 414 6.5.3 Modal Decomposition in Optical Resonators . . . . . . . . . . . . 444 6.5.4 Chaotic Resonators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445 7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457 Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475 Chapter 1 The importance of quantum optics has been recognized by both specialists and public since Roy J. Glauber was awarded the Nobel Prize in Physics 2005. Quantum informatics is closely connected with this field. Ingenious, but simple solutions are preferred to intricacies of the quantized field theory with the hope that experimenters realize the simple proposals with appropriate means. From the historical viewpoint, the problem of quantization of the electromagnetic field in vacuo was solved by Dirac (1927) long ago and the quantization of a nonlinear theory is due to Born and Infeld (1934, 1935). With respect to the propagation in linear dielectric media it is appropriate to refer first to Jauch and Watson (1948). A revived interest in this problem can be perceived since the 1990s. First it resembled some dissatisfaction with the situation following the advent of laser in 1958. The new optical effects are analyzed both by the methods of nonlinear optics which belong to classical physics and by those of quantum optics (Shen 1969). In quantum optics, the normal-mode expansion approach is used that is well suited for systems in optical cavities, such as an optical parametric oscillator, but is not appropriate for open systems such as a parametric amplifier. In nonlinear optics (Bloembergen 1965, Shen 1984), the Maxwell equations completed by the constitutive relations are solved with the method of slowly varying envelope approximation and the resultant equations are sometimes simplified on the assumption of parametric approximation. It has become standard that the phenomenological Hamiltonians of quantum optics are frequently introduced without a quantitative connection to the classical equations describing the nonlinear optical effects. The quantization of the electromagnetic field in the presence of a dielectric is possible. This can be done in two ways which are called the macroscopic and microscopic approaches. In the first, the macroscopic approach, the medium is completely described by its linear and nonlinear susceptibilities. No matter degrees of freedom appear explicitly in this treatment. After a Lagrangian which produces the macroscopic Maxwell equations for the field in a nonlinear medium is found, the canonical momenta and the Hamiltonian are derived. Quantization is accomplished by imposing the standard equal-time commutation relations. In the second approach, the microscopic, a model for the medium is constructed and both the field and the matter degrees of freedom appear in the theory. Both are quantized. The result is a A. Lukˇs, V. Peˇrinov´a, Quantum Aspects of Light Propagation, C Springer Science+Business Media, LLC 2009 DOI 10.1007/b101766 1, 1 Introduction theory of mixed matter-field (polariton) modes, which are coupled by a nonlinear interaction. Hillery and Mlodinow (1984) have used the electric displacement field as the canonical variable for nonlinear quantization and they have explored the macroscopic approach to the quantization of homogeneous nondispersive media. They have pointed out that there is a difficulty in including the dispersion in the quantized macroscopic theory. In the past, many authors that dealt with macroscopic quantum theories of light propagation wrote also on space displacements, shifts, and translations of the electromagnetic field along with the time displacements, shifts, and translations, or simply on the (time) evolution. Accordingly, they used the term “space evolution” in the former case. In the following, we will use the term space progression instead of space evolution. Abram (1987) intended to overcome the difficulties of the conventional quantum optics by reformulating its assumptions. He has based the formalism on the momentum operator for the radiation field and investigated in this way not only the spatial progression of the electromagnetic wave but also refraction and reflection. The importance of a proper space–time description of squeezing has been recognized (Bialynicka-Birula and Bialynicki-Birula 1987). The problem of a proper quantum mechanical description of the operation of optical devices has been addressed (Kn¨oll et al. 1986, 1987). Besides this, an attempt at a formulation of quantum theory of propagation of the optical wave in a lossless dispersive dielectric material has been made (Blow et al. 1990). The vacuum propagation and low-order perturbation theory have sufficed for spatio-temporal descriptions of parametric down-conversion experiments (Casado et al. 1997a, Casado et al. 1997b). The experiment on the “induced coherence without induced emission” has been described on restriction to spatial behaviour of fields and the multimode description has been restored too (Peˇrinov´a et al. 2003). The applications have used the fact that the nonlinear processes of quantum optics are described quantum optically in the parametric approximation with linear mathematical tools so that quantization procedures and solutions of the dynamics need not face immense difficulties as for the really nonlinear formalism (Huttner et al. 1990). The formalism of the macroscopic approach to the quantization has been developed (Abram and Cohen 1991). The space–time displacement operators have been related to the elements of the energy–momentum tensor (Serulnik and Ben-Aryeh 1991). The macroscopic quantization of the electromagnetic field was applied to inhomogeneous media (Glauber and Lewenstein 1991). The theoretical methods for investigating propagation in quantum optics, in which the momentum operator is used along with the Hamiltonian, have been developed (Toren and BenAryeh 1994). The excellent review of linear and nonlinear couplers (Peˇrina, Jr. and Peˇrina 2000), where the restriction to merely spatial behaviour of interesting optical fields is accepted, has used a similar approach. The optical solitons have been studied in the nonlinear optics and their quantum properties have been calculated using spatio-temporal descriptions by erudite authors. The dispersion has been treated on the assumption of a narrow-frequency interval (Drummond 1990). Drummond (1994) has presented a review of his theory 1 Introduction and its applications. In addition to previous work (Lang et al. 1973) devoted to the concept of quasinormal modes, the modes of the universe have been used in the treatment of the spectrum of squeezing (Gea-Banacloche et al. 1990a,b). An original approach to the description of a degenerate parametric amplifier (Deutsch and Garrison 1991a) has been related to the theory of paraxial quantum propagation (Deutsch and Garrison 1991b). One-dimensional description of beam propagation has been completed with transverse position coordinates. It has taken into account the existence of small photodetectors or pixels (Kolobov 1999). Abram and Cohen (1994) have developed a travelling-wave formulation of the theory of quantum optics and have applied it to quantum propagation of light in a Kerr medium. A quantum scattering theory approach to quantum-optical measurements has been expounded (Dalton et al. 1999a). In addition to (Lang et al. 1973) and along with an independent work (Ho et al. 1998) devoted to the concept of quasinormal modes, quasimode theory of macroscopic canonical quantization has been invented and applied (Dalton et al. 1999b,c). A macroscopic canonical quantization of the electromagnetic field and radiating atom system involving classical, linear optical devices, based on expanding the vector potential in terms of quasimode functions has been carried out (Dalton et al. 1999b). The relationship between the pure mode and quasimode annihilation and creation operators is determined (Dalton et al. 1999c). A quantum theory of the lossless beam splitter is given in terms of the quasimode theory of macroscopic canonical quantization. The input and output operators that are related via scattering operator are directly linked to multi-time quantum correlation functions (Dalton et al. 1999d). Brown and Dalton (2001a) have generalized the quasimode theory of macroscopic quantization in quantum optics and cavity quantum electrodynamics developed by Dalton, Barnett, and Knight (1999a,b). This generalization admits the case where two or more quasipermittivities are introduced. The generalized form of quasimode theory has beeen applied to provide a fully quantum-theoretical derivation of the laws of reflection and refraction at a boundary (Brown and Dalton 2001b). Huttner and Barnett (1992a,b) have presented a fully canonical quantization scheme for the electromagnetic field in dispersive and lossy linear dielectrics. This scheme is based on the Hopfield model of such a dielectric, where the matter is represented by a harmonic polarization field (Hopfield 1958). Following (Huttner and Barnett 1992a,b), Gruner and Welsch (1995) have calculated the ground-state correlation of the quantum-mechanical fluctuations of the intensity. Gruner and Welsch (1996a) have realized the expansion of the field operators which is based on the Green function of the classical Maxwell equations and preserves the equaltime canonical commutation relations of the field. They have found that the spatial progression can be derived on the assumption of weak absorption. In (Schmidt et al. 1998), the microscopic approach to the quantum theory of light propagation has been extended to nonlinear media and the generalized nonlinear Schr¨odinger equation well known from the description of quantum solitons has been derived for a dielectric with a Kerr nonlinearity. Dung et al. (1998) have developed a quantization scheme for the electromagnetic field in a spatially varying three-dimensional linear dielectric which causes both dispersion and absorption. In the case of a 1 Introduction homogeneous dielectric, the well-known Green function has been used and it has been shown that the indicated quantization scheme exactly preserves the fundamental equal-time commutation relations of quantum electrodynamics. The Green function has also been used in the more complicated case of two dielectric bodies with a common planar interface. Spontaneous decay of an excited atom in the presence of dispersing and absorbing bodies has been investigated using an extension of this formalism (Dung et al. 2000). A microscopic theory of an optical field in a lossy linear optical medium has been developed (Kn¨oll and Leonhardt 1992). Dutra and Furuya (1997) have considered a single-mode cavity filled with a medium consisting of two-level atoms that are approximated by harmonic oscillators. They have shown that macroscopic averaging of the dynamical variables can lead to a macroscopic description. Dutra and Furuya (1998a,b) have observed that the (full) Huttner–Barnett model of a dielectric medium does not comprise all the dielectric permittivities of the medium which can be expected from the classical electrodynamics, although the field theory in linear dielectrics should have such a property. Dalton et al. (1996) have dealt with the quantization of a field in dielectrics and have applied it to the theory of atomic radiation in one-dimensional Fabry–P´erot resonator. Yablonovitch (1987) suggested that three-dimensional periodic dielectric structures could have a photonic band gap in analogy to electronic band gaps in semiconductor crystals, namely, a band of frequencies for which an electromagnetic wave cannot propagate in any direction. This idea and its subsequent experimental proof in the macrowave domain have led to extensive activity aimed at the optimization of photonic band-gap structures for the visible domain and the exploration of their potential applications (Journal of Modern Optics 1994, Journal of the Optical Society of America B 2002, etc.). As soon as quantization in nonhomogeneous dielectric media is solved, not only the case of a finite dielectric medium is worth a treatment but also the case of an infinite periodic medium (Caticha and Caticha 1992, Kweon and Lawandy 1995, Tip 1997). The idea of one-dimensional propagation may be compared with results concerning a mirror waveguide. Nonlinear optics in a photonic band-gap structure has been studied (Tricca et al. 2004, Peˇrina, Jr. et al. 2004, 2005, Peˇrina, Jr. et al. 2007). Sakoda (2002) has formulated quantization of the electromagnetic field in photonic crystals. Spontaneous parametric down conversion in a finite-length multilayer structure has been considered (Centini et al. 2005, Peˇrina, Jr. et al. 2006). Both localization and laser theory, which were developed in the 1960s, were jointly applied in the study of random laser. They were used in strongly scattering gain media. Lasing in disordered media has been a subject of intense theoretical and experimental studies. Random lasers have been classified into incoherent and coherent random lasers. Research works on both types of random lasers have been summarized in the monographic chapter (Cao 2003). In order to understand quantum-statistical properties of random lasers, quantum theory is needed. Standard quantum theory for lasers applies only to quasidiscrete modes and cannot account for lasing in the presence of overlapping modes. In a random medium, the character of lasing modes depends on the amount of disorder. Weak disorder leads to a 1 Introduction confinement of light and to strongly overlapping modes. Statistics naturally belongs to the theory of amplifying random media (Beenakker 1998, Patra and Beenakker 1999, 2000, Mishchenko et al. 2001), which is restricted to linear media. Hackenbroich et al. (2002) have developed a quantization scheme for optical resonators with overlapping (nonorthogonal) modes. Cheng and Siegman (2003) have derived a generalized formalism of radiation-field quantization, which need not rely on a set of orthogonal eigenmodes. True eigenmodes of a system will be non-orthogonal and the method is intended for quantization of an open system, in which a gain or loss medium is involved. We will use units following original papers and although the system of international (SI) units prevails, there are exceptions, so some of the relations (2.25)–(2.69) and (3.276)–(3.322) are in the Gaussian units, the relations (3.323)–(3.392) are in the rationalized cgs units, and the relations (2.1)–(2.2), (3.14)–(3.108), (3.494)– (3.578), (3.109)–(3.125), and (2.15)–(2.24) are in the Heaviside–Lorentz units. Chapter 2 Origin of Macroscopic Approach With the birth of quantum optics in the 1960s it became clear that it would be easy to describe the interaction between the electromagnetic field and the matter in a cavity even on elimination of matter degrees of freedom. A similar travelling-wave description for the electromagnetic field–matter interaction was considered to be possible in terms of a virtual cavity and a momentum operator of the field. This approach to quantization was rather distant from the quantum theory of the electromagnetic field. On a fundamental level the theory of the electromagnetic field in the free space does not differ from the theory of this field in the matter. Macroscopic approaches to quantization of the electromagnetic field are not fundamental theories and modify the free-space electromagnetic-field theory. Especially, quantization of the field power has been assumed. Although the virtual cavity has been beaten, the momentum operator has still enabled one to study quantum aspects of nonlinear optical processes. Quantization restrictions of any kind such as the frequency dispersion of the refractive index were apparent on published work. Efforts emerged to formulate so simple a quantum theory of the electromagnetic field that it allows one to recognize the role of the momentum operator. Formalisms were presented which, to the contrary, did not consider the momentum operator. With the progress in (classical) optics interest in the quantization of the field power in quantum optics has increased. Not always is it necessary to utilize the formalism of the electromagnetic field in the matter. For description of experiments with correlated photons it suffices to describe the electromagnetic field between optical devices and to know the input– output relations for the optical elements, both passive and active, with which the radiation is transformed. 2.1 Lossless Nonlinear Dielectric An approach to the quantum theory of light propagation was considered standard until the critique by Hillery and Mlodinow (1984) and is still. Concerning this approach, let us consider papers by Shen (1967, 1969). Shen (1967) studied A. Lukˇs, V. Peˇrinov´a, Quantum Aspects of Light Propagation, C Springer Science+Business Media, LLC 2009 DOI 10.1007/b101766 2, 2 Origin of Macroscopic Approach quantum statistics of nonlinear optics. He contributed to the contemporary research (Glauber 1965). Quantum theory of radiation had long been formulated (Heitler 1954). For investigation of properties of a medium, incoherent scattering has been a useful tool. For nonlinear optics, coherent scattering has been interesting as well or more. Weak nonlinearity has a significant effect on light only after a longer interaction distance. Light can cover a longer distance easily when contained in a cavity resonator. Quantum statistics has been determined using descriptions suited to the case of a cavity. In principle, the same treatment can be applied to problems of light propagation in media (Shen 1967). But for coherent scattering, it becomes difficult. This case should be treated by the method of many-body transport theory (Ter Haar 1961). In quantum optics, the cavity treatment of the problems of light propagation in media seems to be valid on the following assumption. Photon fields may be quantized in a box of finite volume, which moves in the z direction with a light velocity c ( √c in Shen 1969). One is advised to imagine a box of length cT , where T is the counting time of photodetectors. A partial interaction of the light with the medium can be approximated with no interaction and a complete interaction, which lasts for a time t. The finite medium can be extended to infinity. The resultant change of statistical properties of fields in the box can now be calculated using the cavity treatment (Shen 1967). In nonlinear optics a number of classical descriptions have been developed both as a cavity problem and as a steady-state propagation problem. Then a cavity problem can be converted to a corresponding steady-state propagation problem by replacing t by − cz in the field amplitudes and the latter problem can be changed to the former one by replacing z by −ct when ez is the direction of propagation. It raises expectations that the same is true in the quantum treatment. Shen (1969) pays √ attention to replacing t by − z c and to replacing z by √ct . Here the dependence on the time seems to be more fundamental. It is evident that one is interested in a conversion of a cavity problem to a corresponding steady-state propagation problem. The operators will be space dependent (localized) instead of time dependent. Transformations will be generated with a localized momentum operator instead of the Hamiltonian operator. On quantizing in a volume L 3 and assuming that the field does not vary appreciably over a distance d large compared with the wavelength, and associating the discrete values of the wave vector k with d (instead of L), the localized annihilation † and creation operators bˆ k (z) and bˆ k (z) have been proposed. An appropriate component of the vector-potential operator has the expansion of the form ˆ t) = c A(z, bˆ k (z) exp[−i(ωk t − kz)] + H.c. , 3 2ωk k L where ωk is the frequency, is the Planck constant divided by 2π , k = (ωk ) is the value of the dielectric function at ωk , and H.c. denotes the term Hermitian Lossless Nonlinear Dielectric conjugate to the previous one. The annihilation and creation operators bˆ k (z) and † bˆ k (z), respectively, satisfy the equal-space commutation relation † ˆ bˆ k (z), bˆ k (z) = δkk 1. The smallvariation of the field has been formulated as that of the normally ordered †m moments bˆ k (z)bˆ kn (z) . It is also specified that k = 2πd n , where n is an integer. There is a difficulty. The above picture of a moving box requires a light velocity c independent of the frequency ωk . Shen (1969) utilizes the notation √c for this velocity. Here it is replaced by the phase velocity √ck , with c ≡ c0 , the free-space speed of light. There is another difficulty in view of this picture that d has been used instead of cT . The localized photon-number operator is realized as a configurationspace photon-number operator (Mandel 1966) ˆ n(z) = Ad ˆ † ˆ b (z)bk (z), L3 k k where A is the cross-sectional area of the beam. ˆ (z, t) is considered. The Hamiltonian is A Hamiltonian density H ˆ (z , t) dz . H ˆ = L2 H(t) L ˆ essenA third difficulty is that the localized momentum operator is defined as H (z,t) c tially, not by using an integration with respect to time. It has been assumed that , not that ωk = 2πT n . For free fields, the localized momentum operator is k = 2πn d ˆ P(z) = † k bˆ k (z)bˆ k (z) + 12 1ˆ . For interacting fields, the localized momentum operator has the form of a Hamilto† † nian, but with bˆ k (z) and bˆ k (z) replacing aˆ k (t) and aˆ k (t). ˆ be a vector, but in fact one A momentum operator should have the form ez P(z), does not utilize this. The momentum operator generates translations i ˆ d ˆ ˆ . bk (z) = bk (z), P(z) dz The electric strength vector is derived from the vector potential according to the relation 1 ∂ ˆ ˆ A(z, t). E(z, t) = − c ∂t 2 Origin of Macroscopic Approach We decompose this operator as ˆ E(z, t) = Eˆ (+) (z, t) + Eˆ (−) (z, t), where Eˆ (+) (z, t) ( Eˆ (−) (z, t)) contains the functions (exp(iωk t)). In Shen (1967) the opposite convention is used. Then (2.8) exp(−iωk t) d ˆ (+) i ˆ (+) ˆ . E (z, t) = E (z, t), P(z) dz Something is more suitable for propagation problem: We define all the quantities at a given plane z = z 0 for all times and try to obtain the propagation towards z ≥ z 0 . According to equations (2.6) and (2.9), the unitary translation operator is z i ˆ ) dz , P(z Uˆ (z, z 0 ) = S exp z0 where S is the space-ordering operation. The space-ordered product has a similar definition as the time-ordered product. Field operators at different spatial points z, z 0 are connected by this unitary operator: ˆ ˆ 0 , t)Uˆ (z, z 0 ). E(z, t) = Uˆ † (z, z 0 ) E(z There are indications that any “alternative” quantum theory is avoided. Such an indication is the fact that the localized momentum operator has been derived from the Hamiltonian density. With this in mind, we pass from the “spatial Heisenberg picture” to a spatial Schr¨odinger picture. In the latter picture, a localized density matrix (statistical operator) progresses: ρ(z) ˆ = Uˆ (z, 0)ρ(0) ˆ Uˆ † (z, 0). Here ρ(0) ˆ is a given statistical operator. Then the correlation function of fields at different times is expressed in two forms: Eˆ (−) (z, t1 ) . . . Eˆ (−) (z, tn ) Eˆ (+) (z, tn ) . . . Eˆ (+) (z, t1 ) = Tr ρ(0) ˆ Eˆ (−) (z, t1 ) . . . Eˆ (−) (z, tn ) Eˆ (+) (z, tn ) . . . Eˆ (+) (z, t1 ) = Tr ρ(z) ˆ Eˆ (−) (0, t1 ) . . . Eˆ (−) (0, tn ) Eˆ (+) (0, tn ) . . . Eˆ (+) (0, t1 ) . The equation of motion for a statistical operator ρ(z) ˆ is ∂ i ˆ ρ(z) ˆ = P(z), ρ(z) ˆ . ∂z With the help of these localized operators, the calculations for steady-state propagation in a medium become the same as the corresponding calculations for a cavity Nondispersive Lossless Linear Dielectric 11 √ with t replaced by − cz (Shen 1967) and by z c (Shen 1969). The problem of beam splitting was mentioned. Essentially, the same proposal has been included in Shen (1969). 2.2 Nondispersive Lossless Linear Dielectric The study of nonlinear optical phenomena and their inclusion in an effective nonlinear theory of the electromagnetic field has utilized the asymmetry of most optical media, which are nonlinear with respect to the electric-field, but linear relative to the magnetic field. The canonical momentum should be the magnetic induction in place of the more usual electric-field strength. Such a theory may not be capable of describing the Bohm–Aharonov effect. Besides such a theory we expound a simple quantization connected to considerations of the role of the Poynting vector operator and the momentum operator. A description of the field distribution in space must be completed with a quantum state of the field in quantum physics. A renewed interest in the spatio-temporal description leads to the study of the wave functional of the electromagnetic field despite the doubts of the pioneers of theoretical physics of the photonic wave function. On neglecting dispersion and nonlinearity, a macroscopic theory of the quantized electromagnetic field in a medium can be very close to the usual theory of this field in free space. In contrast to this, solutions have been disseminated, which include the dispersion and the nonlinearity at least approximately. 2.2.1 Quantization in Terms of a Dual Potential According to a pioneering paper of Hillery and Mlodinow (1984), the standard macroscopic quantum theory of electrodynamics in a nonlinear medium is due to Shen (1967) and has been elaborated upon by Tucker and Walls (1969). Hillery and Mlodinow (1984) have pointed out some problems with the standard theory, above all that it is not consistent with the macroscopic Maxwell equations. One approach to the derivation of a macroscopic quantum theory would be to begin from a quantum microscopic theory as explored in the linear case by Hopfield (1958). The other approach is to take the expression for the energy of the radiation in nonlinear medium, which differs from the free-field Hamiltonian in part, and to keep interpreting the electric-field (up to the sign) as the canonically conjugated variable to the vector potential. Then, this macroscopic classical theory is quantized. (Let us note that it differs from Shen (1969)). The Hamiltonian formulation of the theory consists in the noncanonical Hamiltonian ˆ EM + H ˆ Inoncan , ˆ noncan = H H 2 Origin of Macroscopic Approach ˆ 2 ) d3 x, ˆ EM = 1 (Eˆ 2 + B H 2 1 ˆ · Pˆ d3 x, ˆ HInoncan = E 2 (2.16) (2.17) with Eˆ being the electric field strength operator and Pˆ being the polarization of the medium, and the Heaviside–Lorentz units having been used. The polarization is a function of the electric field which may be written as a power series. This theory may be called standard. It can easily be seen that, as an undesirable “quantum effect”, we obtain an improper expression for the time derivative of the magnetic-induction ˆ field B. It is assumed that the medium is lossless, nondispersive, and homogeneous. A Lagrangian is considered which gives proper equations of motion. The electric and magnetic fields are expressed in terms of the vector potential A and the scalar potential A0 : E=− ∂A − ∇ A0 , B = ∇ × A. ∂t The appropriate Lagrangian density depends on the first partial derivatives of the four-vector A = ( A0 , A). The momentum canonical to A is Π = (Π0 , Π), where Π0 = 0. The vanishing of Π0 indicates that the system is constrained. It has been shown how to utilize the Dirac quantization procedure for constrained Hamiltonian systems (Dirac 1964). It can be derived that the canonical momentum is Π = −D. The canonical Hamiltonian has the form H = HEM + HI , HI = E· P− P(λE) dλ d3 x. In order to simplify the quantization of the macroscopic Maxwell theory, the dual potential Λ has been introduced along with Λ and Λ0 , which we call the dual vector and scalar potentials. The relation (2.18) is replaced by D = ∇ × Λ, B = ∂Λ + ∇Λ0 . ∂t It can be shown that the canonical momentum is Π × = B. Upon expressing the canonical Hamiltonian functional in terms of the electric-displacement and Nondispersive Lossless Linear Dielectric magnetic-induction fields, the results are the same: H = H ×. Then, the usual Hamiltonian theory for the electromagnetic field in a nonlinear dielectric medium and the alternative have been quantized in the ordinary way. We can compare ˆ ˆ i (x, t), Π ˆ j (x , t) = iδi⊥j (x − x )1, A ˆ ˆ i (x, t), Π ˆ ×j (x , t) = iδi⊥j (x − x )1. Λ The transverse δ function has been used and made a reference to Bjorken and Drell (1965). Hillery and Mlodinow (1984) do not mention propagation except a paragraph on the interpretation problems, where they recommend to confine the medium to part of the quantization volume and to place the field source and the detector outside of the medium, being aware that they require the consideration of propagation. It is added that different diagonalizations indicated by the quadratic part of the total Hamiltonian generate different kinds of normal ordering. A doubt is expressed that there is an appropriate kind and the microscopic approach is propounded. Dispersion is also considered a reason for a microscopic theory to be contemplated. 2.2.2 Momentum Operator as Translation Operator In the late 1980s, the problem of propagation did not seem to be typical of quantum optics. Abram addressed the problem of light propagation through a linear nondispersive lossless medium (Abram 1987). Although this model can be an appropriate limit of the Huttner–Barnett model, we expound the main ideas of Abram (1987). Abram criticized the modal Hamiltonian formalism, especially the inclusion of the linear polarization term in the Hamiltonian: ? 1 8π (E 2 + H 2 + 4π χ E 2 ) dV, where E (H ) is the magnitude of the electrical (magnetic) field strength, χ is the (linear) susceptibility of the material, and V is a quantization volume. This would lead to an incorrect result, mainly to a change of the frequencies of the modes which does not occur. He decided to extend the traditional theory of quantum optics to describe propagation phenomena without invoking the modal Hamiltonian. According to him one of the propagation phenomena, refraction, suggests 2 Origin of Macroscopic Approach the momentum as the concept appropriate for the description of these phenomena. Quantum mechanically, space and momentum are canonically conjugate variables. Let us remark that microscopic models demonstrate that a Hamiltonian including light–matter interaction can be considered. These are a good antidote against the idea that “space and momentum are canonically conjugate variables like time and energy”. Propagation of the electromagnetic field is described by the Maxwell equations: ∇ ×H= 1 ∂D , c ∂t ∇ ×E=− 1 ∂B , c ∂t (2.26) (2.27) ∇ · B = 0, ∇ · D = 0, where D = E + 4π P is the electric displacement, B is the magnetic induction, E and H are the electric and magnetic field strengths, respectively, P is the (linear and nonlinear) polarization induced in the medium, and c is the speed of light. We assume that there are no free charges or currents and that we are dealing with nonmagnetic materials, so that B = H. For simplicity, we shall consider only the case of plane waves propagating along the z-axis, with the electric field polarized along the x-axis and the magnetic field along the y-axis. This reduces the Maxwell equations to scalar differential equations, the directions of all vectors being implicit. We shall further assume that light is propagating in a linear dielectric, where the induced polarization is at all times proportional to the incident electric field: P = χ E, where we assume the susceptibility of the material for simplicity to be a scalar (neglecting its tensorial properties), independent of frequency (no dispersion). It is convenient to define also the dielectric function of the material = 1 + 4π χ , and the refractive index n, n= The change in the total energy which is given by the integrated energy flux (the Poynting vector) over the surface of a body or volume is proposed in Abram (1987) as the proper quantum-mechanical Hamiltonian. The change in the total momentum is given as the integrated flux of the Maxwell stress tensor. The momentum is treated Nondispersive Lossless Linear Dielectric on the same footing as the Hamiltonian. However, the enigma of the Hamiltonian (2.25) is solved. We may consider a square pulse which enters a dielectric. The total energy is conserved, but the energy density is increased by a factor of n, because the volume V reduces to V = Vn . In volume V the wavelengths of the modes become λ = λn , but the oscillator frequencies remain unchanged. It is interesting that in the absence of reflection, the electric and magnetic fields of the transmitted (T ) waves in the dielectric are related to the corresponding incident (I ) fields in free space by 1 ET = √ E I , n √ n HI . HT = This change in the energy density implies a similar increase for the total momentum of the pulse, the components of which are always proportional to the wave vectors of the excited modes. In propagation along the z-axis the Maxwell stress tensor is replaced by the energy density. When the propagation along the ±z-axis in free space is considered with the electric field polarized along the x-axis and the magnetic field along the y-axis (χ = 0, ˆ ≡ A(z, ˆ t) is usually written = 1), the electromagnetic vector-potential operator A as ( = 1) ˆ t) = c A(z, 1 2π 2 † aˆ j eiω j t−ik j z + aˆ j e−iω j t+ik j z , V ωj j † where aˆ j , aˆ j are the creation, annihilation operators, respectively, for a photon in the jth mode of the wave vector k j (with k− j = −k j ) and the frequency ω j = c|k j | fulfilling the Bose commutation relations. To simplify the notation, we omit unit vectors. It is convenient to rearrange equation (2.35) in a manner that is familiar to solid-state physicists, ˆ t) = c A(z, 1 2π 2 † aˆ j eiω j t + aˆ − j e−iω j t e−ik j z . V ωj j The electric and magnetic field operators may be obtained as 1 ∂ ˆ ˆ eˆ j A(z, t) = E(z, t) = − c ∂t j 2π ω j 2 1 = −i † bˆ j − bˆ − j , 2 Origin of Macroscopic Approach and ˆ t) = ˆ (z, t) = ∂ A(z, hˆ j H ∂z j = −i 2π ω j V † bˆ j + bˆ − j , where s j ≡ sgn j and bˆ j = aˆ j e−iω j t+ik j z . When products of these operators are encountered, we suppose that they are symˆ ˆ ≡ H ˆ (z, t) can be metrized. The Hermiticity of the operators Eˆ ≡ E(z, t) and H verified using the relations † eˆ j = eˆ − j , † hˆ j = hˆ − j . The energy density operator can be written as 1 ˆ2 ˆ2 E +H 8π uˆ j = uˆ = (2.42) (2.43) 1 eˆ j eˆ − j + hˆ j hˆ − j 8π j 1 ˆ† ˆ † ω j b j b j + bˆ − j bˆ − j + 1ˆ 2V j 1ˆ 1 †ˆ ˆ ωj bjbj + 1 . = V j 2 The energy fluxes due to the forward (backward) waves alone can be expressed uniquely: ωj † ωj † 1ˆ 1ˆ ˆ ˆ ˆ ˆ uˆ + = b j b j + 1 , uˆ − = bjbj + 1 . V 2 V 2 j(>0) j(<0) Nondispersive Lossless Linear Dielectric ˆ is then The total momentum operator G † 1 ˆ = V (uˆ + − uˆ − ) = k j bˆ j bˆ j + 1ˆ . G c 2 j It is important to understand the relations (3.9) and (3.10) in Abram (1987) well. We interpret (3.9) concerning elementary quantum mechanics as z| pˆ z |ψ = −i ∂ z|ψ , ∂z where |z are position coordinate states and |ψ is an arbitrary pure state. The similarity with equation (3.10) from Abram (1987) ˆ ∂Q ˆ Q], ˆ = −i[G, ∂z ˆ is any operator, fades. We would prefer a definition of the operator Q. ˆ Let where Q us consider ˆ ≡ Q(z, ˆ ˆ ˆ (z, t)], Q t) = Q[ E(z, t), H ˆ . Since the differential operator ∂ is where Q[•, •] is a formal series in Eˆ and H ∂z ˆ •], it suffices to verify the relation just as differentiation as the superoperator −i[G, ˆ = E, ˆ H ˆ . It is true at least in the situations treated in Abram (1987). (2.49) for Q Although the operators bˆ j ≡ bˆ j (z, t) are studied using (2.49), the Heisenberg equation of motion, and the initial condition bˆ j (0, 0) = aˆ j ˆ as appropriate for any operator Q(z, t), we perceive that the operators do not obey ˆ our definition of the operator Q. We may calculate the Poynting vector operator as c ˆ c ˆ ˆ eˆ j h − j EH = Sˆ = 4π 4π j cω j † 1 bˆ j bˆ j + 1ˆ . = sj V 2 j (2.52) (2.53) The Poynting vector operators due to the forward (backward) waves alone can be expressed uniquely: Sˆ + = cω j † cω j † ˆb bˆ j + 1 1ˆ , Sˆ − = − ˆb bˆ j + 1 1ˆ . j j V 2 V 2 j(>0) j(<0) 2 Origin of Macroscopic Approach The total energy operator of the free field inside the volume of quantization is thus V ˆ 1 † Hˆ = Uˆ = ω j bˆ j bˆ j + 1ˆ . S+ − Sˆ − = c 2 j The investigation of the case χ = 0, = 1 does not lead to any new expansions ˆ . The individual components of the rearranged elecof the field operators Eˆ and H tric and magnetic field operators according to (2.37) and (2.38) satisfy a modified operator algebra with respect to that of the harmonic oscillator: ˆ [ˆe j , eˆl ] = [hˆ j , hˆ l ] = 0, 4π ω j ˆ ˆ δ− j,l 1, [ˆe j , h l ] = s− j V (2.56) (2.57) where δ j,l is the Kronecker δ function. The knowledge of these commutators and of ˆ the derivation of (2.42) through (2.47), the generalized total momentum operator G, should have been generalized accordingly, e.g. the relation (2.42) becoming 1 ˆ2 ˆ2 E + H 8π 1 eˆ j eˆ − j + hˆ j hˆ − j , = 8π j uˆ = which enables us to derive the Maxwell equations both via the temporal derivatives and via the spatial derivatives. The energy density operator (2.58) can be generalized. In the expansion (2.43) we can set uˆ j = uˆ j refr , uˆ j refr = ωj ˆ† ˆ † † † b j b j + bˆ − j bˆ − j − 2π χ bˆ j − bˆ − j bˆ − j − bˆ j . 2V The energy density operator uˆ refr may be diagonalized through a Bogoliubov transformation. To this end we introduce an anti-Hermitian operator Rˆ of the form Rˆ = † † bˆ j bˆ − j − bˆ j bˆ − j and introduce the operators ˆ ˆ † Bˆ j = e−γ R bˆ j eγ R = (cosh γ )bˆ j − (sinh γ )bˆ − j , where γ = 1 1 ln = ln n. 4 2 Nondispersive Lossless Linear Dielectric On substitution ˆ ˆ † bˆ j = eγ R Bˆ j e−γ R = (cosh γ ) Bˆ j + (sinh γ ) Bˆ − j , the operator Rˆ takes the form Rˆ = † † Bˆ j Bˆ − j − Bˆ j Bˆ − j , and the energy density operator has the diagonal form uˆ j refr = nω j ˆ † ˆ † B j B j + Bˆ − j Bˆ − j . 2V The momentum operator is then given by ˆ refr = V (uˆ + − uˆ − ) = Kj G c j 1 † Bˆ j Bˆ j + 1ˆ , 2 with K j = nk j and the Hamiltonian can be calculated as Hˆ refr = 1 † Bˆ j Bˆ j + 1ˆ . 2 By inserting (2.63) into (2.37) and (2.38), respectively, we can obtain the electric and magnetic field operators inside the dielectric: 2π ω j 2 1 ˆ E(z, t) = −i ( Bˆ j − Bˆ − j ) and ˆ (z, t) = −i H 2π nω j V ( Bˆ j + Bˆ − j ). Similarly as above, this relation can be interpreted as a result of the replacement bˆ j → Bˆ j and a consequence of the quantized classical equations (2.33) and (2.34). For normal incidence on a sharp vacuum–dielectric interface, both reflection and diffraction occur. We will not treat this more general case according to Abram (1987). 2 Origin of Macroscopic Approach 2.2.3 Wave Functional Description of Gaussian States Białynicka-Birula and Białynicki-Birula (1987) have tried first to define the squeezing that is a generalization of the standard definition for one mode of radiation. This definition can be reformulated with respect to Białynicki-Birula (2000). The Riemann–Silberstein–Kramers complex vector has been introduced B(r, t) 1 D(r, t) +i √ , F(r, t) = √ √ 0 μ0 2 √ √ where we have divided by 0 , μ0 , as is appropriate with SI units. It has been shown how the Green function method can be used for solving linear equations for ˆ t). This approach allows that the medium under investigation the field operator F(r, is inhomogeneous and time dependent. It is not clear whether the complex vector (2.70) is then useful. It has been suggested that the periodicity of the electric permittivity tensor (r, t) or the magnetic permeability μ(r, t) can be important for the generation of squeezed states. Only the dispersion of the medium has not been considered. It has been derived that photon pair production is a necessary condition for squeezing. It is tempting to generalize the concept of a Gaussian state of the finitedimensional harmonic oscillator to the case of an infinite oscillator. BiałynickaBirula and Białynicki-Birula (1987) treat the time development of the Gaussian states in the free-field case. There the Schr¨odinger picture is adopted and an analogue of the Schr¨odinger representation in quantum mechanics has been introduced. Let us recall the quadrature representation in quantum optics. This representation is ˆ t), the a wave functional Ψ[A, t]. Let us observe that contrary to the operator A(r, argument A(r) of the wave functional does not depend on t, but the wave functional does depend on t. The Hamiltonian in this representation has the form H= 1 2 δ 2 + [∇ × A(r)]2 0 δA(r)2 μ0 d3 r. In Białynicka-Birula and Białynicki-Birula (1987), the wave functional of the vacuum state, i.e. the simplest Gaussian state of the electromagnetic field, can be found, as well as that of the “most general”. Thus, the exposition is confined to pure Gaussian states while it is possible to generalize it also to mixed Gaussian states of the electromagnetic field. The pure Gaussian state is determined by a complex matrix kernel, i.e. by two real matrix kernels. It is shown that the expectation values ˆ = B and D ˆ = D (equivalently, E ˆ = E) evolve according to the free-field B Maxwell equations and also the equations which the complex matrix kernel obeys can be found there. The whole electromagnetic field is treated as a huge infinite-dimensional harmonic oscillator. The wave function and the corresponding Wigner function become then functionals of the field variables. Mr´owczy´nski and M¨uller (1994) have Nondispersive Lossless Linear Dielectric considered only the scalar field. Białynicki-Birula (2000) starts from the wave functional of the vacuum state (Misner et al. 1970) 1 0 1 3 3 B(r) · B(r ) d r d r Ψ0 [A] = C exp − 2 4π μ0 |r − r |2 ˜ → −D) and from the wave functional (we change A 1 μ0 1 3 3 ˜ D(r) · D(r ) d r d r . Ψ0 [−D] = C exp − 2 4π 0 |r − r |2 The normalization constant C is an issue and it has not been completely solved in Białynicki-Birula (2000). The analogy with the one-dimensional harmonic oscillator leads to other notions. The Wigner functional of the electromagnetic field in the ground state is W0 [A, −D] = exp{−2N [A, −D]}, where N [A, −D] = 1 0 1 B(r) · B(r ) 2 4π μ0 |r − r |2 μ0 1 D(r) · D(r ) d3 r d3 r . + 0 |r − r |2 The expression (2.75) also plays the role of a norm for the photon wave function (Białynicki-Birula 1996a,b). The Wigner functional for the thermal state of the electromagnetic field has been presented. This state is mixed and it even has infinitely many photons in the whole field. In each of the subsequent cases, the wave functional and the Wigner functional have been introduced. The exception, the mixed state, has no wave functional. Let us remark that for (the statistical operator of) such a state the matrix element can be considered which is a functional of two arguments, A and A . In particular, the Wigner functional for the coherent state of the electromagnetic field |A, −D has been presented, where A(r), D(r) are the vector potential and the electric displacement vector, respectively, which characterize the state. The exposition is related to the hot topic of the superpositions of coherent states of the electromagnetic field. The exposition continues with the Wigner functionals for the states of the electromagnetic field that describe a definite number of photons. An example of the functional for the one-photon state with the photon mode function f(r) has been included. The norm (2.75) has not been related to any inner product of the photon wave functions, but these notions are connected. In contrast to Białynicka-Birula and 2 Origin of Macroscopic Approach Białynicki-Birula (1987), we introduce quadrature operators as Xˆ 1 [D] = Xˆ 2 [B] = ˆ 0) · f(r) d3 r, D(r, ˆ 0) · g(r) d3 r, B(r, where 1 f(r) = 4π 2 g(r) = 1 4π 2 μ0 1 D(r ) d3 r , 0 |r − r |2 0 1 B(r ) d3 r . μ0 |r − r |2 The commutator of the Xˆ 1 and Xˆ 2 operators is [ Xˆ 1 [D], Xˆ 2 [B]] = i f(r) · [∇ × g(r)] d3 r. Let us note that the right-hand sides of (2.78) and (2.79) comprise the operator |∇|−1 up to a certain factor (cf. Milburn et al. 1984). Without resorting to this notation, we obtain that i ˆ D(r1 ) · A(r1 ) d3 r1 1. (2.81) [ Xˆ 1 [D], Xˆ 2 [B]] = 4 We see easily that the usual commutator − 12 i1ˆ is yielded by the field (D, B) (or (A, −D)) with the property [−D(r1 )] · A(r1 ) d3 r1 = 2. We have not deepened the contrast by introducing the notation Xˆ 1 [−D] and Xˆ 2 [A] on the left-hand sides of (2.76) and (2.77). Białynicki-Birula (2000) presents the Wigner functional for the squeezed vacuum state: 0 1 B · KBB · B Wsq [A, −D] = exp − μ0 μ0 + D · KDD · D + B · KBD · D d3 r d3 r , (2.83) 0 where KBB , KDD , and KBD are real matrix kernels. The kernel KBD is not independent of KBB and KDD , but it must obey the condition that is reminiscent Nondispersive Lossless Linear Dielectric of the Schr¨odinger–Robertson uncertainty relation (Białynicki-Birula 1998). The problem of the time evolution is also discussed. It has been conceded that the Wigner function is not a very powerful tool for making detailed calculations. Just as in the field theory, the symmetric ordering is vexed. Another open question is how the projection of this Wigner functional onto Wigner functions of any orthogonal (complete or incomplete) modal system looks out. It is appropriate to mention here work concerning the photon wave function (Inagaki 1998, Hawton 1999, Kobe 1999), although it is relevant mainly to the electromagnetic field in vacuo. Using a straightforward procedure, Mendonc¸a et al. (2000) have quantized the linearized equations for an electromagnetic field in a plasma. They have determined an effective mass for the transverse photons. An extension of the quantization procedure leads to the definition of a photon charge operator. Zale´sny (2001) has found that the influence of a medium on a photon can be described by some scalar and vector potentials. He has extended the concept of the vector potential to relativistic velocities of the medium. He has derived formulae for the mass of photon in resting and moving dielectric and the velocity of the photon as a particle. 2.2.4 Source-Field Operator Kn¨oll et al. (1987) have compared the problem of quantum-mechanical treatment of action of optical devices with the input–output formalism (Collett and Gardiner 1984, Gardiner and Collett 1985, Yamamoto and Imoto 1986, Nilsson et al. 1986, cf. also Gea-Banacloche et al. 1990a,b). Apart from the fact that only a very particular setup is considered in the input–output formalism, the theory does not take into account the full space–time structure of the field. Kn¨oll et al. (1987) have elaborated on the approach developed on the basis of quantum field theory and applied to the problem of spectral filtering of light (Kn¨oll et al. 1986). The only assumptions are that the interaction between sources and light is linear in the vector potential and the optical system is lossless and that the condition of sufficiently small dispersion is fulfilled. First, the classical Maxwell equations with sources and optical devices are formulated and solved by the procedure of mode expansion and the quantized version is derived. The classical Maxwell equations comprise the relative permittivity (r) = n 2 (r), where n(r) is the space-dependent refractive index. The mode functions Aλ (r) are introduced as the solutions of equation ∇ × (∇ × Aλ (r)) − (r) ωλ2 Aλ (r) = 0, c2 2 Origin of Macroscopic Approach where ωλ2 is the separation constant for each λ, from which the gauge condition can be derived ∇ · [(r)Aλ (r)] = 0. It is assumed that these solutions are normalized and orthogonal in the sense of equation ˆ (r)Aλ (r) · Aλ (r) d3 r = δλλ 1. In terms of these functions, the vector potential can be decomposed. In the standard † manner the destruction and creation operators aˆ λ and aˆ λ are defined, which have the properties † ˆ [aˆ λ , aˆ λ ] = δλλ 1, † † [aˆ λ , aˆ λ ] = 0ˆ = [aˆ , aˆ ]. λ † On inserting the operators aˆ λ and aˆ λ into the decomposition of the vector potential, ˆ t) is defined: the operator of the vector potential A(r, ˆ t) = A(r, † Aλ (r) aˆ λ (t) + aˆ λ (t) . The source quantities ra and pa are considered as the operators rˆ a and pˆ a , which obey the standard commutation relations ˆ [ˆrka , pˆ k a ] = iδaa δkk 1, [ˆrka , rˆk a ] = 0ˆ = [ pˆ ka , pˆ k a and the commutation relations † [ˆrka , aˆ λ ] = 0ˆ = [ˆrka , aˆ λ ], † [ pˆ ka , aˆ λ ] = 0ˆ = [ pˆ ka , aˆ ]. λ ˆ t) can be used for the derivation of the electric-field strength The operator A(r, operator which is associated with the radiation field by the relation ˆ t) ˆ t) = − ∂ A(r, E(r, ∂t and for the derivation of the magnetic field strength operator ˆ t). ˆ t) = ∇ × A(r, B(r, Nondispersive Lossless Linear Dielectric Nevertheless, the mode functions are redefined so that they obey the normalization condition δλλ . (2.94) (r)Aλ (r) · Aλ (r) d3 r = 20 ωλ The form of the normalization conditions (2.87) and (2.94) is tailored to realmode functions and the necessity of modification of some fundamental relations is commented on by Kn¨oll et al. (1987). All of these field operators may be written in the form † ˆ t) = (2.95) Fλ (r)aˆ λ (t) + F∗λ (r)aˆ λ (t) . F(r, λ ˆ t), the functions Fλ (r) can be In dependence on the choice of the operator F(r, derived from the mode functions of the vector potential Aλ (r). ˆ t) into two parts It is often convenient to decompose a given field operator F(r, by the relation ˆ t) = Fˆ (+) (r, t) + Fˆ (−) (r, t), F(r, where Fˆ (+) (r, t) = Fλ (r)aˆ λ (t), Fˆ (−) (r, t) = [Fˆ (+) (r, t)]† . Further, the Heisenberg equations of motion for the field operators are derived, so that the field operators can be expressed in terms of free-field and source-field operators. It is typical of the approach of Kn¨oll et al. (1987) that any field operator Fˆ k(+) is decomposed into a free-field operator and a source-field operator as follows: (+) (r, t) + Fˆ ks (r, t), Fˆ k(+) (r, t) = Fˆ kfree where (+) Fˆ kfree (r, t) = Fˆ ks (r, t) = Fkλ (r)aˆ λfree (t), θ (t − t )K kk (r, t; r , t ) Jˆk (r , t ) d3 r dt . Here vector components are labelled by the index k and repeated indices k mean summation. 2 Origin of Macroscopic Approach Unfortunately, the operator aˆ λfree (t) was not defined, so that we may only guess that aˆ λfree (t)|t=t0 = aˆ λ (t0 ) for t = t0 , and the dynamics for t ≥ t0 can be found in Kn¨oll et al. (1987). In equation (2.101), the kernel K kk is defined by the relation K kk (r, t; r , t ) = − 1 Fkλ (r)A∗k λ (r ) exp[−iωλ (t − t )]. i λ Inserting equation (2.99) yields the following representation of Fˆ k(+) : Fˆ k(+) (r, t) (+) θ(t − t )K kk (r, t; r , t ) Jˆk (r , t ) d3 r dt + Fˆ kfree (r, t). ˆ (+) , it holds that Fkλ = In particular, if Fˆ k(+) is identified with the vector potential A k Akλ and the kernel K kk takes the form K kk (r, t; r , t ) = − 1 Akλ (r)A∗k λ (r ) exp[−iωλ (t − t )]. i λ Analogously, if one is interested in the electric-field strength of the radiation Eˆ k(+) , the appropriate form of the kernel K kk is K kk (r, t; r , t ) = − 1 ωλ Akλ (r)A∗k λ (r ) exp[−iωλ (t − t )]. λ So the symmetry relations ∗ K kk (r, t; r , t ) = ∓K k k (r , t ; r, t) ˆ (+) and Eˆ (+) , respectively. The information on the action of the optiare valid for A k k cal instruments on the source field is contained in the space–time structure of the kernel K kk , which may be regarded as the apparatus function also used in classical optics. Further, the commutation relations for various combinations of field operators at different times are studied and relationships between field commutators and sourcequantity commutators are derived. The following abbreviations of the notation are used: x = {r, t} Nondispersive Lossless Linear Dielectric and others, by which the superscripts +, − are introduced also for Jˆk (x) and K kk (x, x ). With these generalizations, it holds that ( j) ( j) ( j) Fˆ k (x) = Fˆ kfree (x) + Fˆ ks (x), j = +, −, ( j) ( j) ( j) Fˆ ks (x) = θ (t − t )K kk (x, x ) Jˆk (x ) dx . (2.108) (2.109) When appropriate, the time ordering symbols T+ and T− are used. Let us consider ˆ 2 (t2 )... A ˆ n (tn ). The symbol T+ introduces the time ˆ 1 (t1 ) A any operator product A ˆ ordering of the operators Ai (ti ) with the latest time to the far left: ˆ 2 (t2 )... A ˆ n (tn ) ˆ 1 (t1 ) A T+ A ˆ i2 (ti2 )... A ˆ in (tin ) with ti1 > ti2 > · · · > tin , ˆ i1 (ti1 ) A =A ˆ i (ti ) with the latest and the symbol T− introduces time ordering of the operators A time to the far right: ˆ 1 (t1 ) A ˆ 2 (t2 )... A ˆ n (tn ) T− A ˆ i2 (ti2 )... A ˆ in (tin ) with ti1 < ti2 < · · · < tin . ˆ i1 (ti1 ) A =A From (2.91) it follows that (j ) (j ) ( j1 ) ( j2 ) [ Fˆ k1 1 (x1 ), Fˆ k2 2 (x2 )] = [ Fˆ k1 free (x1 ), Fˆ k2 free (x2 )] ˆ ( j2 , j1 ) (x2 , x1 ), ˆ ( j1 , j2 ) (x1 , x2 ) − D +D k1 k2 k2 k1 where ˆ ( j1 , j2 ) (x1 , x2 ) = − D k1 k2 θ (t2 − t2 )θ (t2 − t1 )θ (t1 − t1 ) (j ) (j ) (j ) (j ) ⊗K k1 1k (x1 , x1 )K k2 2k (x2 , x2 )[ Jˆk 1 (x1 ), Jˆk 2 (x2 )] dx1 dx2 . 1 From an inspection of equation (2.113), we readily learn that ˆ ( j1 , j2 ) (x1 , x2 ) = 0ˆ if t1 > t2 . D k1 k2 The commutators in (2.112) are ( j) ( j) ˆ j = +, −, [ Fˆ k1 free (x1 ), Fˆ k2 free (x2 )] = 0, ˆ (x1 ), Fˆ k(−) (x2 )] = Fk1 k2 (x1 , x2 )1, [ Fˆ k(+) 1 free 2 free 2 Origin of Macroscopic Approach where Fk1 k2 (x1 , x2 ) = Fk1 λ (r1 )Fk∗2 λ (r2 ) exp[−iωλ (t1 − t2 )]. ˆ It would be interesting to find the particular forms of the commutators between A (+) (−) ˆ ˆ ˆ and E or between A and E . Further, these commutation relations are used to express field correlation functions of free-field operators and source-field operators and to describe the effect of the optical system on the quantum properties of light fields. The method of transformation of normal and time orderings is demonstrated for the following important class of correlation functions: G (m,n) k1 ...km+n (x 1 , ..., x m+n ) ⎤⎡ ⎤" m m+n (x j )⎦ ⎣T+ (x j )⎦ . = ⎣T− Fˆ k(−) Fˆ k(+) j j ⎡ This transformation is understood in the relation = O+ T+ Fˆ k(+) (x1 ) Fˆ k(+) (x2 ) 1 2 ˆ (+) (x1 ) Fˆ (+) (x2 ) + Fˆ (+) (x2 ) . Fˆ k(+) (x ) + F 1 k1 s k2 s k2 free 1 free In equation (2.119) and the following ones the ordering symbols O+ and O− are (xi ), used. The symbol O+ introduces the following ordering of operators Fˆ k(+) is (+) Fˆ k j free (x j ): (i) Ordering of the operators Fˆ k(+) (xi ), Fˆ k(+) (x j ) with the operators Fˆ k(+) (x j ) to is j free j free (+) the right of the operators Fˆ ki s (xi ). (ii) Substitution of equation (2.109) for the operators Fˆ k(+) (xi ) and T+ time ordering is of the source-quantity operators Jˆki (xi ) in the resulting source-quantity operator products before performing the integrations with respect to ti . The symbol O− introduces the following operator ordering in products of operators (xi ), Fˆ k(−) (x j ): Fˆ k(−) is j free (i) Ordering of the operators Fˆ k(−) (xi ), Fˆ k(−) (x j ) with the operators Fˆ k(−) (x j ) to is j free j free (−) ˆ the left of the operators Fki s (xi ). (ii) Substitution of equation (2.109) for the operators Fˆ k(−) (xi ) and T− time ordering is † ˆ of the source-quantity operators J (x ) in the resulting source-quantity operator ki products before performing the integrations with respect to ti . Nondispersive Lossless Linear Dielectric Equation (2.119) may now be generalized: T+ (x j ) = O+ Fˆ k(+) j ˆ (+) (x j ) , (x ) + F Fˆ k(+) j k s free j j ˆ (−) (x j ) . (x ) + F Fˆ k(−) j kjs j free (x j ) = O− Fˆ k(−) j n j=1 Using relations (2.118), (2.120), and (2.121), we may represent the correlation functions as G (m,n) k1 ...km+n (x 1 , ..., x m+n ) ⎧ ⎨ ⎫ ⎬ Fˆ k(−) (x j ) + Fˆ k(−) (x j ) js j free ⎧ ⎨ ⎫" ⎬ Fˆ k(+) (x j ) + Fˆ k(+) (x j ) js j free When at the points of observation the following conditions are fulfilled (−) (+) = 0 = Fˆ kfree ... , . . . Fˆ kfree then the relation (2.122) can be simplified: G (m,n) k1 ...km+n (x 1 , ..., x m+n ) ⎤⎡ ⎤" ⎡ m m+n (x j )⎦ ⎣O+ (x j )⎦ . = ⎣O− Fˆ k(−) Fˆ k(+) js js j=1 When written in more detail, into the relation (2.124), the complex kernels K k j k j (x j , x j ) are introduced. It is noted that the effect of the beam splitter that is used for mixing of source light with the reference beam in the case of homodyne detection is described by the assumption that the reference light beam is a free field. In Kn¨oll et al. (1987) the relation (2.122) is specialized to a multimode coherent free field |{αλ } , (+) (x)|{αλ } = Fk (x)|{αλ } , Fˆ kfree 2 Origin of Macroscopic Approach that is to say G (m,n) k1 ...km+n (x 1 , ..., x m+n ) ⎫ ⎧ m ⎨ ⎬ (x j ) Fk∗j (x j )1ˆ + Fˆ k(−) = O− js ⎭ ⎩ j=1 ⎫" ⎧ m+n ⎨ ⎬ (x j ) Fk j (x j )1ˆ + Fˆ k(+) ⊗ O+ . js ⎭ ⎩ Finally, the theory is applied to the photocount statistics. Following Glauber’s theory of photon detection (Glauber 1965, Kelley and Kleiner 1964), the probability of observing precisely n events in a counting time interval [t, t + Δt) is given by the relation ) * n 1 ˆ ˆ Δt) , Γ(t, Δt) exp −Γ(t, pn (t, Δt) = Ω n! where ˆ Δt) = Γ(t, S(t1 − t2 ) Eˆ k(−) (ri , t1 ) Eˆ k(+) (ri , t2 ) dt1 dt2 may be interpreted as the operator of the integrated intensity. Here ri are position vectors of the detector atoms and S(t) is a response function. Let us note that one usually assumes that S(t1 − t2 ) = ηδ(t1 − t2 ), with some η. In relation (2.127), the ordering symbol Ω introduces the following operator ordering: (i) The normal ordering of the operators Eˆ k(−) (x), Eˆ k(+) (x) with the operators Eˆ k(−) (x) to the left of the operators Eˆ k(+) (x). (ii) T+ ordering of the operators Eˆ k(+) (x) and T− ordering of the operators Eˆ k(−) (x). In analogy with (2.122), relation (2.127) becomes ) * n 1 ˆ ˆ Δt)] , Γ(t, Δt) exp[−Γ(t, pn (t, Δt) = O n! Nondispersive Lossless Linear Dielectric where the Ω ordering is simply replaced by the O ordering defined as follows: (−) (−) (+) (+) (i) The normal ordering of the operators Eˆ ks (x), Eˆ kfree (x), Eˆ ks (x), Eˆ kfree (x) with (−) (−) (+) (+) the operators Eˆ ks (x), Eˆ kfree (x) to the left of the operators Eˆ ks (x), Eˆ kfree (x). (+) (+) ˆ ˆ (ii) O+ ordering of the operators E ks (x), E kfree (x) and O− ordering of the opera(−) (−) tors Eˆ ks (x), Eˆ kfree (x). The fulfilling of the conditions (2.123) causes a modification of relation (2.128) as follows: ˆ Δt) = Γ(t, (−) (+) S(t1 − t2 ) Eˆ ks (ri , t1 ) Eˆ ks (ri , t2 ) dt1 dt2 . In the case of mixing the source field light with a coherent free-field reference beam, there is an analogy with the relation (2.126): ˆ Δt) = Γ(t, S(t1 − t2 ) (−) (+) × Ek∗ (ri , t1 )1ˆ + Eˆ ks (ri , t1 ) Ek (ri , t2 )1ˆ + Eˆ ks (ri , t2 ) dt1 dt2 . A generalization of the Wick theorem on transforming a time-ordered product onto a sum of normally ordered terms was performed by Agarwal and Wolf (1970). The quantum theory of the radiation field interacting with atomic sources in the presence of a linear, dispersionless, and absorptionless dielectric with spacedependent refractive index has been applied to the description of the action of a resonator-like cavity with input–output coupling and filled with an active medium (Kn¨oll and Welsch 1992). 2.2.5 Continuum Frequency-Space Description Blow et al. (1990) have formulated the quantum theory of optical wave propagation without recourse to cavity quantization. This approach avoids the introduction of a box-related mode spacing and enables one to use a continuum frequency-space description. In this chapter and in that by Blow et al. (1991) a continuous-mode quantum theory of electromagnetic field has been developed. As usual in the quantum field theory, the box-related modes are considered whose creation and destruction operators satisfy the usual independent boson commutation relations: † ˆ [aˆ i , aˆ j ] = δi j 1. Different modes of the cavity, labelled by i and j, have frequencies given by different integer multiples of the mode spacing Δω. The mode spectrum becomes 2 Origin of Macroscopic Approach continuous as Δω → 0 and in this limit the transformation to continuous-mode operators is convenient: √ ˆ aˆ i → Δω a(ω). (2.134) A complete orthonormal set of functions was considered which may describe states of finite energy. The set is numerable infinite and to each function in it a destruction operator is assigned. Such operators have all the usual properties of the operators of the monochromatic mode. Further specific states of the field have been treated such as coherent states, number states, noise and squeezed states. With the use of noncontinuous operators, a generalization of the single-mode normal ordering theorem was proved. Field quantization in a dielectric has been treated including the material dispersion and the theory has been applied to the pulse propagation in an optical fibre. A comparison with results by Drummond (1990, 1994) would be in order. Let us consider the fields in a lossless dielectric material with the real relative permittivity (ω) and the refractive index n(ω) related by (ω) = [n(ω)]2 . Let us recall the definition of the phase velocity vF (ω) = ω c = k n(ω) and that of the group velocity vG (ω) 1 ∂k 1 ∂ = = [ωn(ω)]. vG (ω) ∂ω c ∂ω The normalization of the field operators is fixed by requirement that the normally ordered total energy density operator Uˆ (z, t) has the diagonal form: ˆ free = A H Uˆ (z, t) dz = ˆ ωaˆ † (ω)a(ω) dω. The field operators are obtained in accordance with the relation ∂ ˆ (+) ∂ ˆ (+) (z, t), Bˆ (+) (z, t) = A (z, t) Eˆ (+) (z, t) = − A ∂t ∂z and with the expansion of the vector-potential operator ˆ (+) (z, t) = vG (ω) 4π 0 cωn(ω)A ˆ λ) exp[−i(ωt − kz)] dk. (k, λ)a(k, Nondispersive Lossless Linear Dielectric Noting that dk = + dω ˆ ˆ λ) = vG (ω)a(ω), , a(k, vG (ω) and taking the polarization to be parallel to the x-axis, it follows from (2.139) that the field operators are ω 4π 0 cAn(ω) n(ω)z ˆ dω × a(ω) exp −iω t − c Eˆ (+) (z, t) = i ˆ (+) ωn(ω) 4π 0 c3 A n(ω)z ˆ dω. × a(ω) exp −iω t − c (z, t) = i Alternatively, the propagation constant can be expanded to the second order in frequency and a partial differential equation can be obtained (cf. Drummond 1990). Assuming a narrow bandwidth, the slowly varying field envelope can be represented ˆ t), which obeys the equation by the operator a(z, i k ∂ 2 ∂ ˆ ˆ t) + ˆ t) = 0, a(z, a(z, ∂z 2 ∂t 2 where k is the second derivative with respect to the frequency of the propagation constant, evaluated at the central frequency. The equation has been simplified using the transformation of envelope into a frame moving with the group velocity. This is necessary for the envelope to be slowly varying. In the classical nonlinear optics the stationary fields have also envelopes, but they seem to be defined otherwise. The treatment of this problem in the noncontinuous basis proceeds from the replacement ˆ t) = a(z, φ j (z, t)ˆc j , where φ j (z, t) are a complete orthonormal set of functions on z and cˆ j are destruction operators obeying the usual commutation relations. The advantage of this treatment is that the functional dependence on z and t is contained in the c-number ˆ t) as in the propagation equation (2.144) functions rather than the operators a(z, for example. It is not emphasized by Blow et al. (1990) that the solution of 2 Origin of Macroscopic Approach equation (2.144) preserves the equal-space, not equal-time, commutators. Similarly, the set of functions φ j (z, t) enjoys the orthonormality and completeness only as the equal-space, but not equal-time, properties. The propagation equation (2.144) now yields the following equations for the noncontinuous basis functions: i k ∂ 2 ∂ φ j (z, t) = 0. φ j (z, t) + ∂z 2 ∂t 2 Finally, the process of photodetection in free space is considered and the results applied to homodyne detection with both local oscillator and signal fields pulsed. The results of sets of measurements in which the photocurrent is integrated over periods T can be predicted by the use of an operator τ +T ˆ = ˆ dt. aˆ † (t)a(t) (2.147) M τ Here τ is the start time of the measurements, the detector is placed at z = 0, and 1 ˆ =√ ˆ a(t) a(ω) exp(−iωt) dω. (2.148) 2π Let us further consider a balanced homodyne detector in which the light beam under study is superposed on a local oscillator by combining them at a 50:50 beam splitter. The measured quantity is the difference in the photocurrents of two detectors placed in the output arms of the beam splitter and it can be represented by the operator (Collett et al. τ +T † ˆ ˆ [aˆ † (t)aˆ L (t) − aˆ L (t)a(t)] dt, (2.149) O=i τ where aˆ L (t) and aˆ L (t) are the continuum creation and destruction operators of the ˆ correspondingly for the signal field. local oscillator field and aˆ † (t) and a(t) For homodyne detection of pulsed signals it is advantageous to use a pulsed local oscillator. The pulsed signal is described by the noncontinuous basis function φ0 (t) and the local oscillator is described by a normalized function φL (t), the field of the local oscillator being in the coherent state |{αL (t)} , where + αL (t) = NL exp(iθL )φL (t), (2.150) with NL the mean total number of photons in the pulse and θL the externally controlled local oscillator phase. Let us recall the definition of a coherent state: ˆ |{α(t)} = D({α(t)})|0 , with ˆ D({α(t)}) = exp ˆ [α(t)aˆ † (t) − α ∗ (t)a(t)] , Nondispersive Lossless Linear Dielectric which is close enough to that by Blow et al. (1990) except for the exchange of space for time. It is assumed that the signal field is described by a set of noncontinuous operators dˆ i at the output of a nonlinear system and the signal field at the input to the system is described by a similar set of operators cˆ i . The action of the nonlinear system is defined by the relations † dˆ 0 = μˆc0 + ν cˆ 0 , dˆ i = cˆ i , i > 0. In the relation (2.149) it is necessary to substitute ˆ = a(t) φi (t)dˆ i . φiL (t)ˆciL , In analogy, we consider aˆ L (t) = where the subscript L only modifies the familiar meanings and φ0L (t) = φL (t). It is shown how the formulation of the quantum field theory is modified for the one-dimensional optical system. The fields are defined in an infinite waveguide parallel to the z-axis, but of finite cross-sectional area A of the rectangular form with sides parallel to the x- and y-axes. The x and y wave-vector components are thus restricted to discrete values and any three-dimensional integral over this spatial region is converted according to d3 k → (2π )2 dk z . A k ,k x On the assumption that the modes with k x = 0 or k y = 0 are vacuum ones, a reduced Hilbert (namely Fock) space can be exploited. The summation in (2.156) can, therefore, be removed and putting k z = k, the other conversions are A δ(k − k ), (2π)2 √ A ˆ λ). ˆ λ) → a(k, a(k, 2π δ (3) (k − k ) → The vector-potential operator has been modified for the dispersive lossless medium and compared with Drummond (1990) and Loudon (1963), the positive-frequency part is 2 Origin of Macroscopic Approach ˆ (+) A × (r, t) = vG (ω) 16π 3 0 cωn(ω) ˆ λ) exp[−i(ωt − k · r)] d3 k, (k, λ)a(k, c |k|. n(ω) The expression (2.159) can be converted to the one-dimensional form easily (as indicated above, cf. (2.140)). McDonald (2001) has considered a variation of the physical situation of “slow light” to show that the group velocity can be negative at central frequency. A Gaussian pulse can emerge from the far side of a slab earlier than it hits the near side and the pulse emission at the far side is accompanied by an antipulse emission, the antipulse propagating within the slab so as to annihilate the incident pulse at the near side. 2.3 Quantum Description of Experiments with Stationary Fields Burnham and Weinberg (1970) found that the measured value of the correlation time between the two optical photons produced in a parametric process was very small, an effect of a practical interest. Laboratory techniques for doing experiments with single photons also have advanced. Since 1985, such photon pairs have become familiar for the study of nonclassical aspects of light (Horne et al. 1990). The process of optical parametric three-wave mixing in a second-order nonlinear medium consists of the coherent interaction between pump, signal, and idler waves. This process may occur as frequency down conversions, specifically as an optical parametric oscillation and an optical parametric amplification. In a travelling-wave setting, the optical parametric generation is called a spontaneous parametric downconversion. The photon pairs (biphotons) produced in parametric down-conversion are useful in experiments concerning fundamental questions of quantum theory. The description of experiments has been facilitated by studies of Campos et al. (1990). The autocorrelation and cross-correlation properties of the signal and idler beams produced in the parametric down-conversion have been studied, e.g. in Joobeur et al. (1994). A unified treatment of the experiment on the interference of a “biphoton with itself” and of other three experiments has been provided by Casado et al. (1997a). A fourth-order interference has been obtained in the four cases, and the uniformity has been achieved also by the use of the Wigner (or Weyl) representation of the field operators. A similar treatment of the famous experiment and of another one has been presented by Casado et al. (1997b). A second-order interference has been Quantum Description of Experiments with Stationary Fields treated in the two cases and the stochastic properties of the pump beam have been respected. The studies of the fundamentals of quantum mechanics underlie such interesting applications as quantum cryptography and quantum computing (Bowmeester et al. 2000). The experiments have become very popular (Shih 2003). Design of experiments for undergraduate students has become feasible (Galvez et al. 2005). Here we return from the Wigner to the Hilbert-space formalism as in Peˇrinov´a and Lukˇs (2003). First we consider the three-dimensional expansion of the operator of a chosen component of the electric vector after Casado et al. (1997a). As in the schematics of the experiments the field is restricted to paths leading to detectors, we introduce one-dimensional expansions of the electric-field operator. We attempt to consider orthogonal modal functions, although we cannot define them everywhere, but only on the paths. We are aware of the dangerous position, where one cannot evaluate the orthogonality property for the lack of a complete definition. In this approach we do not start with the description of the process of parametric down-conversion from a Hamiltonian, but with the response of the output fields of a nonlinear crystal to the input fields (Casado et al. 1997a) when two paths cross such a crystal. Such a response depends also on stochastic properties of the pump beam, which is assumed to be monochromatic however. The experiment on the interference of signal and idler photons (Ghosh et al. 1986) can do with the simple description, when the lack of the second-order interference is derived. In the use of two detectors we consider four paths and modify (double) the description. Nevertheless, we do not reproduce the well-known result. Similarly we proceed in the case of the experiment of Rarity and Tapster (1990), which was also used to test Bell’s inequality using phase and momentum. In contrast, in the case of the experiment of Franson (1989) we are allowed to return to the simple description, as essentially two paths are involved, even though the schematic is more complicated. This experiment was proposed in order to test a Bell inequality for energy and time. Next we deal with induced coherence and indistinguishability in two-photon interference (Zou et al. 1991). In this case the schematic comprises two nonlinear crystals, the number of paths is greater, but since two paths belong to each crystal, the simple description is appropriate. The lack of induced emission made it a “mindboggling” experiment (Greenberger et al. 1993), but the indistinguishability of the paths along which the signal photon arrives at the detector (in fact, the biphoton arrives at the two detectors) is still held for the reason of interference. We may refer to Casado et al. (1997b), where stochastic properties of the pump beam are taken into account. Two experiments are analysed: frustrated two-photon creation by interference, and induced coherence and indistinguishability. Coincidences are not studied and a second-order interference has been obtained in the two cases. Last we mention the frustrated two-photon creation via interference (Herzog et al. 1994) restricting ourselves to the second-order interference and the monochromatic pump. 2 Origin of Macroscopic Approach 2.3.1 Spatio-temporal Descriptions of Parametric Down-Conversion Experiments In the Hilbert-space representation of the light field, the electric vector is represented as a sum of two mutually conjugate operators (Casado et al. 1997a) ˆ (−) (r, t), ˆ t) = E ˆ (+) (r, t) + E E(r, ωk ˆ (+) (r, t) = i E aˆ (t)eik·r , 3 k,λ k,λ 2L k,λ (2.161) (2.162) where L 3 is the normalization volume, aˆ k,λ (t) is the annihilation operator for a photon whose wave vector is k and whose polarization vector is k,λ , and ωk = c|k|. Equations (2.161) and (2.162) correspond to the Heisenberg picture, where all time dependence of the averages comes from the creation and annihilation oper† ators aˆ k,λ (t) and aˆ k,λ (t). In this picture the state of the field is represented by a time-independent statistical operator ρ. ˆ As we do not study experiments involving polarizing devices, we find it convenient to use a scalar approximation well known in classical optics. When Casado et al. (1997a) use the subscripts on the (Wigner representations of) field operators they indicate that the light beam contains frequencies within a range and that “transverse” components of wave vectors are limited by small upper values. We believe that such subscripts indicate which part of the field is considered. The laser theory and, in general, the theory of resonators connect the quantum field with the annihilation operators not via the complex exponentials, but via more general modal functions, which are often related to the device. We suppose that such an approach can be interesting also in our study, after we find modal functions that are connected to the linear devices used and to the mirrors. Obviously, the free evolution of operators aˆ k0 (zeroth-order solution) is transformed into a kind of linear dynamics of the “relevant” component Eˆ ss(+)0 (r, t) of the electric vector via the appropriate modal functions, with ss being any subscript. This process can be formalized by a quadratic Hamiltonian, which differs from the free-field Hamiltonian only by the meaning of the creation and annihilation operators. We restrict ourselves to the operator ρˆ that represents a vacuum state. We suppose that one or two nonlinear crystals involved in the experiment are described in terms of interaction Hamiltonians. The action of the scattering operator on the initial field can be “guessed”. The interaction lasting only for a short time and being spatially confined to the medium suggests to us an appropriate modification of the linear dynamics. We modify also the notation for the resulting field by omitting the initial subscript 0. (i) The process of parametric down-conversion We are going to study the process of parametric down-conversion of light in the Hilbert-space representation. We refer to any of our figures for a sketch of the Quantum Description of Experiments with Stationary Fields setup used for parametric down-conversion. A nonlinear crystal is pumped by a laser beam V , producing a continuum of coloured cones around the axis defined by the pump. In experimental practice two narrow correlated beams, called “signal” Eˆ s and “idler” Eˆ i , are selected by means of apertures, filters, or just the detectors. Let us take the origin of the coordinate system, 0 ≡ 01 , at the centre of the crystal. We treat the pump beam as an intense monochromatic wave represented, in the scalar approximation, by V (r, t) = V ei(k0 ·r−ω0 t) + c.c., where V is a complex amplitude of a pump beam, ω0 is a frequency of the pump beam, k0 is an appropriate wave vector, and c.c. means the complex conjugate term to the previous one. In a product with the identity operator it may be added to the electric-field operator. Now, let us consider two narrow correlated beams, called signal and idler, with average frequencies ωs , ωi and wave vectors ks , ki , respectively, fulfilling the matching conditions ωs + ωi = ω0 , ks + ki = k0 . The response Eˆ s(+) (r, t) and Eˆ i(−) (r, t) of a nonlinear crystal to the input fields (+) (r, t) and Eˆ 0i(−) (r, t) is as Eˆ 0s (+) ˆˆ Eˆ (−) (r, t), (r, t) + e−iω0 t gV G Eˆ s(+) (r, t) = (1ˆˆ + g 2 |V |2 Jˆˆ ) Eˆ 0s 0i ˆE (−) (r, t) = eiω0 t gV G ˆˆ † Eˆ (+) (r, t) + (1ˆˆ + g 2 |V |2 Jˆˆ ) Eˆ (−) (r, t), i ˆˆ and where g is an effective coupling constant, 1ˆˆ is the identity superoperator, G Jˆˆ are antilinear and linear superoperators, respectively, which substitute expansions in annihilation (creation) operators for annihilation (creation) operators ( Jˆˆ ˆˆ yields yields an expansion in the annihilation operators in the first equation and G (+) ˆ † ˆ Eˆ (r, t) ≡ an expansion in the creation operators in the same equation), and G 0s (−) ˆ † ˆ Eˆ (r, t)] . The relation (2.165) can be modified (doubled) so that it relates out[G 0s (+) put fields Eˆ s(+) (r, t), Eˆ i(−) (r, t), j = 1, 2, to input fields Eˆ 0s (r, t), Eˆ 0i(−)j (r, t), j = j j j 1, 2. Since a pointlike crystal is considered (Casado et al. 1997b), it may be interesting to imagine Equations (2.165) at r = 0 without the subscript 0 on the right-hand side. It can occur, but at the cost of other notation. The interaction does not change the field just in front of the crystal, so we can interpret the initial field as the “in” resulting field. As it is almost at the centre of the crystal, it differs negligibly from the initial field just behind the crystal, which becomes the “out” resulting field. In order to determine the detection probabilities in the Hilbert-space representation, we adopt the correlation properties (Casado et al. 1997a). In such a work it has proved convenient to substitute slowly varying amplitudes Fˆ J(+) (r, t) [ Fˆ J(−) (r, t)] for 2 Origin of Macroscopic Approach ˆ (−) the amplitudes Eˆ (+) J (r, t) [ E J (r, t)], the relation between them being Fˆ J(+) (r, t) = eiω J t Eˆ (+) J (r, t), J = s, i. According to Casado et al. (1997a) it is essential to use the following relation, which is still an approximation: r rab ab exp iωa , (2.167) Fˆ (+) (rb , t) = Fˆ (+) ra , t − c c where ωa is some frequency appropriate to a light beam and r ab = ea ·(rb −ra ), with ea being the unit vector in the direction of propagation. Since the vectors whose dot product is taken are usually of the same direction, the magnitude of displacement vector may be evoked. If we consider the signal beam emerging from the crystal at different times t and t , we can use the autocorrelations (Casado et al. 1997a): Fˆ J(−) (r, t) Fˆ J(+) (r, t ) = g 2 |V |2 μ J (t − t), J = s, i. The following autocorrelations, and their complex conjugates, vanish: Fˆ J(+) (r, t) Fˆ J(+) (r, t ) = 0, J = s, i. The relation holds at any point of the outgoing beam, most interestingly just behind the crystal. With respect to the cross correlation, we prefer to characterize the signal and idler field operators just behind the crystal at different times (Casado et al. 1997a): ˆ (+) Fˆ s(+) out (0, t) Fi out (0, t ) = gV ν(t − t). It is useful to know that Fˆ s(+) (r, t) Fˆ i(−) (r, t ) = Fˆ s(−) (r, t) Fˆ i(+) (r, t ) = 0. In the Hilbert-space formalism, the usual theory of detection (by photon absorption) is based on the normal ordering. The joint detection rate is given by Pab (ra , t; rb , t ) = K 0| Eˆ (−) (ra , t) Eˆ (−) (rb , t ) Eˆ (+) (rb , t ) Eˆ (+) (ra , t)|0 (2.172) in the Heisenberg picture, where K = K a K b and K a (K b ) is a constant related to the efficiency of the detector and the energy of a single photon. The well-known property of Gaussian random variables A, B, C, and D, ABC D = AB C D + AC B D + AD BC , Quantum Description of Experiments with Stationary Fields applies not only in the Weyl (Casado et al. 1997a) but also in the normal ordering and entails that the joint detection rate is written in three terms. The first two terms are fourth order in g, while the last term is second order in g. We may discard the first two terms (Casado et al. 1997a) and finally obtain Pab (ra , t; rb , t ) = K | Eˆ (+) (ra , t) Eˆ (+) (rb , t ) |2 . We will determine the visibility V of the intensity interference: V = Rabmax − Rabmin , Rabmax + Rabmin Rabmax = w 2 − w2 Pabmax (τ ) dτ , Rabmin = w 2 − w2 Pabmin (τ ) dτ , with w the coincidence window which we choose to be w = 13 × 10−9 s, defined in terms of the integral , , , w , ,, , 2 , d h ,ν τ + d , ,ν τ + h , dτ , = K M , , , c c c c , −w -2 . σ2 h − d 2 = exp − 2 c 1 d +h d +h σ σ × erf √ + w ∓ erf ∓ √ w − . 2 c c 2 2 (2.177) Here 2 erf(x) = √ π e−t dt, 2 d, h are parameters of an experimental setup, c = 2.998 × 108 ms−1 is the speed of light, ν(τ ) is a Gaussian, 1 1 2 4 √ −σ 2 τ 2 σe , (2.179) ν(τ ) = |ν(τ )| = K π where σ = 1012 s−1 . Let us remember that for σ −1 w, we have erf(±∞) = ±1. Particularly, 2d 2d 1 σ σ erf √ + w + erf √ w − , 2 c 2 2 c σ (2.180) M(0, 0) = erf √ w . 2 d d , c c 2 Origin of Macroscopic Approach (ii) Experiment on the interference of signal and idler photons Let us start with an experiment demonstrating the coherence properties of the parametric down-conversion photon pairs as proposed in Ghosh et al. (1986). It is assumed that ωs = ωi = ω20 and the signal and idler beams are directed to a screen by means of two mirrors. There is no second-order interference between the two beams. When two detectors are put on the screen one can show a fourth-order, or intensity–intensity, interference. As seen from Fig. 2.1, the tracing of the beams is not so evident, and we modify the well-known result (Casado et al. 1997a, Ghosh et al. 1986). A report on the experiment was brief (Ghosh and Mandel 1987). Fig. 2.1 Experimental setup on interference on a screen In what follows, we will specify modal functions and nonlinear dynamics of field operators. We introduce the notation for the points of reflection 0Ms j = (b, 0, z Ms j ), 0Mi j = (−b, 0, z Mi j ), j = 1, 2, where z Ms j = bd bd , z Mi j = , j = 1, 2, 2b − x j 2b + x j with d being the distance from the centre of the crystal to the screen and b being the distance from the axis of the pumping to the mirrors. We consider the initial electric field in the form Eˆ 0(+) (r, t) = V (+) (r, t)1ˆ + ˆ (+) (r, t) , (r, t) + E Eˆ s(+) ij0 j0 where r = (x, y, z), k = (k x , k y , k z ), and (r, t) = Eˆ s(+) j0 vs j k (r)aˆ s j k0 (t), vi j k (r)aˆ i j k0 (t), k∈[k]s j (r, t) = Eˆ i(+) j0 k∈[k]i j Quantum Description of Experiments with Stationary Fields with the orthonormal systems of functions vs j k (r), k ∈ [k]s j , vi j k (r), k ∈ [k]i j , j = 1, 2, vs j k (r)2 = vi j k (r)2 = |vs j k (r)|2 d3 r = ωk , k ∈ [k]s j , |vi j k (r)|2 d3 r = ωk , k ∈ [k]i j . e , es j being a unit vector of The [k]s j is a set of integer multiples of the vector 2π L sj the signal beam at the origin. Similarly for [k]i j . A formal expression for v J k (r), J = s j , i j , j = 1, 2, is of the form ωk exp(ik · r) for r k, z < z M J , k ∈ [k] J , AL ωk exp[ik · (r − 0J )] for (r − 0J ) k , v J k (r) = −i AL z > z M J , k ∈ [k] J , v J k (r) = i where J = s j , i j , j = 1, 2, A is the effective transverse area of the beam, 0J = (2b, 0, 0) for J = s1 , s2 , 0J = (−2b, 0, 0) for J = i1 , i2 , k = (−k x , k y , k z ). In a standard fashion, we associate the signal and idler modal functions (2.187) with fields we denote as Eˆ (+) J 0 (r, t), J = s j , i j , j = 1, 2. After switching on the nonlinear interaction, part of the field is not influenced: (r, t) = Eˆ s(+) (r, t), Eˆ i(+) (r, t) = Eˆ i(+) (r, t) for z < 0, Eˆ s(+) j j0 j j0 whereas for z > 0 provided that g|V | 1, the perturbative approximation of the solution of the Heisenberg equations of motion that retains terms up to g 2 can be written as ˆˆ Eˆ (−) (r, t) + g 2 |V |2 Jˆˆ Eˆ (+) (r, t), (r, t) = Eˆ s(+) (r, t) + e−iω0 t gV G Eˆ s(+) j ij0 j sj0 j j0 ˆˆ Eˆ (−) (r, t) (r, t) = Eˆ i(+) (r, t) + e−iω0 t gV G Eˆ i(+) j sj0 j j0 (r, t), j = 1, 2, + g 2 |V |2 Jˆˆ j Eˆ i(+) j0 Eˆ s(−) (r, t) = [ Eˆ s(+) (r, t)]† , Eˆ i(−) (r, t) = [ Eˆ i(+) (r, t)]† , j0 j0 j0 j0 ˆˆ and Jˆˆ are antilinear and linear superoperators, respectively, which substitute and G j j the expansions in annihilation operators ( Jˆˆ j yields an expansion in the annihilation ˆˆ yields an expansion in creation operators) for annihilation (creation) operators (G j operators). Compare Casado et al. (1997a), where G j and J j are appropriate linear 2 Origin of Macroscopic Approach operators acting on functions of the argument r for complex-valued functions. The ˆˆ and Jˆˆ have the properties superoperators G j j Δt ˆˆ aˆ (t) = G f (k, k )u − ω − ω ) (ω j j0k 0 k k 2 [k ] sj × exp [it(ω0 − ωk − ωk )] aˆ j0k (t) for k ∈ [k]i j , Jˆˆ j aˆ j0k (t) = f (k, k ) f ∗ (k , k ) [k ]i j [k ]s j Δt Δt ×u (ωk + ωk − ω0 ) u (ωk − ωk ) exp [it(ωk − ωk )] 2 2 (2.193) × aˆ j0k (t) for k ∈ [k]s j , respectively, with sin x ix e , x aˆ j0k (t) = aˆ j0k (0)e−iωk t . u(x) = (2.194) (2.195) Supposing that in the sense of classical nonlinear optics, f (k, k ) is a distribution with a support determined by the condition ω0 − ωk − ωk = 0, we easily obtain that † ˆˆ aˆ (t) = G f (k, k )aˆ j0k (t) for k ∈ [k]i j , j j0k [k ]s j Jˆˆ j aˆ j0k (t) = [k ]s j ⎡ ⎣ ⎤ f (k, k ) f ∗ (k , k )⎦ aˆ j0k (t) for k ∈ [k]s j , [k ]i j which is a great unexpected simplification. Further we will express the intensity correlations that have been determined in the experiment. Introducing (r, t) = exp (iωs t) Eˆ s(+) (r, t), Fˆ s(+) j we express the field at a point r j , j = 1, 2, on the screen as ω0 r s rs j j Fˆ (+) (r j , t) = Fˆ s(+) exp i 0, t − out j c 2c ω0 r i ri j j (+) ˆ exp i , j = 1, 2, + Fi j out 0, t − c 2c Quantum Description of Experiments with Stationary Fields where rs j = rs (x j ) = ri j = ri (x j ) = 2 zM + b2 + s j 2 zM + b2 + i j / (d − z Ms j )2 + (b − x j )2 , j = 1, 2, / (d − z Mi j )2 + (b + x j )2 , and the subscript out indicates that the field behind the nonlinear crystal is considered. Hence, we can obtain the relations (2.209) below. By taking into account the correlation relations (0, t) Fˆ i(+) (0, t ) = Fˆ i(+) (0, t ) Fˆ s(+) (0, t) Fˆ s(+) 1 out 2 out 2 out 1 out = gV ν(t − t), we get (cf. Casado et al. 1997a) Fˆ (+) (r1 , t1 ) Fˆ (+) (r2 , t2 ) ω rs ri 0 = gV ν t2 − t1 + 1 − 2 exp i (ri2 + rs1 ) c c 2c ω rs2 ri1 0 + ν t1 − t2 + − exp i (ri1 + rs2 ) . c c 2c Assuming that the beams with different subscripts j are mutually uncorrelated, we finally get (cf. Casado et al. 1997a) P12 (r1 , t + τ1 ; r2 , t + τ2 ) ≈ K g 2 |V |2 , rs rs ri ,,2 ,, ri ,,2 , × ,ν τ2 − τ1 + 1 − 2 , + ,ν τ1 − τ2 + 2 − 1 , c c c c rs rs ri ri + 2Re ν τ2 − τ1 + 1 − 2 ν ∗ τ1 − τ2 + 2 − 1 c c c c ω 0 × exp i (ri2 + rs1 − ri1 − rs2 ) , 2c where K is a constant related to the efficiency of the detectors, K = K1 K2, K1 = 2η1 2η2 , K2 = , ω0 ω0 η J , J = 1, 2, is the efficiency of the detector D J . 2 Origin of Macroscopic Approach The visibility is expressed by the formula (2.175), where Rsimax + Rsimin = 2g 2 |V |2 rs1 − ri2 rs1 − ri2 ri1 − rs2 ri1 − rs2 × M , +M , , c c c c rs1 − ri2 ri1 − rs2 Rsimax − Rsimin = 4g 2 |V |2 M , . c c (2.206) (2.207) We may ask whether the visibility has its proper meaning for all x1 , x2 , whether it is associated only with the extremes of the detection rate, i.e. x1 and x2 for which Rsi = Rsimax or Rsi = Rsimin . By relation (2.204) and the choice (2.179) we are interested in C = ±1, where ω0 C = cos 2 ri + rs2 ri2 + rs1 − 1 c c × 109 Hz. with ω0 = 2πc 351 The original formulae for rs j , ri j (instead of the relation (2.201)) were as (Ghosh et al. 1986) / (2b − x j )2 + d 2 , j = 1, 2, / ri j = ri (x j ) = (2b + x j )2 + d 2 . rs j = rs (x j ) = In Fig. 2.2 a short period of the oscillations of the cosine is depicted after Ghosh et al. (1986). An equal phase is assumed on any of the lines y2 ≡ x2 −x1 = constant. The short oscillation period corresponds to the change in the signed distance Fig. 2.2 The dependence of the cosine C of the phase on the position coordinates x1 and x2 for d = 1 and b = 0.1. The analysis of the setup in Fig. 2.1 after Ghosh et al. (1986). The cosine C of the phase depends only on the signed distance of the detectors Quantum Description of Experiments with Stationary Fields y2 ≡ x2 − x1 . As obvious from Fig. 2.3, the complement of the visibility, 1 − V , depends only on the coordinate of the point amid the detectors D1 and D2. An equal visibility V is assumed on the lines y1 = 12 (x1 + x2 ) = constant. For y1 = 0.0015 the visibility almost vanishes. Fig. 2.3 The complement of the visibility, 1 − V , versus the position coordinates x1 and x2 for d = 1 and b = 0.1. The analysis of the setup in Figure 2.1 after Ghosh et al. (1986) (iii) The experiment of Rarity and Tapster Rarity and Tapster (1990) demonstrated a violation of Bell’s inequality using phase and momentum of photon pairs instead of polarization as in previous experiments. They selected two signal beams of the same colour (the frequency ωs ) and two idler beams also of the same colour (frequency ωi = ωs ). They directed one of the signal beams and one of the idler beams to a mirror M1 and another mirror M2 (see Fig. 2.4). On the paths to the mirror M2 they increased the phase of the signal by ϕs and that of the idler by ϕi . They coherently mixed the two signals and idlers. Fig. 2.4 Experimental setup of Rarity and Tapster Now we will determine nonlinear dynamics of field operators. We assume the electric field in the form (2.161), (2.162), and (2.165), with another orthonormal system of functions vs j k (r), k ∈ [k]s j , vi j k (r), k ∈ [k]i j , j = 1, 2. The distinction is in the sets [k]s j and [k]i j , which are appropriate to the experimental setup. 2 Origin of Macroscopic Approach We will give a specification of v J k (r), J = s j , i j , j = 1, 2. We associate points 0BSi , 0BSs with the beam splitters. We introduce the notation z BSi , z BSs such that 0BSi = (0, 0, z BSi ), 0BSs = (0, 0, z BSs ). We respect the phase shifters and the spots of reflection on the mirror M2 with the notation z PSi , z Mi2 , z PSs , z Ms2 . We have located the phase shifter for the idler beam, the spot of reflection of the idler beam, the phase shifter for the signal beam, and the spot of reflection of the signal beam, respectively, at 0 1 1 z PSs z PSi , 0, z PSi , (−b, 0, z Mi2 ), −b , 0, z PSs , (−b, 0, z Ms2 ). −b z Mi2 z Ms2 The origin 0 has its image 0s2 = 0i2 = (−2b, 0, 0) in the mirror M2 . Then we assume that the mirror M1 is simple. We respect the spots of reflection on the mirror M1 with the notation z Mi1 , z Ms1 . We have located the spot of reflection of the idler beam and the spot of reflection of the signal beam, respectively, at (b, 0, z Mi1 ), (b, 0, z Ms1 ). The origin 0 has its image 0s1 = 0i1 = (2b, 0, 0) in the mirror M1 , which is assumed to be simple so far. We specify that ωk (2.213) exp(ik · r) for r k, z < z M J , k ∈ [k] J , AL ωk ω J1 δx exp ik · (r − 0 J ) + for (r − 0J ) k , v J k (r) = −i AL c (2.214) z M J < z < z BS J1 , k ∈ [k] J , v J k (r) = i where δx is a path-length difference, J = s1 , i1 , J1 = s, i, k = (−k x , k y , k z ), ωk (2.215) exp(ik · r) for r k, z < z PS J1 , k ∈ [k] J , AL ωk v J k (r) = i exp[i(k · r + ϕ J1 )] for r k, AL z PS J1 < z < z M J , k ∈ [k] J , (2.216) ωk v J k (r) = −i exp{i[k · (r − 0J ) + ϕ J1 ]} for (r − 0J ) k , AL z M J < z < z BS J1 , k ∈ [k] J , (2.217) v J k (r) = i where J = s2 , i2 , J1 = s, i. Quantum Description of Experiments with Stationary Fields Beam splitter BSs with the transmissivities ts , ts and reflectivities rs , rs : Input values are ωk ωs δx vs1 kin (0BSs ) = −i exp ik · (0BSs − 0s1 ) + , k ∈ [k]s1 , AL c ωk vs2 kin (0BSs ) = −i exp{i[k · (0BSs − 0s2 ) + ϕs ]}, k ∈ [k]s2 . AL (2.218) (2.219) Under the assumption δx = 0 we can perform the exchange k ↔ k and write the output values for k ∈ [k]s1 ωk AL ωk −i AL ωk vs2 k out (0BSs ) = −i AL ωk −i AL vs1 kout (0BSs ) = −i exp{ik · (0BSs − 0s1 )}ts exp{i[k · (0BSs − 0s2 ) + ϕs ]}rs , exp{ik · (0BSs − 0s1 )}rs exp{i[k · (0BSs − 0s2 ) + ϕs ]}ts . Performing the exchange k ↔ k in (2.221), we have for (r − 0s1 ) k , z > z BSs , k ∈ [k]s1 , ωk exp[ik · (r − 0s1 )]ts AL ωk −i exp{i[k · (r − 0BSs ) + k · (0BSs − 0s2 ) + ϕs ]}rs , AL vs1 k (r) = −i and for (r − 0s2 ) k , z > z BSs , k ∈ [k]s2 , ωk exp{i[k · (r − 0BSs ) + k · (0BSs − 0s1 )]}rs AL ωk −i exp{i[k · (r − 0s2 ) + ϕs ]}ts . AL vs2 k (r) = −i Beam splitter BSi with the transmissivities ti , ti and reflectivities ri , ri can be described analogously: In (2.222) and (2.223) we perform the replacement s ↔ i. In a standard fashion, we associate the modal functions, e.g. (2.213), (2.214), (2.215), (2.216), (2.217), (2.222), and (2.223), with fields we denote Eˆ (+) j0 (r, t), J = s1 , s2 , i1 , i2 . Nonlinear dynamics is described in the same way as for the experiment on the interference of signal and idler photons by relations (2.188), (2.189), (2.190), 2 Origin of Macroscopic Approach (2.192), and (2.193). It allows the fields with z < 0 to stay initial and those with z > 0 at least to obey the same rules we have used to calculate the modal functions. We introduce the slowly varying field operators Fˆ J(+) (r, t) = exp (iω J1 t) Eˆ (+) J (r, t), where J = s1 , s2 , J1 = s and J = i1 , i2 , J1 = i, for expressing the intensity correlations. The field operators at the signal and idler detectors placed at rs , ri , respectively, are ωr ϕs rs s s ˆ (+) F (r , t) = t + + iϕs 0, t − exp i Fˆ s(+) s s s2 out 2 c ωs c ωs r s rs exp i , 0, t − + rs Fˆ s(+) 1 out c c ˆF (+) (ri , t ) = ti Fˆ (+) 0, t − ri + ϕi exp i ωiri + iϕi i2 i2 out c ωi c ri ωi r i exp i , 0, t − + ri Fˆ i(+) 1 out c c with 0 the centre of the coordinate system, rs and ri the path lengths of the lower signal and idler beams, respectively, to the appropriate detector. In the experiment under consideration both upper paths were modified by δx, since the upper and the lower mirrors were not at exactly the same distance from the pumping beam axis (Casado et al. 1997a) . The mirror above the pumping beam axis is not simple, but a mirror assembly which enables one to change δx (Rarity and Tapster 1990). In the following, we assume δx = 0. (r, t) is correlated with the We will take into account that the signal field Fˆ s(+) 1 (+) (+) (r, t), but Fˆ s(+) (r, t) is idler field Fˆ i2 (r, t) and also Fˆ s2 (r, t) is correlated with Fˆ i(+) j 1 (+) ˆ uncorrelated with F (r, t), j = 1, 2, these pairs not fulfilling matching conditions. ij If we consider that the time intervals rcs − rci − ωϕss , rcs − rci + ωϕii are small in comparison with the coherence time of signal and idler given by the function ν(τ ), we obtain r ri s (+) (+) ˆ ˆ Fs2 (rs , t) Fi2 (ri , t ) ≈ ri ts exp i ωs + ωi + ϕs c c r ri s + rs ti exp i ωs + ωi + ϕi ν(t − t). (2.227) c c From this we have Psi (rs , t + τ ; ri , t + τ ) = K g 2 |V |2 |ν (τ − τ )|2 × |ri ts |2 + |rs ti |2 + 2Re{r∗s ts ri ti∗ exp[i(ϕs − ϕi )]} . Quantum Description of Experiments with Stationary Fields The visibility is expressed by the formula (2.175), where Rsimax + Rsimin = 2g 2 |V |2 M (0, 0) |ts ri |2 + |rs ti |2 , Rsimax − Rsimin = 8g 2 |V |2 M (0, 0) |ts ri ||rs ti |. The dependence of the visibility on the beam splitters + with the transmissivities + |ts | ∈ [0, 1], |ti | ∈ [0, 1] and the reflectivities |rs | = 1 − |ts |2 , |ri |= 1 − |ti |2 is plotted in Fig. 2.5. Fig. 2.5 The visibility V versus moduli of the amplitude transmissivities ts , ti from the “lower side” of the beam splitters for signal and idler beams for δx = 0 Unfortunately, the quantity under consideration is not dependent on the characteristic of the nonlinear optical process. The surface plotted has a boundary condition zero. It may be equal to unity in the sense of the equality |ts ri | = |ti rs |. The maximum is attained on the line segment connecting the points |ts | = 0, |ti | = 0 and |ts | = 1, |ti | = 1. The interference manifests itself as a cosine variation of the coincidence rate with ϕs − ϕi . (iv) The experiment of Franson Franson (1989) proposed a test of “Bell’s inequality for energy and time”. He arranged two Mach–Zehnder interferometers and let a signal and an idler beam each pass through an interferometer (see Fig. 2.6). The experiment was originally proposed for an atom and free-space propagation. The coincidence detection shows a fourth-order interference as a cosine dependence on 1c (ωs ΔL s + ωi ΔL i ), where ΔL s (ΔL i ) is the length difference between the long (short) route of the signal (idler) beam through the corresponding interferometer. In the past few years several groups have performed experiments of that type. In Tapster et al. (1994) Franson’s experiment has been adapted to parametric down-conversion and fibres. 2 Origin of Macroscopic Approach Fig. 2.6 Experimental setup of Franson’s type. For simplicity Eˆ iBS 0 ≡ Eˆ iBS1s 0 and Eˆ sBS 0 ≡ Eˆ sBS1i 0 For the description of the nonlinear dynamics of field operators, we consider the initial electric-field in the form (+) Eˆ 0(+) (r, t) = V (+) (r, t)1ˆ + Eˆ s0 (r, t) + Eˆ i(+) (r, t) BS 0 1s (r, t), + Eˆ i0(+) (r, t) + Eˆ s(+) BS 0 1i where Eˆ (+) J 0 (r, t) = v J k (r)aˆ J k0 (t), J = s, i, iBS1s , sBS1i . k∈[k] J The modal functions as restricted to linear segments are ωk ik·r vsk (r) = i e for r k, z < z BS1s , k ∈ [k]s , AL ωk ik·r viBS1s k (r) = i e for (r − 0BS1s ) k, z > z BS1s , k ∈ [k]iBS1s . AL (2.233) (2.234) Here 0BS1s is the centre of the beam splitter BS1s , z BS1s is the corresponding z-coordinate, and [k]iBS1s is the set of wave vectors k of the beam corresponding to the unused input port of this beam splitter. The modal functions at the output of this beam splitter are ωk ik·r ωk ik·(r−0BS1s iBS ) 1s r e ts + i e vsk (r) = i s AL AL for r k, z BS1s < z < z BS2s , k ∈ [k]s , Quantum Description of Experiments with Stationary Fields ωk ik·(r−0BS s ) ωk ik·r 1s r + i viBS1s k (r) = i e e ts s AL AL for (r − 0BS1s ) k, z M1s < z < z BS1s , k ∈ [k]iBS1s , where 0BS1s iBS , 0BS1s s are chosen such that 1s k · (r − 0BS1s iBS ) = k · (r − 0BS1s ) + k · 0BS1s , k ∈ [k]s , k · (r − 0BS1s s ) = k · (r − 0BS1s ) + k · 0BS1s , k ∈ [k]iBS1s . Here z M1s is the z-coordinate of the centre of the signal mirror and z BS2s is the z-coordinate of 0BS2s , the centre of the beam splitter BS2s . After the reflection from the first mirror, the modal function is viBS1s k (r) = −i −i ωk ik ·(r−0M BS s ) 1s 1s r e s AL ωk ik ·(r−0M ) 1s t for (r − 0 e s M1s ) k , AL z M1s < z < z M2s , k ∈ [k]iBS1s , where z M2s is the z-coordinate of the centre of the second mirror and 0M1s BS1s s and 0M1s are chosen such that k · (r − 0M1s BS1s s ) = k · (r − 0M1s ) + k · (0M1s − 0BS1s s ), k ∈ [k]iBS1s , k · (r − 0M1s ) = k · (r − 0M1s ) + k · 0M1s , k ∈ [k]iBS1s . (2.240) (2.241) After the reflection from the second mirror, the modal function is ωk −ik·(r−0M M BS s ) 2s 1s 1s r viBS1s k (r) = i e s AL ωk −ik·(r−0M M ) 2s 1s t for (r − 0 +i e s M2s ) −k, AL z M2s < z < z BS2s , k ∈ [k]iBS1s , where 0M2s M1s BS1s s and 0M2s M1s are chosen such that −k · (r − 0M2s M1s BS1s s ) = −k · (r − 0M2s ) +k · (0M2s − 0M1s BS1s s ), k ∈ [k]iBS1s , −k · (r − 0M2s M1s ) = −k +k · (0M2s − 0M1s ), k · (r − 0M2s ) ∈ [k]iBS1s . 2 Origin of Macroscopic Approach The output modal functions for the beam to be detected are ωk ik·r 2 e ts AL - . ωk ik·(r−0BS1s iBS ) ωk ik·(r−0BS M M ) 1s + i 2s 2s 1s + i e e ts rs AL AL vsk (r) = i ωk ik·(r−0BS M M BS s ) 2 2s 2s 1s 1s r , for r k, z > z BS2s , k ∈ [k]s , e s AL where 0BS2s M2s M1s and 0BS2s M2s M1s BS1s s are chosen such that k · (r − 0BS2s M2s M1s ) = k · (r − 0BS2s ) − k · (0BS2s − 0M2s M1s ), k ∈ [k]s , (2.246) k · (r − 0BS2s M2s M1s BS1s s ) = k · (r − 0BS2s ) −k · (0BS2s − 0M2s M1s BS1s s ), k ∈ [k]s . The output modal functions for the second beam are ωk −ik·(r−0BS2s BS1s iBS ) 2 1s r e s AL - . ωk −ik·(r−0BS s ) ωk −ik·(r−0M M BS ) 2s 2s 1s 1s + i +i e e rs ts AL AL ωk −ik·(r−0M M ) 2 2s 1s t for (r − 0 +i e BS1s ) −k, s AL z > z BS2s , k ∈ [k] iBS1s , viBS2s k (r) = i where 0BS2s BS1s iBS and 0BS2s s are chosen such that 1s −k · (r − 0BS2s BS1s iBS ) = −k · (r − 0BS2s ) 1s +k · (0BS2s − 0BS1s iBS ), k ∈ [k]iBS1s . 1s −k · (r − 0BS2s s ) = −k · (r − 0BS2s ) + k · 0BS2s , k ∈ [k]iBS1s . (2.249) (2.250) In a standard fashion, we associate the modal functions which travel to the above detector, (2.233), (2.234), (2.235), (2.236), (2.239), (2.242), (2.245), and (2.248), (+) (r, t), Eˆ i(+) (r, t). Exchanging s ↔ i, we introduce with fields we denote as Eˆ s0 BS1s 0 modal functions, which travel to the lower detector. We relate them with fields we denote as Eˆ i0(+) (r, t), Eˆ s(+) (r, t). Switching on the nonlinear interaction, we find the BS1i 0 field to obey the relations (2.189) and (2.190). Counter to propagation the field stays initial and along with propagation it at least obeys the rules we have used to generate the modal functions. For simplicity, it is assumed that t J = tJ = r J = rJ = √12 , J = s, i. Quantum Description of Experiments with Stationary Fields For the calculation of intensity correlations determined in the experiment, we introduce the slowly varying field operators Fˆ s(+) (r, t) = exp (iωs t) Eˆ s(+) (r, t), Fˆ i(+) (r, t) = exp (iωi t) Eˆ i(+) (r, t). The field operators at the signal and idler detectors placed at rs , ri , respectively, are Fˆ s(+) (rs , t) L s,long |rBS1s | |rBS1s | 1 |rs − rBS2s | (+) ˆ Fsout 0, t − − − exp iωs = 2 c c c c L s,long L s,long |rs − rBS2s | (+) ˆ − i FiBS in 0BS1s , t − − exp iωs 1s c c c |rBS1s | L s,short |rBS1s | |rs − rBS2s | (+) ˆ + Fsout 0, t − − − exp iωs c c c c L s,short |rs − rBS2s | L s,short + i Fˆ i(+) , t − − exp iω 0 BS1s s BS1s in c c c |rs − rBS2s | , (2.252) × exp iωs c (2.253) Fˆ i(+) (ri , t) = Fˆ s(+) (rs , t),, . , s↔i Let us denote L s,short (L i,short ) a length of the short arm of the interferometer for the signal (idler) beam. Supposing that ΔL s ≡ L s,long − L s,short (ΔL i ≡ L i,long − L i,short ) is much greater than the coherence length of the signal (idler) in order to avoid the second-order interference, we get (Casado et al. 1997a) 1 Fˆ s(+) (rs , t + τ ) Fˆ i(+) (ri , t + τ ) = gV 4 i × ν(τ − τ ) exp ωs |rBS1s | + L s,long + |rs − rBS2s | c i + ωi |rBS1i | + L i,long + |ri − rBS2i | c ΔL i − ΔL s i + ν τ − τ + exp ωs |rBS1s | + L s,short c c i + |rs − rBS2s | + ωi |rBS1i | + L i,short + |ri − rBS2i | , c 2 Origin of Macroscopic Approach provided that |rBS1J | + L J,short + |r J − rBS2J | is the same for J = s and J = i. We finally obtain Psi (rs , t + τ ; ri , t + τ ) = 1 2 2 K g |V | |ν(τ − τ )|2 16 , , , ΔL i − ΔL s ,,2 + ,,ν τ − τ + , c ΔL i − ΔL s ∗ − 2Re ν(τ − τ )ν τ − τ + c i × exp (ωs ΔL s + ωi ΔL i ) . (2.255) c The visibility is given in (2.175), where Rsimax + Rsimin 1 ΔL i − ΔL s ΔL i − ΔL s = g 2 |V |2 M (0, 0) + M , , 8 c c Rsimax − Rsimin 1 2 2 ΔL i − ΔL s = g |V | M 0, . 4 c The dependence of the visibility on the difference (ΔL i −ΔL s) is plotted in Fig. 2.7. The variation of the visibility is due to the function M dc , hc , but the function erf does not contribute to it. The distance between the points of inflection is 2 σc = 6×10−4 m. The interference manifests itself as a cosine variation of the coincidence rate with 1c (ωs ΔL s + ωi ΔL i ). Fig. 2.7 The visibility V versus the difference (ΔL i − ΔL s ) ∈ [−10−3 , 10−3 ] measured in metres Quantum Description of Experiments with Stationary Fields (v) Induced coherence and indistinguishability in two-photon interference Zou et al. (1991) performed an experiment in which fourth-order interference is observed in the superposition of signal photons from two coherently pumped parametric down-conversion crystals, when the paths of the idler photons are aligned. The experimental setup is outlined in Fig. 2.8, in which two nonlinear crystals NL1 and NL2 are optically pumped by two mutually coherent, classical pump waves of complex amplitudes V j (r, t) = V j exp[i(k0 · r − ω0 t)], j = 1, 2. We assume that V1 = V2 exp(ik0 · 02 ) = V . On the contrary, there were similar crystals in the experiment, but we consider more general ones. The parametric down-conversion occurs at both crystals, each with the emission of a signal photon and an idler photon. We are interested in the joint detection rate of the detectors Ds and Di when the trajectories of the two idlers i1 , i2 are aligned and the path difference between the two signals is varied slightly. Fourth-order interference disappears when the idlers are misaligned or separated by a beam stop. Fig. 2.8 Experimental setup on induced coherence without induced emission. For simplicity, Eˆ sBS 0 ≡ Eˆ sBSi 0 In what follows, we will specify modal functions and the nonlinear dynamics of field operators. We consider the initial electric field in the form Eˆ 0(+) (r, t) = V (+) (r, t)1ˆ + (r, t) + Eˆ i0(+) (r, t) + Eˆ s(+) (r, t), Eˆ s(+) j0 BS 0 i Eˆ (+) J 0 (r, t) = v J k (r)aˆ J k0 (t), J = s1 , s2 , i, sBSi . k∈[k] J The modal functions as restricted to linear segments are ωk ik·r vs1 k (r) = i e for r k, z < z Ms , k ∈ [k]s , AL ωk ik·r vs2 k (r) = i e for (r − 02 ) k, z < z BSs , k ∈ [k]s , AL (2.261) (2.262) 2 Origin of Macroscopic Approach ωk ik ·(r−0M ) s for (r − 0 e Ms ) k , AL z Ms < z < z BSs , k ∈ [k]s . vs1 k (r) = −i Here 0Ms and k are used as in the definition of modal functions related to Fig. 2.1 and k has the same meaning. Here, as in the definitions related to Fig. 2.4, we specify that 0Ms has been chosen so that k · (r − 0Ms ) = k · (r − 0Ms ) + k · 0Ms , k ∈ [k]s . Similarly as in (2.222) and (2.223) for t = t = √1 , 2 r = r = , we have ωk ik ·(r−0M ) 1 ωk ik ·(r−0BS ) i s √ s √ +i e e vs1 k (r) = −i AL AL 2 2 for (r − 0Ms ) k , z > z BSs , k ∈ [k]s , ωk ik·(r−0 ) i ωk ik·r 1 BS M s s vs2 k (r) = −i e e √ √ +i AL AL 2 2 for (r − 02 ) k, z > z BSs , k ∈ [k]s , where 0BSs and 0BSs Ms have been chosen so that, respectively, k · (r − 0BSs ) = k · (r − 0BSs ) + k · 0BSs , k ∈ [k]s , k · (r − 0BSs Ms ) = k · (r − 0BSs ) + k · 0Ms , k ∈ [k]s ; ωk ik·r e for r k, z < z BSi , k ∈ [k]i , vik (r) = i AL ωk ik·r vsBSi k (r) = i e for (r − 0BSi ) k, z > z BSi , k ∈ [k]sBSi , AL ωk ik·r ωk ik·(r−0BS s ) i BSi r e t+i e vik (r) = i AL AL for r k, z > z BSi , k ∈ [k]i , ωk ik·(r−0BS i ) ωk ik·r i r + i vsBSi k (r) = i e e t AL AL for (r − 0BSi ) k, z < z BSi , k ∈ [k]sBSi , (2.267) (2.268) (2.269) (2.270) where k is defined relative to the beam splitter BSi and 0BSi i and 0BSi sBSi have been chosen so that, respectively, k · (r − 0BSi sBSi ) = k · (r − 0BSi ) + k · 0BSi i for k ∈ [k]i , k · (r − 0BSi i ) = k · (r − 0BSi ) + k · 0BSi for k ∈ [k]sBSi . (2.273) (2.274) Quantum Description of Experiments with Stationary Fields The modal functions (2.269), (2.270), (2.271), and (2.272) travel to the lower detector. We relate them with fields we denote as Eˆ i0 (r, t), Eˆ sBSi 0 (r, t). Switching on the first nonlinear interaction, we find the field to obey the relations (2.189), (2.190), and (2.191) for j = 1 with i1 → i. Counter to propagation the field remains initial and along with propagation it at least obeys the rules we have used to generate the modal functions. Now we would like to interpret the subscript 0 not as the order of solution but as a number of the initial stage. Since the stage is followed by the first stage, we would like to modify the relations (2.189), (2.190), and (2.191) so that they confess the first-stage operators on the left-hand side, which would lead to the use of the subscript 1. Switching on the second nonlinear interaction, we find the field to obey the relations (2.189), (2.190), and (2.191) for j = 2 with i2 → i, but the first-stage field operators to have been substituted for the operators on the right-hand side. The ˆˆ and Jˆˆ depends on the nonlinear crystal located at 0 . action of the operators G 2 2 2 It also depends on the pump beam at the same crystal. Counter to propagation the field stays first stage and along with propagation it still at least obeys the rules to generate the modal functions. Further we will express the intensity correlations that have been determined in the experiment. Again, we introduce the operators (2.224), where J = s1 , s2 , i, J1 = s, s, i. The field operators at the signal and idler detectors placed at rs , ri , respectively, are ˆFs(+) (rs , t) = √1 − i Fˆ s(+) 01 , t − d exp iωs d 2 1 c c 2 h h + Fˆ s(+) exp iωs , 02 , t − 2 c c ˆF (+) (ri , t ) = Fˆ (+) 02 , t − l exp iωi l . i i c c We still assume different crystals and derive a slight generalization of the wellknown experiment (Casado et al. 1997a). By taking into account the correlation relations ˆ (+) (02 , t ) = tgV1 ν1 t − t − f exp iωi f , (0 , t) F Fˆ s(+) 1 i 1 c c (02 , t) = gV2 ν2 (t − t), Fˆ i(+) (02 , t ) Fˆ s(+) 2 (2.277) (2.278) 2 Origin of Macroscopic Approach we get (Casado et al. 1997a) gV Fˆ s(+) (rs , t) Fˆ i(+) (ri , t ) = √ 2 2 l f d i × − itν1 τ − τ − − + exp [ωs d + ωi (l + f )] c c c c l h i + ν2 τ − τ − + (2.279) exp [ωs h + ωil] . c c c We finally obtain 1 Psi (rs , t + τ ; ri , t + τ ) = K g 2 |V |2 2 , , , , , l l f d ,,2 ,, h ,,2 , × ,tν1 τ − τ − − + + ,ν2 τ − τ − + c c c , c c , l l f d h + 2Im tν1 τ − τ − − + ν2∗ τ − τ − + c c c c c i × exp [ωs (d − h) + ωi f ] . (2.280) c We have hopefully corrected the factor, changed a sign with respect to the reflection from the mirror, and changed signs of the argument of ν(τ ) relying on the identity ν(τ ) = ν(−τ ), where ν1 (τ ) = ν2 (τ ) ≡ ν(τ ). The visibility is expressed by the formula (2.175), where for d = l + f , l = h Rsimax + Rsimin = g 2 |V |2 M (0, 0) |t|2 + 1 , Rsimax − Rsimin = 2g |V | M (0, 0) |t|. The maximum visibility is equal to unity and, in general, it depends on the transmissivity of the beam splitter, as can be seen from Fig. 2.9. The interference manifests itself as a cosine variation of the coincidence rate with ωc0 f . (vi) Frustrated two-photon creation via interference Herzog et al. (1994) performed a simple experiment interpreted as showing interference of two processes. They placed three mirrors in the three beams, laser, signal, and idler, that emerge from a nonlinear crystal, NL, and put a detector into the reflected idler beams (see Fig. 2.10). In the standard quantum interpretation a pair of correlated photons can be created either by the laser beam travelling from left to right or when the reflected laser beam travels from right to left. In both cases the idler photon may arrive at the detector. As the two possibilities are indistinguishable Quantum Description of Experiments with Stationary Fields Fig. 2.9 The visibility V versus the modulus of the amplitude transmissivity |t| ∈ [0, 1] of the beam splitter BSi . It is assumed that d =l + f,l = h Fig. 2.10 Experimental setup on frustrated two-photon creation via interference they interfere and the counting rate oscillates depending on the position of a chosen mirror. Accordingly the description of the pump beam is given by V (+) (r, t) = V ei(k0 ·r−ω0 t) − V ei[−k0 · (r−2l0 e0 )−ω0 t] = V ei(k0 ·r−ω0 t) − V eiϕ0 ei(−k0 ·r−ω0 t) for x = 0, y = 0, z < l0 , where e0 is the direction vector of the forward-propagating pump beam, ϕ0 = 2|k0 |l0 = 2 ω0cl0 . The modal functions are ωk ik·r ωk iϕs −ik·r e −i e e vsk (r) = i 2AL 2AL for r k, z < z Ms , k ∈ [k]s , where ϕs = 2 ωcs ls , z Ms = e0 · 0Ms . The modal functions vik (r) are expressed similarly. Associating the modal functions with quantum fields, we must consider that (+) (+) (+) (r, t) = Eˆ sF0 (r, t) + Eˆ sB0 (r, t), Eˆ s0 2 Origin of Macroscopic Approach ωk ik·r e aˆ sk0 (t), 2AL k∈[k]s ωk iϕs −ik·r (+) aˆ sk0 (t) e e Eˆ sB0 (r, t) = −i 2AL k∈[k] (+) (r, t) = i Eˆ sF0 are the forward-propagating component and the backward-propagating component, (+) (+) (r, t), and Eˆ iB0 (r, t) are expressed respectively. The field operators Eˆ i0(+) (r, t), Eˆ iF0 similarly. Nonlinear dynamics is described by the relations (+) (+) Eˆ sF (0 − (r = 0)es , t) = Eˆ sF0 (0, t), (+) (+) Eˆ iF (0 − (r = 0)ei , t) = Eˆ iF0 (0, t), where e j , j = s, i, is the direction vector of the forward-propagating signal, idler beam, respectively: (+) (+) ˆˆ Eˆ (−) (0, t), (0 + (r = 0)es , t) = (1 + g 2 |V |2 Jˆˆ ) Eˆ sF0 (0, t) + e−iω0 t gV G Eˆ sF iF0 (+) (−) ˆ ˆ −iω t 2 2 0 ˆ Eˆ (0, t) + (1 + g |V | Jˆ ) Eˆ (+) (0, t), Eˆ (0 + (r = 0)e , t) = e gV G i 2ls (+) (+) Eˆ sB (0 + (r = 0)es , t) = − Eˆ sF 0 + (r = 0)es , t − c 2ls 2 2 ˆˆ ˆ (+) = −(1 + g |V | J ) E sF0 0, t − c 2ls (−) ˆ i(ϕs +ϕi ) −iω0 t ˆ ˆ −e e gV G E iF0 0, t − , c 2li (+) (+) (0 + (r = 0)ei , t) = − Eˆ iF 0 + (r = 0)ei , t − Eˆ iB c ˆˆ Eˆ (−) 0, t − 2li = −ei(ϕs +ϕi ) e−iω0 t gV G sF0 c 2li (+) − (1 + g 2 |V |2 Jˆˆ ) Eˆ iF0 , 0, t − c Eˆ (+) (0, t) = (1 + g 2 |V |2 Jˆˆ ) Eˆ (+) (0 + (r = 0)e , t) sBout ˆˆ Eˆ (−) (0 + (r = 0)e , t), +e gV e G i iB −iω0 t iϕ0 ˆˆ ˆ (−) (0, t) = e gV e G E (0 + (r = 0)e , t) −iω0 t (+) Eˆ iBout (+) + (1 + g |V | Jˆˆ ) Eˆ iB (0 + (r = 0)ei , t). 2 Quantum Description of Experiments with Stationary Fields Further we will calculate the quantum mean intensities that have been determined in the experiment. Introducing the slowly varying field operators Fˆ J(+) (r, t) = eiω J t Eˆ (+) J (r, t), iω J t ˆ (+) Fˆ J(+) E J F (r, t), F (r, t) = e iω J t ˆ (+) Fˆ J(+) E J B (r, t), J = s, i, B (r, t) = e ˆ (−) and Fˆ J(−) (r, t), Fˆ J(−) F (r, t), FJ B (r, t), we express the field operators at the signal and idler detectors placed at rs , ri , respectively, as ˆ (+) 0, t − r J exp i ω J r J , J = s, i. (r , t) = F Fˆ J(+) J B J Bout c c The quantum mean intensity or single photodetection rate in the detector Ds is , , , , Ps (rs , t) = K 0 , Eˆ (−) (rs , t) Eˆ (+) (rs , t) , 0 , , , , (−) (+) (rs , t) Fˆ sB (rs , t) , 0 , = K 0 , Fˆ (2.294) (2.295) where K is a constant related to the efficiency of the detector and the energy of a single photon. Considering the forward propagation, reflections, and the backward propagation, we obtain that rs ˆ (+) rs (−) (+) (−) FsB 0, t − Fˆ sB (rs , t) Fˆ sB (rs , t) = Fˆ sB 0, t − c c 2l 2l i s = 2g 2 |V |2 μs (0) + μs − cos(ϕs + ϕi − ϕ0 ) , c c where we have relied on the identity μs (τ ) = μs (−τ ). From this, 2ls 2li − cos(ϕs + ϕi − ϕ0 ) . Ps (rs , t) = 2K g 2 |V |2 μs (0) + μs c c The photodetection rate in the detector Di is expressed similarly (Casado et al. 1997b). In conclusion, we have mostly dealt with the fourth-order interference in parametric down-conversion experiments. The 1986, 1990, 1994 (adapted back to free space), and 1991 experiments were chosen according to a review article of other authors. Coincidence measurements in the various setups are essentially (or sufficiently well) described in terms of the cross correlation between the signal and the idler. We have “promoted” the schemes of the experiments, where only paths through nonlinear and linear optical elements and the free space (with possible reflections from perfect mirrors) to detectors are drawn, to a reason of a certain neglect of the 2 Origin of Macroscopic Approach beams’ divergence. We have replaced the usual assumption that the electric field is expanded in terms of an incomplete set of plane waves, which is relatively complete with respect to the expected direction of propagation, by the hypothesis that there exists an incomplete or relatively complete system of more complicated modal functions, which have still been specified only on the paths. We have performed conventional quantization by introducing annihilation operators in place of the classical complex amplitudes of the modes. We tried to choose sufficiently realistic values of the parameters for all the four experiments and to find visibilities of the intensity interference. 2.3.2 From Coupled Quantum Harmonic Oscillators Back to Interacting Fields One of the interference experiments we have described in Section 2.3.1, the experiment of Zou et al. (1991) which has been analysed in Wang et al. (1991a,b), has attracted much attention. The arrangement of two down-converters is pumped by mutually coherent beams and the two down-converters are connected by the idler beam. The spontaneous emission from the first nonlinear crystal in the idler serves as a stimulating idler input to the second nonlinear crystal that acts as an optical amplifier. The interference of signal beams from both the crystals can be observed. A beam splitter placed between the two nonlinear crystals in the idler beam can change the strength of their connection since it attenuates the emerging field. The parametric down-conversion in the second nonlinear crystal is stimulated by idler photons when the idler field is strong “per frequency unit”. In this situation, the polychromatic theory yields results similar to those obtained by the monochromatic treatment, i.e. about multiples of the latter. When the idler field is weak per frequency unit, the second nonlinear crystal is proven to “ignore” the idler photons. The monochromatic description, even though completed by optimal scaling of its results, is far from being persuasive here. The assumption of the strong idler field is implicit in work contributing to the monochromatic theory. ˇ acˇ ek and Peˇrina (1996) it has been shown that the distribution of photonIn Reh´ number sum in signal modes interpolates between a Bose–Einstein distribution and a convolution of two Bose–Einstein distributions. The latter distribution occurs when the idler beam is blocked. In general, the photon-number sum is distributed as if it corresponded to the number of signal degrees of freedom which varied between 1 and 2. A nonclassical distribution of photon-number sums restricted to even sums of photon numbers cannot occur, because the correlation between the photon numbers of the two signal beams is not complete. It has been found that the distribution of phase difference derived from the Q function narrows when the connection of both the down-converters via the idler mode closes up. The monochromatic treatment associates each travelling wave with a quantum harmonic oscillator. The simple formalism of several coupled harmonic oscillators is useful for an analysis of the travelling-wave setup of interference experiment due to Quantum Description of Experiments with Stationary Fields Zou et al. (1991) up to suppression of the induced emission. The more complicated approach used originally for the analysis seems to be unsuitable for treating the phenomenon of induced emission. We try to formalize here a comparison between the two approaches. When the induced emission occurs, it can be utilized. The phenomenon of induced emission makes the phase of an amplified field adopt the same phase as the incident locking field (Wang et al. 1991a, Wiseman and Mølmer 2000). The induced emission can also be used in parametric down-conversion to lock the phase of the idler and, from this, that of the signal (since the phase sum of the signal and idler is locked to the pump phase). If the field used to lock the idler of one down-converter (crystal NL2 in Fig. 2.11) is itself the idler output of another down-converter (crystal NL1 in Fig. 2.11), the two signal fields will also be locked in phase. Thus, they will have, in principle, perfect first-order coherence and so will interfere at a final beam splitter not included in Fig. 2.11. If there is no connection between the two down-converters, and hence no induced emission, the two signals will be incoherent and there will be no interference. Fig. 2.11 Scheme of two parametric processes with aligned idler beams with the spatial Heisenberg picture made explicit Zou et al. (1991) and Wang et al. (1991a) had a negligible probability of both crystals producing a down-converted photon pair and used a quantum-mechanical explanation based on indistinguishability of paths to explain the interference they observed in the experiment. To the contrary, the interference was lost when one could tell which crystal had emitted each signal photon. Using multimode analysis of the experiment, they derived that there could be no induced emission in their experiment. Nonetheless, they found that for perfect matching of idler modes, the signal fields from NL1 and NL2 show perfect interference. The multimode approach to the analysis used by Wang et al. (1991a) yields an explanation involving a sufficient number of realistic parameters. Even though in the foregoing section the formalism yielded results similar enough to those of Wang et al. (1991a), here we try to come near their analysis. However, there exists a simple quantal description that may claim that it conforms to results of the multimode analysis. Such simple models have been published. Concerning this, we may refer ˇ acˇ ek and Peˇrina (1996), Wiseman and Mølmer (2000), and Peˇrinov´a et al. to Reh´ (2000) and provide what is a continuation of Peˇrinov´a et al. (2000). (i) Formalism of several modes We turn to the quantum analysis of the Zou–Wang–Mandel experiment. The experimental arrangement consists of two parametric down-conversion crystals with 2 Origin of Macroscopic Approach aligned idler beams, which are partially connected due to the presence of a beam splitter in between, and is illustrated in Fig. 2.11. Restricting ourselves to the quasimonochromatic light beams (or quasimonochromatic components of these), we can describe the system by four modes, s1 (the signal mode for crystal NL1), i (the idler modes, which are identified), s2 (the signal mode for crystal NL2), and 0 (the escape mode for the beam splitter) (Peˇrinov´a et al. 2000). We consider the input annihilation operators aˆ s1 (0), aˆ 0 (1), aˆ s2 (2), aˆ i (0), the output annihilation operators aˆ s1 (1), aˆ 0 (2), aˆ s2 (3), aˆ i (3), and the intermediate annihilation operators aˆ i (1), aˆ i (2). Here s1 , s2 stand for the signal mode of crystal 1 and that of crystal 2, respectively, i for the idler mode, and 0 for the “escape” mode of the beam splitter. To obtain four-mode unitary transformations between the stages 0, 1, 2, 3, we consider also the appendage input annihilation operators aˆ 0 (0), aˆ s2 (0), aˆ s2 (1) and the appendage output annihilation operators aˆ s1 (2), aˆ s1 (3), aˆ 0 (3). Of course, in the description of the dynamics below, we will have to be consistent with the identities aˆ 0 (0) = aˆ 0 (1), aˆ s2 (0) = aˆ s2 (1) = aˆ s2 (2), aˆ s1 (1) = aˆ s1 (2) = aˆ s1 (3), aˆ 0 (2) = aˆ 0 (3). Let us consider, in the Hilbert space of these four modes, an arbitrary ˆ j), j = 0, 1, 2, 3, a jth-stage operator. We will write the equation giving operator M( ˆ j) from its value M(0) ˆ the transformation of M( before the interaction to its value ˆ ˆ M(1) after the action of the first down converter, to its value M(2) after the action of ˆ the beam splitter, and to its value M(3) after the action of the second down converter. The appropriate relations read ˆ j)Uˆ j+1 ( j), j = 0, 1, 2. ˆ j + 1) = Uˆ † ( j) M( M( j+1 Having prescribed equations of motion of operators, we have adopted a spatial modified Heisenberg picture of the dynamics. In the Heisenberg picture the input state does not change, while in the modified Heisenberg picture it changes like that of the free field. In our case of the “discrete” space (cf. j = 0, 1, 2, 3), the change of the state cannot be specified satisfactorily, but fortunately, we will not need it. In (2.298) Uˆ j+1 ( j) for j = 0, 2 describe the down conversion in the undepleted pump approximation. The crystals are assumed to be identical or distinct and pumps are assumed to be identical so that (2.299) Uˆ j+1 ( j) = exp iκ j +1 aˆ s j +1 ( j)aˆ i ( j) + H.c. , j = 0, 2, 2 where κ j +1 = 2 χ v c p j +1 l 2 j 2 +1 , χ is the quadratic susceptibility of the matter of which both the nonlinear crystals are made, vp j +1 are classical complex amplitudes of 2 pumping beams, c is the speed of light, l1 and l2 are the lengths of the first crystal and the second crystal, respectively. In between the down converters the idler from crystal NL1 is put through a beam splitter BS and becomes the idler for crystal NL2. This process is described by † † † Uˆ 2 (1) = exp i[ω¯ 0 aˆ 0 (1)aˆ 0 (1) + ω¯ i aˆ i (1)aˆ i (1) + (γ ∗ aˆ 0 (1)aˆ i (1) + H.c.)] , (2.300) Quantum Description of Experiments with Stationary Fields where ω¯ 0 ω¯ i (t − t ) t + t ∓i f (t, t ), = arg 2 2 γ = −ir f (t, t ), (2.301) (2.302) with , , , , Arccos , t+t , (t + t )∗ 2 f (t, t ) = / . , ,2 |t + t | 1 − , t+t , Here t and r are the transmission and reflection amplitude coefficients, respectively, for the idler mode and t and r are those for the “escape” mode. The modulus of the transmission amplitude coefficient |t| can vary between zero (where the second emission is spontaneous) and unity (where the second down conversion is stimulated in the highest degree). Hence, † aˆ i (2) = Uˆ 2 (1)aˆ i (1) Uˆ 2 (1) = taˆ i (1) + r aˆ 0 (1), † aˆ 0 (2) = Uˆ (1)aˆ 0 (1)Uˆ 2 (1) = raˆ i (1) + t aˆ 0 (1), 2 and the unitarity of the transformation matrix implies that ∗ |t|2 + |r|2 = 1, |r |2 + |t |2 = 1, tr + rt = 0. ˇ acˇ ek and It is advantageous to assume that t = t∗ , and from this r = −r ∗ (Reh´ Peˇrina 1996), and that Re t > 0. Then Arccos(Re t) , f (t, t ) = + 1 − (Re t)2 ω¯ 0 = ±(Im t) f (t, t ). ω¯ i (2.306) (2.307) On applying the relation (2.298) at the input ( j = 0) and at the stage 2 ( j = 2), we obtain that † aˆ s1 (1) = aˆ s1 (0) cosh(κ1 ) + iaˆ i (0) sinh(κ1 ), aˆ i (1) = iaˆ s†1 (0) sinh(κ1 ) + aˆ i (0) cosh(κ1 ), † aˆ s2 (3) = aˆ s2 (2) cosh(κ2 ) + iaˆ i (2) sinh(κ2 ), aˆ i (3) = iaˆ s2 (2) sinh(κ2 ) + aˆ i (2) cosh(κ2 ). 2 Origin of Macroscopic Approach Using Equations (2.304), we easily obtain the following relations: † aˆ s1 (1) = cosh(κ1 )aˆ s1 (0) + i sinh(κ1 )aˆ i (0), aˆ s2 (3) = t † sinh(κ1 ) sinh(κ2 )aˆ s1 (0) + it cosh(κ1 ) sinh(κ2 )aˆ i (0) † it∗ cosh(κ2 )aˆ s2 (2) + ir∗ sinh(κ2 )aˆ 0 (1). ∗ The statistical properties of the system in the Heisenberg picture can be obtained when we take into account that the initial, in fact, “permanent” statistical operator of the system is given as ρˆ ≡ ρ(0) ˆ and when we average ˆ j) = Tr{ρˆ M( ˆ j)}. M( Here, concretely, the statistical operator is a tensor product of separate vacuum statistical operators ρˆ = |0 j j 0|. j=s1 ,i,s2 ,0 ˆ ≡ M(0) ˆ We may introduce also the abbreviations M and we consider the Schr¨odinger picture, where the relation (2.298) is replaced by the evolution relations † ˆ j)Uˆ j+1 , j = 0, 1, 2, ρ( ˆ j + 1) = Uˆ j+1 ρ( with Uˆ j ≡ Uˆ j (0) given in (2.299) and (2.300). The equivalence of both the pictures can be proved and the statistical properties can be expressed in similar terms as in (2.312) ˆ j) = Tr{ρ( ˆ = M ( ˆ j) . M ( ˆ j) M} Since all the initial fields are in the vacuum states, it is easy to obtain the expectation values aˆ s†1 (1)aˆ s1 (1) = sinh2 (κ1 ), aˆ s†2 (3)aˆ s2 (3) aˆ s†1 (1)aˆ s2 (3) = sinh (κ2 )[1 + |t| sinh (κ1 )], 2 = t sinh(κ1 ) cosh(κ1 ) sinh(κ2 ). (2.317) (2.318) We will show in the Heisenberg picture that the input–output relation is connected to the SU(2,2) group. In fact, ⎞ ⎛ aˆ s1 (3) m s1 s1 ⎜ aˆ † (3) ⎟ ⎜ m is1 ⎟ ⎜ ⎜ i ⎝ aˆ s2 (3) ⎠ = ⎝ m s2 s1 † m 0s1 aˆ 0 (3) ⎛ m s1 i m ii m s2 i m 0i m s1 s2 m is2 m s2 s2 m 0s2 ⎞ ⎞⎛ aˆ s1 (0) m s1 0 ⎟ ⎜ † m i0 ⎟ ⎟ ⎜ aˆ i (0) ⎟ , m s2 0 ⎠ ⎝ aˆ s2 (0) ⎠ † m 00 aˆ 0 (0) Quantum Description of Experiments with Stationary Fields where m s1 s1 = cosh(κ1 ), m s1 i = i sinh(κ1 ), m s1 s2 = m s1 0 = 0; m is1 = −it∗ sinh(κ1 ) cosh(κ2 ), m ii = t∗ cosh(κ1 ) cosh(κ2 ), m is2 = −i sinh(κ2 ), m i0 = r∗ cosh(κ2 ); m s2 s1 = t∗ sinh(κ1 ) sinh(κ2 ), m s2 i = it∗ cosh(κ1 ) sinh(κ2 ), m s2 s2 = cosh(κ2 ), m s2 0 = ir∗ sinh(κ2 ); m 0s1 = ir sinh(κ1 ), m 0i = −r cosh(κ1 ), m 0s2 = 0, m 00 = t. From the form of the relation (2.319) it is evident that the operator Nˆ ( j) = nˆ s1 ( j) + nˆ s2 ( j) − nˆ i ( j) − nˆ 0 ( j), j = 0, 3, is independent of j. This conservation law suggests the SU(2,2) group. The coefficients of the transformation (2.319) verify the pseudoorthogonality relations m js1 m ∗ks1 + m js2 m ∗ks2 − m ji m ∗ki − m j0 m ∗k0 = g jk , j, k = s1 , i, s2 , 0, where g jk = g j j δ jk , gs1 s1 = gs2 s2 = 1, gii = g00 = −1. We observe that the antinormally ordered moments have the expression † aˆ j (3)aˆ j (3) = |m js1 |2 + |m js2 |2 , j = s1 , s2 , and the normally ordered moments † aˆ j (3)aˆ j (3) = |m js1 |2 + |m js2 |2 , j = i, 0. More generally, aˆ s1 (3)aˆ s†2 (3) = m s1 s1 m ∗s2 s1 + m s1 s2 m ∗s2 s2 , aˆ s2 (3)aˆ s†1 (3) = aˆ s1 (3)aˆ s†2 (3) ∗ , aˆ i (3)aˆ 0 (3) = m is1 m ∗0s1 + m is2 m ∗0s2 , † aˆ 0 (3)aˆ i (3) = aˆ i (3)aˆ 0 (3) ∗ . Further nonvanishing moments are aˆ j (3)aˆ k (3) = m js1 m ∗ks1 + m js2 m ∗ks2 , j = s1 , s2 , k = i, 0, and † aˆ j (3)aˆ k (3) = aˆ j (3)aˆ k (3) ∗ . 2 Origin of Macroscopic Approach The rest second-order moments vanish: † aˆ j (3)aˆ k (3) = aˆ j (3)aˆ k (3) = 0, j = s1 , s2 , k = i, 0, aˆ j (3)aˆ k (3) = † † aˆ j (3)aˆ k (3) = 0, j = s1 , s2 , k = s1 , s2 , and j = i, 0, k = i, 0. Quantum statistics of radiation in the process under study is that of a four-mode Gaussian state, starting with the quantum characteristic function: CS (βs1 , βs2 , βi , β0 , 3) ˆ s2 (βs2 , 0) D ˆ i (βi , 0) D ˆ 0 (β0 , 0)} ˆ s1 (βs1 , 0) D = Tr{ρ(3) ˆ D ˆ s2 (βs2 , 3) D ˆ i (βi , 3) D ˆ 0 (β0 , 3)}, ˆ s1 (βs1 , 3) D = Tr{ρˆ D where the displacement operators are given by ˆ j (β j , k) = exp[β j aˆ † (k) − β ∗j aˆ j (k)], j = s1 , s2 , i, 0, k = 0, 3. D j ˆ j (β j , 0). On substituting into the relation (2.332) ˆ j (β j ) ≡ D By the remark above, D according to (2.319), we obtain that CS (βs1 , βs2 , βi , β0 , 3) ˆ s2 (βs2 (3)) D ˆ i (βi (3)) D ˆ 0 (β0 (3))}, ˆ s1 (βs1 (3)) D = Tr{ρ(0) ˆ D − βs∗1 (3) = −βs∗1 m s1 s1 − βs∗2 m s2 s1 + βi m is1 + β0 m 0s1 , −βs∗2 (3) = −βs∗1 m s1 s2 − βs∗2 m s2 s2 + βi m is2 + β0 m 0s2 , βi (3) = −βs∗1 m s1 i − βs∗2 m s2 i + βi m ii + β0 m 0i , β0 (3) = −βs∗1 m s1 0 − βs∗2 m s2 0 + βi m i0 + β0 m 00 . From the known quantum characteristic function for the initial vacuum state ⎧ ⎫ ⎨ 1 ⎬ |β j |2 , CS (βs1 , βs2 , βi , β0 , 0) = exp − (2.336) ⎩ 2 ⎭ j=s1 ,s2 ,i,0 we derive that CS (βs1 , βs2 , βi , β0 , 3) = exp ⎡ + ⎣−βs1 βs∗2 Bs∗1 s2 − βi β0∗ Bi0∗ + j=s1 ,s2 ,i,0 j=s1 ,s2 k=i,0 β j βk C ∗jk |β j |2 B jS ⎤ ⎦ . + c.c. Quantum Description of Experiments with Stationary Fields Here the coefficients B jS , B jk , C jk can be expressed in the form 1 † B jS = aˆ j (3)aˆ j (3) − , j = s1 , s2 , 2 1 † B jS = aˆ j (3)aˆ j (3) + , j = i, 0, 2 Bs1 s2 = aˆ s1 (3)aˆ s†2 (3) , Bi0 = aˆ i (3)aˆ 0 (3) , C jk = aˆ j (3)aˆ k (3) , j = s1 , s2 , k = i, 0. Taking into account that aˆ j (3) = 0, j = s1 , s2 , i, 0, we see that we are consistent with the more general notation † B jA = Δaˆ j (3)Δaˆ j (3) , j = s1 , s2 , † B jN = Δaˆ j (3)Δaˆ j (3) , j = i, 0, ˆ a , ˆ and with the coefficients Bs1 s2 , Bi0 , C jk , j = s1 , s2 , k = i, 0, where Δaˆ = a− after similar replacement. We confine ourselves to the study of the signal beams in what follows, which are described by the reduced statistical operator ˆ ρˆ signal (3) = Tri Tr0 {ρ(3)}, where Tri and Tr0 are partial traces over the idler and escape modes, respectively. Quantum characteristic function in the state described by the statistical operator (2.340) can easily be obtained: CS (βs1 , βs2 ) ≡ CS (βs1 , βs2 , 3) = CS (βs1 , βs2 , 0, 0, 3). In Peˇrinov´a et al. (2003), the same function has been introduced as ˆ s1 (βs1 , 1) D ˆ s2 (βs2 , 3) ˆ D CS (βs1 , βs2 ; 1, 3) = Tr ρ(0) ⎡ ⎤ = exp ⎣− |β j |2 B jS + −βs1 βs∗2 Bs∗1 s2 + c.c. ⎦ . j=s1 ,s2 In the following we simplify the notation s1 , s2 for the signal modes to 1, 2, respectively. From the characteristic function Cs (β1 , β2 ) = exp s 2 (|β1 |2 + |β2 |2 ) CS (β1 , β2 ), where s = 1, 0, −1 in the subscript and also s = N , S, A denote the normal, symmetrical, and antinormal orderings of field operators, we can establish the Φs 2 Origin of Macroscopic Approach quasidistribution related to the respective ordering of field operators Φs (α1 , α2 ) = 1 π4 Cs (β1 , β2 ) exp α1 β1∗ − α1∗ β1 + α2 β2∗ − α2∗ β2 d2 β1 d2 β2 . After integrating, we obtain Φs (α1 , α2 ) = 1 π2K 12s 1 2 2 ∗ ∗ × exp [−B2s |α1 | − B1s |α2 | + (B12 α1 α2 + c.c.)] , K 12s where 1 B1A = cosh2 (κ1 ), B1S = B1A − , B1N = B1A − 1, 2 B2A = cosh2 (κ2 ) + |t| sinh2 (κ2 ) sinh2 (κ1 ), 1 B2S = B2A − , B2N = B2A − 1, 2 ∗ B12 = t∗ sinh(κ1 ) cosh(κ1 ) sinh(κ2 ), K 12s = B1s B2s − |B12 |2 . Especially, for s = −1 it holds that ΦA (α1 , α2 ) = 1 α1 , α2 |ρˆ signal (3)|α1 , α2 , π2 where |α1 , α2 is the two-mode coherent state, which yields the expansion ΦA (α1 , α2 ) = × ∞ n 2 =max(0,−q) ∞ 1 2 2 exp(−|α | − |α | ) 1 2 π2 q=−∞ m ∞ 1 =max(0,−q) ∗(m +q) n +q α1 1 α1m 1 α2∗n 2 α2 2 ρ(m 1 + q, m 1 , n 2 , n 2 + q) √ (m 1 + q)!m 1 !n 2 !(n 2 + q)! for any ΦA quasidistribution that does not depend on α1 α2 , α1∗ α2∗ . Here ρ(n 1 , m 1 , n 2 , m 2 ) = n 1 , n 2 |ρˆ signal (3)|m 1 , m 2 Quantum Description of Experiments with Stationary Fields are the usual matrix elements. Equating the expansion coefficients for (2.345) with s = −1 and those of (2.349), we arrive at the expression ρ(m 1 + q, m 1 , n 2 , n 2 + q) = √ (m 1 + q)!m 1 !n 2 !(n 2 + q)! (m 1 − p)!(n 2 − p)! p!( p + q)! p=max(0,−q) min(m 1 ,n 2 ) (K 12A − B2A )m 1 − p (K 12A − B1A )n 2 − p B12 B12 ρ(m 1 + q1 , m 1 , n 2 , n 2 − q2 ) = 0 for q1 = −q2 . m +n 2 +q+1 1 K 12A while obviously (ii) Photon-number statistics Numbers of photons in signal modes complete the picture of the quantum correlation between these beams. The joint photon-number distribution p(n 1 , n 2 ) can ˇ acˇ ek and Peˇrina 1996). A substitution into be expressed in the concise form (Reh´ (2.351) leads to slightly more complicated expression: p(n 1 , n 2 ) = ρ(n 1 , n 1 , n 2 , n 2 ). This distribution can be seen in Fig. 2.12 for |t| = 1, B1N = B2N = 3, |B12 | = 3. It differs from the product of pertinent marginal distributions by larger “diagonal” probabilities. To the contrary for |t| small, the joint photon-number distribution is approximately the product of its marginal photon-number distributions. Fig. 2.12 Joint photon-number distribution for |t| = 1; B1N = B2N = 3, n j ∈ [0, 10], j = 1, 2 As for the experimental arrangement under study, it depends on l1 , t, l2 , whereas the numerical demonstration is restricted to the case when the length of the first crystal is kept fixed. Consequently, the mean photon number B1N in the first signal mode is constant and this convenient behaviour is, for the sake of illustrations, 2 Origin of Macroscopic Approach extended also to the second one as a relationship between |t| and κ2 , sinh2 (κ2 ) = B2N . 1 + |t|2 B1N (2) The Glauber degree of coherence (Peˇrina 1991) γ12 is the complex-valued quantum correlation measure related to the normal ordering: (2) γ12 =√ ∗ B12 . B1N B2N In the numerical illustration of the quantum correlation measures, we assume κ1 = κ2 = κ and find κ1 by the inversion of the formula B1N = sinh2 (κ1 ) = aˆ s†1 (1)aˆ s1 (1) = n 1 (1). (2) | = RN = 1, see Fig. 2.13. This maximum The limit case |t| = 1 is interesting, |γ12 correlation does not correspond to a weaker correlation between the signal photon numbers. In the multimode analysis (Wang et al. 1991a) of the experiment (Zou et al. Fig. 2.13 Quantum correlation measure RN versus the modulus of the transmission amplitude coefficient |t| ∈ [0, 1]; it is assumed that n 1 (1) = 10−2 , 1, 10, 100, 104 (the curves a, b, c, d, e, 1991), the visibility of the interference between the signal fields has been expressed, the interference manifests itself as oscillations in the counting rate Is (see (2.412) below) when propagation times of the idler beam from NL1 to NL2, of the first signal beam from NL1 to Ds , and of the second signal beam from NL2 to Ds are incremented by δτ0 , δτ1 , δτ2 , respectively. Deriving a simplified visibility Vsimple for the formalism of several modes provides Vsimple √ 2RN B1N B2N = . B1N + B2N √ As B1N B2N ≤ 12 (B1N +B2N ), the visibility cannot exceed the correlation measure RN and the equality is attained for B1N = B2N . The maximum obtainable visibility Quantum Description of Experiments with Stationary Fields between two fields in an experiment is given by the correlation measure RN , cf. Wiseman and Mølmer (2000). On substituting (2.338) with (2.316), (2.317), and (2.318) into (2.355), we find that RN = |t| cosh(κ1 ) 1 + |t|2 sinh2 (κ1 ) Noting that the idler beam, before it enters the beam splitter, has the same statistics as the output signal 1, we can rewrite (2.358) in terms of the mean photon number n¯ 1 (1) = sinh2 (κ1 ) as RN = |t| 1 + n 1 (1) . 1 + |t|2 n 1 (1) Wiseman and Mølmer (2000) considered the relevant limits in this form. The singlephoton regime which is the regime of experiment and theory in Zou et al. (1991) and Wang et al. (1991a,b) occurs for n 1 (1) 1. Up to the first order in the rescaled lengths κ1 , κ2 of the crystals, we simply obtain RN = |t|. The probability of a down conversion at crystal NL1 over interaction time is less than or equal to n 1 (1), or it is small. The probability to have crystals over inter down conversions at both † † action time is less than or equal to aˆ s1 (1)aˆ s1 (1)aˆ s2 (3)aˆ s2 (3) = B1N B2N + |B12 | 2 , or it is negligible. The single-photon regime applies in the multimode analysis of Wang et al. (1991a), because each of the narrow-bandwidth signal modes (ks1 , ωs1 ), (ks2 , ωs2 ), with directions characterized by ks j and with the frequencies ωs j , j = 1, 2, that form broad-band signal fields, receives only a small part of the pumping photons over interaction time. The same applies to idler modes and idler fields. The signal fields s1 and s2 from the two down converters are allowed to come together and interfere at the detector Ds . In the spatial interaction picture, the state of the field produced by the crystals is given by |ψ(3) ≡ Uˆ 3 (0)Uˆ 2 (0)Uˆ 1 (0)|0 s1 ,i,s2 ,0 , where Uˆ j+1 (0) for j = 1, 2, 3 are given by relations (2.299) and (2.300), where the annihilation operators aˆ s j +1 ( j) → aˆ s j +1 (0), aˆ i ( j) → aˆ i (0), aˆ 0 ( j) → aˆ 0 (0). 2 We will drop the argument (0) at the annihilation operators in what follows. Here |0 s1 ,i,s2 ,0 ≡ |0 s1 |0 i |0 s2 |0 0 and, in general, |n s1 , n i , n s2 , n 0 ≡ |n s1 s1 |n i i |n s2 s2 |n 0 0 . In the Schr¨odinger picture, the operators do not change and in the interaction picture, which is the modified Schr¨odinger picture, they change like the Heisenberg picture free-field operators. An analogue of relation (2.298) for a discrete space is not used 2 Origin of Macroscopic Approach in quantum optics (the free-field propagation is absorbed in the interaction). Fortunately, we will use just the interaction-picture annihilation operators. Expanding the operators Uˆ j+1 (0) according to κ j +1 , j = 0, 2, we obtain that 2 |ψ(3) |0 s1 ,i,s2 ,0 + iκ2 |0, 1, 1, 0 + iκ1 (t|1, 1, 0, 0 + r|1, 0, 0, 1 ), when κ j +1 are small. For |t| = 1 we have a single-photon state in the idler mode 2 and in the collection of the signal modes. In general, one can infer a conversion at crystal NL1 after a photocount in the escape mode. We introduce the probability of the detection p1,0,0,1 (3) = |κ1 |2 |r|2 . Let us assume that one infers a conversion at crystal NL2 after no photocounts in the escape mode. We introduce the probability of a correct inference of the conversion at crystal NL2 p0,1,1,0 (3) = |κ2 |2 and that of such a wrong inference p1,1,0,0 (3) = |κ1 |2 |t|2 . On a photocount in the escape mode it is certain that the conversion has happened at NL1. On no photocounts in this mode, the posterior probabilities are |κ2 |2 , |κ1 |2 |t|2 + |κ2 |2 |κ1 |2 |t|2 Prob(n s1 = 1 ∩ n s2 = 0|n i = 1 ∩ n 0 = 0) = . 2 |κ1 | |t|2 + |κ2 |2 Prob(n s1 = 0 ∩ n s2 = 1|n i = 1 ∩ n 0 = 0) = (2.366) (2.367) Here the underlining means a random variable. The counting rate registered by Ds exhibits perfect interference when the idler fields are perfectly aligned. This may be regarded as reflecting the intrinsic impossibility of knowing whether the detected photon comes from NL1 or NL2 (Wang et al. 1991a). The multiphoton conditional states can be found in Luis and Peˇrina (1996a). Let us consider the annihilation operators aˆ s j +1 , j = 0, 2. The action of the two 2 operators on the state |ψ(3) is asymptotically for small κ j +1 expressed as 2 aˆ s1 |ψ(3) iκ1 (t|0, 1, 0, 0 + r|0, 0, 0, 1 ), aˆ s2 |ψ(3) iκ2 |0, 1, 0, 0 . (2.368) (2.369) aˆ s†1 aˆ s2 κ1 κ2 t∗ , aˆ s†1 aˆ s1 κ12 , aˆ s†2 aˆ s2 κ22 . Quantum Description of Experiments with Stationary Fields It can be verified that these relations hold up to the second order in κ1 , κ2 . We can (2) = t∗ in the limit of small κ j +1 , j = 0, 2. Obviously, the equation find that γ12 2 holds up to the zeroth order, but it can be verified that it is valid up to the first order in κ1 , κ2 . The opposite regime is that where n 1 (1) 1. Here there are many photons on average in all of the down-converted beams. That is, the phase of the stage-1 or stage-2 idler mode should lock the phase of the output signal-2 mode for any nonzero transmission amplitude coefficient t. (iii) Multimode formalism In Wang et al. (1991a) the pump beams at each crystal are represented by complex analytic signals V1 (t) and V2 (t) such that |V j (t)|2 is in units of photons per second ( j = 1, 2). The multimode formalism enables one to respect that the two crystals are centred at 01 and 02 . The multimode formalism views the electric fields as temporal-interaction-picture operators δω (+) aˆ m (ωm ) exp[i(km · r − ωm t)], m = s1 , i, s2 , 0, (2.372) Eˆ m (r, t) = 2π ω m where δω is the mode spacing and aˆ m (ωm ) is the photon annihilation operator for narrow-bandwidth signal (m = s j , j = 1, 2), idler (m = i), and “escape” (m = 0) modes (km , ωm ) at the frequency ωm . The Hilbert space for the multimode analysis is a tensor product of those Hilbert spaces of separate modes, whose vacuum states may be designated as |0 m (ωm ), m = s1 , i, s2 , 0. We con(t), j = 1, 2, at the appropriate detector, sider photon-flux amplitude operators Eˆ s(+) j ˆE s(+) (t)≡ Eˆ s j (rs , t) = Eˆ s(+) (0 j , t − τ j ), where τ j , j = 1, 2, is the propagation time j j of s j from NL j to Ds . In order to compare the sophisticated multimode formalism with the simple formalism of several modes, we must present another appropriate description of the dynamics of the same down-conversion experiment. We adopt the temporal interaction picture and combine it with the spatial interaction picture. In this case, the state produced by the crystal is given by |ψ(3, t) ≡ Uˆ3 (0, t)Uˆ2 (0, t)Uˆ 1 (0, t)|0 s1 ,i,s2 ,0 . Here |0 s1 ,i,s2 ,0 is the vacuum state of all the narrow-bandwidth signal, idler, and “escape” modes (ks j , ωs j ), j = 1, 2, (ki , ωi ), (k0 , ω0 ), respectively. Uˆ j+1 (0, t), j = 0, 2, are unitary operators: Uˆ j+1 (0, t) = lim Uˆ j+1 (0, t, t0 ), t0 →−∞ where Uˆ j+1 (0, t, t0 ) are unitary operators that obey the initial condition , ˆ Uˆ j+1 (0, t, t0 ),t=t0 = 1, 2 Origin of Macroscopic Approach Uˆ j+1 (0, t, t0 ) for j = 0, 2 describe the parametric down conversion and are expressed, indirectly, in terms of Eˆ s(+) (r, t), Eˆ i(+) (r, t); Uˆ 2 (0, t) describes the beam j 2 +1 splitter and is expressed as † † ω0 aˆ 0 (ω )aˆ 0 (ω ) + ωi aˆ i (ω )aˆ i (ω ) Uˆ 2 (0, t) = exp i + γ ∗ aˆ 0 (ω )aˆ i (ω ) + H.c. In this point we differ from the paper by Wang et al. (1991a), who used the initial condition at t = t − t1 and they did not write down the decomposition into stages. Let T denote the time ordering. We will introduce the unitary operator t ˆ U3 (0, t, t − t1 ) = T exp φ(ω0 − ω , ω ) ν2 V2 (0)δω t−t1 ×e−i(ks2 +k † )·02 −i(ω0 −ω −ω )t † aˆ s2 (ω )aˆ i (ω ) e − H.c. dt , where ν j , j = 1, 2, is a constant such that |ν j |2 gives the fraction of incident pump photons that is spontaneously down converted in the steady state, ω0 is the frequency of the monochromatic pump beam, ks2 (k ) is a wave vector that is determined by the frequency ωs 2 (ω ) and the direction of the second signal beam (the idler beam). To the first order in the processes the unitary operator may be expressed as ˆ ˆ φ(ω0 − ω , ω ) U3 (0, t, t − t1 ) = 1 + ν2 V2 (0)δω × ω ω 1 −i(ks2 +k )·02 sin 2 (ω0 − ω − ω )t1 e 1 (ω0 − ω − ω ) 2 t1 † † aˆ s2 (ω )aˆ i (ω ) − H.c. . × exp −i(ω0 − ω − ω ) t − 2 (2.378) From this we obtain the vector |ψ(3, t, t − t1 ) = Uˆ 3 (0, t, t − t1 )|ψ(2, t, t − t1 ) φ(ω0 − ω , ω ) = |ψ(2, t, t − t1 ) + ν2 V2 (0)δω ×e −i(ks2 +k )·02 (ω0 − ω − ω )t1 2 1 (ω0 − ω − ω ) 2 t1 × exp −i(ω0 − ω − ω ) t − |ω s2 |ω i |0 s1 ,0 , 2 (2.379) Quantum Description of Experiments with Stationary Fields where |ω s2 and |ω i are frequency eigenstates of the second signal and the idler beam, respectively, |0 s1 ,0 is the vacuum state of the first signal and escape modes, and |ψ(2, t, t − t1 ) = Uˆ 2 (0, t)Uˆ 1 (0, t, t − t1 )|0 s1 ,i,s2 ,0 † = Uˆ 2 (0, t)Uˆ 1 (0, t, t − t1 )Uˆ 2 (0, t)|0 s1 ,i,s2 ,0 . We transform the vector |ψ(3, t, t − t1 ) to a vector ˆE s2 (t)|ψ(3, t, t − t1 ) = ν2 V2 (0) δω δω φ(ω0 − ω , ω ) 2π ω ω 1 sin (ω − ω − ω )t 0 1 2 × e−ik ·02 1 − ω ) (ω − ω 0 2 t1 × exp −i(ω0 − ω − ω ) τ2 − exp[−i(ω0 − ω )(t − τ2 )] 2 (2.381) × |ω i |0 s1 ,s2 ,0 δω ν2 V2 (0) φ(ω0 − ω , ω ) 2π ω ∞ sin 12 (ω0 − ω − ω )t1 −ik ·02 ×e 1 (ω0 − ω − ω ) −∞ 2 t1 × exp −i(ω0 − ω − ω ) τ2 − dω exp[−i(ω0 − ω )(t − τ2 )] 2 (2.382) × |ω i |0 s1 ,s2 ,0 = ν2 V2 (t − τ2 )|1(02 , t − τ2 ) i |0 s1 ,s2 ,0 , (2.383) where the single-photon state of the idler beam |1 (r, t) i = 2π δω ω −i(k ·r−ω t) φ(ω0 − ω , ω ) |ω i , φ(ω˜ , ω ) is connected with spectral functions φ j (ω , ω ; ω) characterizing the signal and idler fields at any crystal NL j, ˜ ω) = φ2 (ω, ˜ ω), φ(ω, ˜ ω) = φ1 (ω, φ j (ω˜ , ω ) = φ j (ω˜ , ω ; ω0 ), j = 1, 2. The frequency eigenstates are single-photon states: |ω m = aˆ m† (ω )|0 m , 2 Origin of Macroscopic Approach where |0 m = |0 m (ω ), m = i, 0. Unfortunately, the nonvanishing result is obtained only for 0 < τ2 < t1 . To resolve this, Wang et al. (1991a) let t1 → ∞. Should t1 mean the interaction time, it is better to change the integration limits, namely not to consider the integration interval [t − t1 , t], but, for instance, [t − (K 2 + 1)t1 , t − K 2 t1 ], K 2 t1 < τ2 < (K 2 + 1)t1 Eˆ s1 (t)|ψ(3, t, t − t1 ) = Eˆ s1 (t)|ψ(2, t, t − t1 ) . to hold. We further calculate We obtain the appropriate component of the vector |ψ(2, t, t − t1 ) by the action of the unitary operator t † ˆ ˆ ˆ ν1 V1 (0)δω U2 (0, t)U1 (0, t, t − t1 )U2 (0, t) = T exp × φ(ω0 − ω , ω )e −i(ks1 +k )·01 −i(ω0 −ω −ω )t aˆ s†1 (ω )[t∗ aˆ i (ω ) + r aˆ 0 (ω )] − H.c. dt . (2.392) † The calculation proceeds similarly as in the case of NL2, but we replace aˆ i (ω ) by † † taˆ i (ω ) + raˆ 0 (ω ), |ω i by t|ω i + r|ω 0 and we change all the other subscripts that underlie to a change, so that Eˆ s1 (t)|ψ(3, t, t − t1 ) = ν1 V1 (01 , t − τ1 ) t|1(01 , t − τ1 ) i |0 s1 ,s2 ,0 (2.393) + r|1(01 , t − τ1 ) 0 |0 s1 ,i,s2 , where |0 s1 ,s2 ,0 ≡ |ψvac s1 ,s2 ,0 , |0 s1 ,i,s2 ≡ | ψvac s1 ,i,s2 stand for vacuum states, |1(r, t) i is defined in (2.384), and |1(r, t) 0 stands for single-photon state of the “escape” beam |1(r, t) 0 ≡ √ 2π δω φ(ω0 − ω , ω ) exp[−i(k · r − ω t)]|ω 0 . ω Quantum Description of Experiments with Stationary Fields Here a nonvanishing result is obtained only for 0 < τ1 < t1 . Considering a change of the integration limits as above, we see that, to the first order of the processes, no difficulties arise if we change the limits independently of NL2. We do not consider the integration interval [t − t1 , t], but, for instance, [t − (K 1 + 1)t1 , t − K 1 t1 ], K 1 t1 < τ1 < (K 1 + 1)t1 to hold. In other words, the relations (2.383) and (2.393) can be generalized to provide ˆ cτ1 [ Eˆ s(+) (t)|ψ(3, t) ] = Eˆ s1 (t)|ψ(3, t − K 1 t1 , t − (K 1 + 1)t1 ) , A 1 ˆ c(τ0 +τ2 ) [ Eˆ s(+) (t) |ψ(3, t) ] = Eˆ s2 (t)|ψ(3, t − K 2 t1 , t − (K 2 + 1)t1 ) , A 2 (2.398) (2.399) ˆ f denotes an where τ0 is the propagation time of the idler from NL1 to NL2. A attenuation of the field down to the vacuum state outside an interaction length centred at the distance f from NL1 in the direction of propagation of the beam. ˆ f compensates for the difference we have caused with the initial The operator A condition at t = t0 → −∞ instead of the Wang–Zou–Mandel shortening of the ˆ f is not unitary and is even “slightly” nonlinear. integration interval. The operator A Its consideration depends on a neglect of the coherence length in comparison with the interaction length. Using such an operator we can describe, where (within which interaction length) the single-photon states are localized at the time t, ˆ cτ [|1(01 , t − τ ) i |0 s1 ,s2 ,0 ] = |1(01 , t − τ ) i |0 s1 ,s2 ,0 , A ˆ cτ [|1(01 , t − τ ) 0 |0 s1 ,i,s2 ] = |1(01 , t − τ ) 0 |0 s1 ,i,s2 . A (2.400) (2.401) The angular brackets will mean averages and, when operators are involved, the brackets are supposed to average in the state |ψ(3, t) , ˆ = ψ(3, t)| M|ψ(3, ˆ M t) , ˆ being an operator. When the operator is situated inside an interaction length with M centred in the propagation distance f from NL1, it also holds that ˆ A ˆ f [|ψ(3, t) ]. ˆ =A ˆ f [ ψ(3, t)|] M M Hence, one may omit the unusual notation when no ambiguities arise. 2 Origin of Macroscopic Approach Letting ωs and ωi denote the centre frequency of the signal beam and the idler beam, respectively, we have ωs + ωi = ω0 . Introducing the normalized correlation function μ(τ ) of the down-converted light i 1(r1 , t − τ1 )|1(r2 , t − τ2 ) i = μ(τ0 + τ2 − τ1 ) exp[−iωi (τ0 + τ2 − τ1 )], (2.404) where e−iωi τ μ(τ ) = 2π |φ(ω, ˜ ω)|2 e−iωτ dω, we obtain that the relations (2.370) and (2.371) ought to read Eˆ s(−) (t) Eˆ s(+) (t) ν1∗ ν2 t∗ V1∗ (t − τ1 )V2 (t − τ2 ) 1 2 ×μ(τ0 + τ2 − τ1 ) exp[−iωi (τ0 + τ2 − τ1 )], Eˆ s(−) (t) Eˆ s(+) (t) |ν1 |2 |V1 (t − τ1 )|2 , (t) Eˆ s(+) (t) |ν2 |2 |V2 (t − τ2 )|2 , Eˆ s(−) 2 2 † ˆ s(+) (t) . Hence the modulus of the normalized E (t) ≡ where we introduce Eˆ s(−) j j correlation function is | Eˆ s(−) (t) Eˆ s(+) (t) | 1 2 / ˆ (+) ˆ (−) ˆ (+) Eˆ s(−) 1 (t) E s1 (t) E s2 (t) E s2 (t) V1∗ (t − τ1 )V2 (t − τ2 ) |V1 (t − τ1 )|2 |V2 (t − τ2 )|2 |μ(τ0 + τ2 − τ1 )||t|. The maximum value is equal to |t|, which is predicted also by equation (2.359). A linear dependence of visibility on |t|, as seen convincingly in the original work (Zou et al. 1991, Wang et al. 1991a), is the true signature of induced coherence without induced emission. (t)|ψ(t) , j = 1, 2, they are not explicitly presented in Wang As concerns Eˆ s(+) j et al. (1991a), but they may be derived. It emerges that the parameters of the beam (t)|ψ(t) . On the contrary, Eˆ s(+) (t)|ψ(t) in splitter do not enter the relation for Eˆ s(+) 1 1 Wang et al. (1991a) comprises the parameters t∗ , r ∗ . Nevertheless, the statistical properties in Peˇrinov´a et al. (2003) coincide with those in Wang et al. (1991a), because the differences under discussion resemble distinct, yet equivalent pictures. Especially, considering the photon-flux amplitude operators Eˆ s(+) (t) at the detector Ds with a quantum efficiency ηs (Wang et al. 1991a), 1 Eˆ s(+) (t) = √ i Eˆ s(+) (t) + Eˆ s(+) (t) , substituting into the formula for the average rate of photon counting Is = ηs ψ(t)| Eˆ s(−) (t) Eˆ s(+) (t)|ψ(t) , Quantum Description of Experiments with Stationary Fields where Eˆ s(−) (t) = [ Eˆ s(+) (t)]† , and taking into account the orthogonality of single-photon states |1(r1 , t) i and |1(r1 , t) 0 uniquely results in the relation 1 ηs {|ν1 |2 |V1 (t − τ1 )|2 + |ν2 |2 |V2 (t − τ2 )|2 2 +[−iν1∗ ν2 V1∗ (t − τ1 )V2 (t − τ2 ) t∗ μ(τ0 + τ2 − τ1 )e−iωi (τ0 +τ2 −τ1 ) + c.c.]}. (2.412) Is = Peˇrinov´a et al. (2000) have studied quantum statistics of radiation in signal modes of the two-mode parametric processes with aligned idler beams. They have found that the signal beams are in the correlated chaotic state. The strength of correlation depends on the degree to which the paths of the idler beams are superposed and aligned. They have compared different measures of correlation, especially the entropic or information-based measure with the modulus of the usual degree of coherence in the dependence on absolute value of the transmission amplitude coefficient of the beam splitter inserted as an attenuator of the perfect alignment. Some other measures have been introduced taking into account the symmetrical and antinormal orderings of field operators. In contrast to the normal ordering, these orderings do not indicate the maximum correlation for the perfect alignment. The situation with the photon numbers in the signal modes, whose correlation is not maximum for the perfect alignment, serves as motivation for such a more general consideration. The theory of canonical correlation has been applied to the quasidistribution of complex amplitudes related to the symmetrical ordering of field operators. They have taken into account that the quantum correlation has a significant effect on the photon-number sum, photon-number difference, and quantum phasedifference statistics. Essentially, it concerned the variances of number sum and number difference and the dispersions of quantum phase differences according to various definitions. A comparison of distributions of quantum phase difference derived from the phase-space distributions has shown that the phase-difference uncertainty increases from the normal ordering, through the symmetrical and antinormal orderings, whereas the system of canonical phase related to the antinormal ordering of exponential phase operators ranges between the symmetrical and antinormal orderings, but by no means exactly. The paper (Peˇrinov´a et al. 2000) reveals that the correlated chaotic state is the mixed partial phase-difference state. In addition to the marginal distributions, the joint number-sum and phase-difference distribution has been considered, but for the canonical quantum phase difference and the Luis– S´anchez-Soto phase difference only. The quasidistribution of number difference and phase difference has been defined with the properties that the marginal distribution of the phase difference is the canonical one. They have addressed the number sum and the quantum phase difference as simultaneously measurable observables and the number difference and the quantum phase difference as canonically conjugate observables. 2 Origin of Macroscopic Approach Peˇrinov´a et al. (2003) have compared the simple formalism of several coupled harmonic oscillators with multimode formalism in the analysis of an interference experiment. On focusing on several modes they have been able to study phase properties of “correlated chaotic beams”. Then they have assumed the single-photon regime as also previous authors did. They have indicated that, assuming several coupled harmonic oscillators, the previous authors did not try to include time delays between optical elements into the analysis. Peˇrinov´a et al. (2003) have also formally expressed, for instance, that one works with a single-photon state of some signal modes in the several-mode formalism whenever one describes the experiment with a superposition of single-photon states of modes that form the signal beam. The utility of a simple single-mode theory has been clarified in the case where single spatial mode filters and narrow-band optical filters are used to filter the output state of parametric down-conversion Li et al. 2005). Peˇrina and Kˇrepelka (2005) have derived joint photon-number distributions in signal and idler modes and have illustrated related concepts taking into account experimental data. Peˇrina and Kˇrepelka (2006) have provided the generalization of this description to stimulated parametric down-conversion. Peˇrina et al. (2007) have reported on a measurement of the joint signal–idler photoelectron distribution of twin beams. Parameters of the previously published model (Peˇrina and Kˇrepelka 2005) have been estimated. The specific result that the joint signal–idler quasidistribution of integrated intensities can be approximated by a well-behaved function even in the case where the quasidistribution is not an ordinary function has been comprised. Peˇrina (2008) has shown that a nonlinear planar waveguide pumped by a beam orthogonal to its surface may serve as a versatile source of photon pairs. He considers the pump-pulse duration, pump-beam transverse width, and angular decomposition of the pump-beam frequency and their effect on characteristics of a photon pair, such as the spectral widths of signal and idler fields, the pair time duration, and the degree of entanglement between the two fields. Chapter 3 Macroscopic Theories and Their Applications There were several attempts at a justification of the momentum-operator approach. It is appropriate that they include quantization of the electromagnetic field at least in the one-dimensional case. A complete analysis could be provided only for the parametric processes, in which the momentum operator is effectively quadratic. It has been noted that the nonlinearity of the process may lead up to a need of a renormalization. Nevertheless, there is a modicum of papers on this theme in quantum optics. A general approach to quantization of the electromagnetic field in a nonlinear medium enables one to compare properties of the momentum operator with those of the space–time displacement operator. We present applications of the traditional approach in quantum optics. The spatio-temporal approach has been developed with respect to quantum solitons. An attempt has been made to take into account the frequency dispersion of a medium at least up to inclusion of the group velocity and to preserve the traditional descriptions of nonlinear processes by introducing narrow-band fields. A mention of the quasimode description of the spectrum of squeezing will be restricted to an analysis of coupling of the cavity modes and propagating modes. The paraxial propagation of a light beam with the parabolic approximation and the asymptotic expansion of the beam has been completed with a quantized version. As an example the nonlinear process has been presented, whose description includes the renormalization. In optical imaging with nonclassical light one wants to investigate the quantum fluctuations of light at different spatial points in the plane perpendicular to the propagation direction of the light beam. In such spatial points very small photodetectors or pixels may be located. Finally the application of one of the macroscopic approaches has led to the description of several linear optical devices and to the study of radiating atoms in a linear medium, which is a recurrent theme by the way. 3.1 Momentum-Operator Approach Several papers devoted to macroscopic approaches to quantization of the electromagnetic field advocate the momentum operator. In general, such an operator A. Lukˇs, V. Peˇrinov´a, Quantum Aspects of Light Propagation, C Springer Science+Business Media, LLC 2009 DOI 10.1007/b101766 3, 3 Macroscopic Theories and Their Applications should be one of the space–time displacement operators. To the contrary, almost paradoxical properties of these operators have been derived. The exposition is concluded with applications of the traditional approach to the nonlinear optical couplers. 3.1.1 Temporal Modes and Their Application Huttner et al. (1990) have developed a formalism that describes in a fully quantummechanical way the propagation of light in a linear and nonlinear lossless dispersive medium. At first, they assume a similar situation as Abram (1987), i.e. they consider only the one-dimensional case restricting themselves even to fields propagating in the +z-direction only. They take for granted that in quantum field theory there is a generator for spatial progression, i.e. that relation (2.49) holds for any operator. They remark that the change in the quantization volume pointed out by Abram (1987) is not defined when the medium is dispersive, i.e. when the refractive index depends on the frequency, but they develop Abram’s idea of the use of the energy flux not being dependent on the medium (cf. Caves and Crouch 1987). In their opinion, the classical analysis of nonlinear optical processes shows that in order to obtain simple equations of propagation it is useful to introduce photon-flux amplitude, i.e., a quantity whose square is proportional to the photon flux. At present we hesitate to accept the consequences of their approach (cf. however Ben-Aryeh et al. 1992). Specifying the state at a given point (e.g. z = 0) and within a time period T cannot substitute specifying the state at an initial time (t = 0) and within a quantization length L. Temporal modes of discrete frequencies ωm , , cannot substitute the spatial modes. The equal-space commutation where ωm = 2mπ T relations ˆ ωi ), aˆ † (z, ω j )] = δi j 1ˆ [a cannot substitute the usual equal-time commutation relations. In comparison with Abram (1987), we see the following changes. In Huttner et al. (1990), the MKSA (SI) system of units is used. Instead of immediately reducing the unsymmetrical Maxwell stress tensor to a single component, the momentum density is first reduced. The normal ordering is used where necessary. The notation ceases to express the dependence on both z and t and states the dependence on z only. “The generalization” of the relation for the momentum-flux operator ˆ (−) (z, t) Bˆ (+) (z, t) + H.c.], gˆ (z, t) = c[ D where H.c. means the Hermitian conjugate to the previous term, to the form ˆ (−) (z, t) Eˆ (+) (z, t) + H.c.] gˆ (z, t) = [ D ˆ is not founded well. Its integration over T gives the momentum operator G(z), t0 +T ˆ G(z) ≡ gˆ (z, t) dt. t0 Momentum-Operator Approach In the case of a linear dielectric medium in contrast with Abram (1987), the ˆ electric-field operator E(z, t) is dependent on the refractive index n(ωm ) ˆ E(z, t) = ωm ˆ ωm )e−iωm t + H.c. , a(z, 20 cT n(ωm ) while in Abram (1987) the operator is independent of the medium. In Abram (1987) there is not pure Heisenberg picture, so that the equivalence of the two theories (for n independent of ω) is not excluded. From relation (3.3), the momentum operator is obtained ˆ lin (z) = ˆ ωm ), (km )aˆ † (z, ωm )a(z, (3.6) G m where km = n(ωmc )ωm is the wave vector in the (linear) medium. The equal-space commutation relations are conserved. For such a medium, the equal-time commutation relations can be derived ˆ ˆ t), − D(z ˆ , t) = iδ(z − z )1. A(z, Attempting at the quantization in a nonlinear medium, Huttner et al. (1990) have concentrated on the propagation of light in a multimode degenerate parametric amplifier. The postulated relation (3.3) then leads to the nonlinear part of the momentum-flux operator 2 gˆ nonlin (z, t) = χ (2) E (+) (z, t) Eˆ (−) (z, t) + H.c. , where E (+) (z, t) = |E|e−i(ωp t−kp z) is the positive-frequency part of the pump field, with the pump frequency ωp . From relations (3.4) and (3.8), the momentum operator is obtained ˆ nonlin (z) = λ(m ) aˆ † (z, ω0 + m )aˆ † (z, ω0 − m )eikp z + H.c. , G 4 m where m ≡ ωm − ω0 , ω0 = ωp 2 and λ(m ) ≡ χ (2) |E| 0 c ω0 + m ω0 − m n(ω0 + m ) n(ω0 − m ) is the coupling constant between the different modes. 3 Macroscopic Theories and Their Applications It is assumed that the phase-matching condition at ω0 , n(ωp ) = n(ω0 ), is satisfied. It is found that the phase-mismatch Δk(m ) is proportional to m2 . As far as |Δk(m )| < λ(m ), the Bogoliubov transformation for squeezing emerges and amplifying behaviour can be recognized. In the case |Δk(m )| > λ(m ), the evolution is not essentially different from that in a linear medium, the squeezing effect is band limited. For the equality |Δk(m )| = λ(m ), the amplifying is present, but the increase is only linear not exponential. For the nonlinear medium, the equal-time commutation relations ˆ (+) (z , t) ≈ i δ(z − z )1ˆ ˆ (−) (z, t), − D A 2 and relation (3.7) can be recovered only approximately. In relation to the experiment, a standard two-port homodyne detection scheme is assumed, where the light is mixed at a beam splitter with a strong local oscillator ε(z, t) of the frequency ω0 . For the correlation function g S (τ ) of the photocurrent difference and its Fourier transform (3.13) y(η) = g S (τ )e−iητ dτ, we refer to Huttner et al. (1990). It has been shown that the values of y(η) can be minimized uniformly enough by an adequate choice of the local oscillator phase. The result is comparable with Crouch (1988), where the usual interpretation of homodyne detection in terms of the field quadratures is used. 3.1.2 Slowly Varying Amplitude Momentum Operator Nevertheless, there is a class of problems, for which the modal approach is very convenient. It is the cavity quantum electrodynamics. Let us mention its use in the development of the input–output formalism for nonlinear interactions in cavity (Yurke 1984, 1985, Collett and Gardiner 1984, Gardiner and Collett 1985, Carmichael 1987). The modal approach can describe many of the features of travelling-wave phenomena, but, in principal, it mixes effects related to spatial progression of the beam with the spectral manifestations of the nonlinearity. For example, for the case of the travelling-wave parametric generation (Tucker and Walls 1969), a wave vector mismatch appears as energy (frequency) nonconservation. Several authors have tried to return the quantum-mechanical propagation to direct space. One technique (Drummond and Carter 1987) involves the partition of the box of quantization into finite cells. Another technique considers the spatial progression of the temporal Fourier components of the local electric field (Yurke et al. 1987, Caves and Crouch 1987). The propagation of light in a magnetic (dielectric) medium is not considered in quantum optics. We proceed with the field inside an effective (linear or nonlinear) medium and the direct-space formulation of the theory of quantum optics as presented by Abram and Cohen (1991). It is an alternative of the conventional reciprocal space approach to Momentum-Operator Approach quantum optics. Their approach relies on the electromagnetic momentum operator as well as on the Hamiltonian and is restricted to the dispersionless lossless nonmagnetic dielectric medium. They have derived an operatorial wave equation that relates the temporal evolution of an electromagnetic pulse to its spatial progression. The theory is applied to squeezed light generation by the parametric down-conversion of a short laser pulse as an illustration. This approach does not use the conventional modal description of the field. The appeal of the classical theory of optics may consist in its considering material as a continuous dielectric characterized by a set of phenomenological constants. In classical nonlinear optics the slowly varying amplitude approximation of the electromagnetic wave equation has arisen. An important simplification of quantum optics results when the microscopic description of the material is replaced by a macroscopic description, in terms of an effective linear or nonlinear polarization. In spite of the phenomenological treatment of the medium, such an effective theory still permits a quantum-mechanical description of the field (Jauch and Watson 1948, Shen 1967, Glauber and Lewenstein 1989, Glauber and Lewenstein 1991, Hillery and Mlodinow 1984, Drummond and Carter 1987). In propagation problems, the interactions undergone by a short pulse of light are examined. Abram and Cohen (1991) simplify the geometry for the electromagnetic field so that the electric field E is polarized along the x-axis, the magnetic field B along the y-axis, while propagation occurs along the z-axis. They use the Heaviside– Lorentz units and take = c = 1. In this simple geometry the Maxwell equations reduce to two scalar differential equations ∂B ∂E =− , ∂z ∂t ∂D ∂B =− , ∂z ∂t (3.14) (3.15) where the electric displacement field D is defined by D = E + P, with P being the polarization of the medium, which can be expressed as a converging power series in the electric field E, P = χ (1) E + χ (2) E 2 + · · · + χ (n) E n + · · · , where χ (n) is the nth-order susceptibility of the medium. The dispersion cannot be taken into account rigorously within a quantum-mechanical theory based on the effective (macroscopic) Hamiltonian formulation (Hillery and Mlodinow 1984), but it can be introduced phenomenologically (Drummond and Carter 1987). To impose the canonical structure on the field, they introduce the vector potential A and adopt the Coulomb gauge in which the scalar potential vanishes and A is transverse. In the assumed geometry, the vector potential is polarized along the x-axis and is related 3 Macroscopic Theories and Their Applications to the electric and magnetic fields by ∂A ∂t ∂A . ∂z E =− and B= The effective Lagrangian density has been chosen (Hillery and Mlodinow 1984, Drummond and Carter 1987), 1 1 1 1 2 (3.20) (E − B 2 ) + χ (1) E 2 + χ (2) E 3 + χ (3) E 4 + · · · , 2 2 3 4 which is known to be the most general density dependent only on the electric field and having the gauge invariance. Let us note that the theory with the effective Lagrangian density (3.20) is not renormalizable (Power and Zienau 1959, Woolley 1971, Babiker and Loudon 1983, Cohen-Tannoudji et al. 1989). The canonically conjugated momentum of A with respect to the Lagrangian density (3.20) is the electric displacement L= ∂L = −D. ˙ ∂A The Lagrangian density is then transformed to some components of the energy– momentum tensor of the electromagnetic field inside a nonlinear medium Θμν , namely the energy density Θtt = Π = ∂A −L ∂t 1 2 3 1 2 (B + A2 ) + χ (1) E 2 + χ (2) E 3 + χ (3) E 4 + · · · , 2 2 3 4 and the momentum density Θt z = −Π ∂A = D B. ∂z In setting up the Hamiltonian functional, the electric field E is to be expressed in terms of the electric displacement, which is the canonically conjugated momentum of A according to relation (3.21). It is assumed that E = β (1) D + β (2) D 2 + β (3) D 3 + · · · , where the β coefficients may be expressed in terms of the susceptibilities χ (n) through definition (3.16) and relation (3.17) (Hillery and Mlodinow 1984). Momentum-Operator Approach The Hamiltonian functional is then written as Θtt d3 r H= V 1 2 1 (1) 2 1 (2) 3 1 (3) 4 B + β D + β D + β D + . . . d3 r, = 2 2 3 4 V while the momentum has the form 3 G= Θt z d r = B D d3 r, V where the integration is over the cavity obeying periodic boundary conditions and lower and upper limits are supposed to converge to −∞ and ∞, respectively. The field can now be quantized by replacing each field variable by the corresponding operator and by replacing the Poisson bracket between the displacement D and the vector potential A by the equal-time commutator ˆ ˆ ˆ , t)] = iδT (r − r )1, [ D(r, t), A(r exactly by its (−i) multiple, where the transverse δT (r − r ) reduces to the ordinary δ function and where the three-dimensional position vector r can be replaced by the coordinate z. ˆ does not appear explicitly in The vector potential A replaced by the operator A the Hamiltonian (3.26) and momentum (3.27) operators, but rather in terms of its ˆ Taking the curl according to r of both the sides of the canonical spatial derivative B. commutation relation (3.28) and using relation (3.19) in the simple geometry, we obtain that ˆ ˆ ˆ , t)] = −iδ (z − z )1, [ D(z, t), B(z where δ (z − z ) = d δ(z − z ) dz is the derivative of the δ function. Not caring for divergencies, Abram and Cohen (1991) consider any product of noncommuting operators that appear in an expression as fully symmetrized, i.e. including all possible permutations of the individual field operators, such as B D 2 → ˆ Bˆ D ˆ +D ˆ 2 Bˆ ˆ2+D Bˆ D . 3 In contrast, more recently they carried out the renormalization, i.e. the normal ordering and an elimination of divergencies. 3 Macroscopic Theories and Their Applications The description of propagative optical phenomena has been discussed within the framework of a direct-space formulation of quantum optics and the operatorial (or better commutator) equivalent of the Maxwell equations and the electromagnetic wave equation (Abram and Cohen 1991). It is emphasized that in the Hamiltonian formulation of mechanics the time variable plays a particular role. The integrals in (3.26) and (3.27) and the equal-time commutator (3.28) correspond to the requirement that the field be specified over all space at one instant of time (e.g. at t = 0). Abram and Cohen (1991) use the Kubo (1962) notation for the commutator or more exactly for a corresponding superoperator. The superoperator assigns operators to operators. Respecting this, the Heisenberg equation can be written as follows ˆ ∂Q ˆ , Q] ˆ = iH ˆ × Q, ˆ = i[ H ∂t ˆ is any field operator and the superscript × denotes the superoperator, where Q namely the commutation of the operator it loads with another operator which follows. Equation (3.32) has the solution ˆ ˆ ˆ × ) Q(0) Q(t) = exp(it H ˆt ˆ ˆ iH −i H = e Q(0)e t . (3.33) (3.34) The Heisenberg-like equation involving the momentum can be considered ˆ ∂Q ˆ ˆ × Q. = −iG ∂z ˆ ˆ × ] Q(z ˆ 0 ). Q(z) = exp[−i(z − z 0 )G This equation has the solution Apart from the obvious similarity of equations (3.32) and (3.35), there is also a difference. The Hamiltonian of the electromagnetic field relates the desired spatial distribution of the field at the instant t + dt to its spatial distribution at t, but the ˆ relates the translated and nontranslated fields only (at the momentum operator G same instant of time). In analogy with the classical equations (3.14), (3.15) rather than with the classical equations ∂B ∂(β (1) D + β (2) D 2 + β (3) D 3 + · · · ) =− , ∂z ∂t (3.37) ∂D ∂B =− , ∂z ∂t Momentum-Operator Approach two commutator equations can be derived ˆ × B, ˆ ˆ × Eˆ = H G × × ˆ Bˆ = H ˆ D. ˆ G (3.38) (3.39) On assuming that the medium is homogeneous, so that the Hamiltonian and momentum operators commute with each other, that is ˆ ˆ = 0, ˆ ×H G Equations (3.38) and (3.39) may be combined into the commutator equivalent of the electromagnetic wave equation ˆ × Eˆ = H ˆ ×H ˆ × D. ˆ ˆ ×G G Let us note again that it is an analogue of the wave equation ∂2 E ∂2 D = , 2 ∂z ∂t 2 not of the more complicated equation ∂2 D ∂ 2 (β (1) D + β (2) D 2 + β (3) D 3 + · · · ) = . 2 ∂z ∂t 2 In Abram and Cohen (1991) the direct-space description of propagation (i.e. without resorting to a modal decomposition of a propagating pulse) is illustrated by examining the propagation of light (of the short light pulse) through a linear medium and through a vacuum–dielectric interface. For a linear medium the commutator wave equation (3.41) reduces to ˆ ˆ ×H ˆ × − H ˆ × ) Eˆ = 0, ˆ ×G (G where = 1 + χ (1) is the dielectric function of the medium. It is also convenient to define v = √1 , the velocity of an electromagnetic wave in a refractive medium. In the following exposition, the convention c = 1 will be dissolved or not used. Wave equation (3.44) enables one to rewrite equations (4.2a) and (4.2b) of Abram and Cohen (1991) in the form ˆ ˆ × ) B(z, ˆ 0), ˆ ˆ × ) E(z, 0) − iv sinh(−ivt G E(z, t) = cosh(−ivt G ˆ ˆ × ) B(z, ˆ 0). ˆ × ) E(z, ˆ t) = − i sinh(−ivt G 0) + cosh(−ivt G B(z, v (3.45) (3.46) Equations (3.45) and (3.46) indicate that the linear combination ˆ ˆ t) ˆ v+ (z, t) = E(z, t) + v B(z, W 3 Macroscopic Theories and Their Applications evolves in time as ˆ v+ (z, t) = exp(ivt G ˆ × )W ˆ v+ (z, 0) = W ˆ v+ (z − vt, 0). W Similarly, the linear combination ˆ ˆ t) ˆ v− (z, t) = E(z, t) − v B(z, W ˆ × )W ˆ v− (z, 0) = W ˆ v− (z + vt, 0). ˆ v− (z, t) = exp(−ivt G W evolves as To examine the problem of the interface, we now consider two half-spaces such that the z ∈ (−∞, 0) half-space is empty, while the z ∈ (0, +∞) half-space consists ˆ c+ (z, t), of a transparent linear dielectric. We then consider three waves, incident, W − + ˆ ˆ reflected, Wc (z, t), and transmitted, Wv (ζ, t), and the relations they obey. Abram and Cohen derive the commutator equivalent of the slowly varying amplitude wave equation, on which the classical theory of nonlinear optics is based (Abram and Cohen 1991). Not even in classical optics, the problem of propagation of a short pulse in a nonlinear medium can be solved in the general case. In classical nonlinear optics, the assumption of a weak nonlinearity makes the slowly varying amplitude (SVA) approximation of the electromagnetic wave equation possible (Shen 1984). In Abram and Cohen (1991) further a perturbative treatment of the time evolution of the field in a nonlinear medium is examined that corresponds to propagation within the slowly varying amplitude approximation. For simplicity, a single nonlinear susceptibility χ (n) is considered. In the perturbative treatment of the nonlinear propagation it is assumed that the optical nonlinearity of the medium is absent at t = −∞ and turned on adiabatically. In the absence of the nonlinearity, the electric and magnetic fields in the medium, ˆ 0, Eˆ 0 and Bˆ 0 , as well as the displacement field D ˆ 0 = Eˆ 0 D propagate under the energy operator ˆ0 = 1 H 2 1 ˆ2 Bˆ 02 + D 0 d3 r and the momentum operator ˆ0 = G ˆ 0 d3 r, Bˆ 0 D which are of zeroth order in the nonlinear susceptibility χ (n) . Following the standard perturbation theory (Itzykson and Zuber 1980), the exact field operators in the Momentum-Operator Approach ˆ and B, ˆ can be derived from the zeroth-order fields by the nonlinear medium, D unitary transformation ˆ ˆ 0 (z, t)Uˆ (t) D(z, t) = Uˆ −1 (t) D ˆ t) = Uˆ −1 (t) Bˆ 0 (z, t)Uˆ (t). B(z, Here Uˆ (t) is the unitary operator, which is the solution to the differential equation ∂ ˆ ˆ˜ (t)Uˆ (t), U (t) = −iλ H 1 ∂t ˜ˆ 1 (t) the nonlinear interaction part of the Hamiltonian, with H ˆ˜ (t) ≡ H 1 1 n+1 ˆ n+1 d3 r = − β (n) D 0 1 n+1 χ (n) Eˆ 0n+1 d3 r or ˆ˜ (τ ) = exp(iτ H ˆ ×) H ˆ 1, H 1 0 and obeys the initial condition , , Uˆ (t), ˆ = 1. ˆ 1 is first order in the nonlinear susceptibility χ (n) , which is The Hamiltonian H expressed also by λ, a dimensionless parameter, which has been introduced for the bookkeeping of this and higher powers of χ (n) . The exact Hamiltonian (3.26) can then be expressed perturbatively up to the first order in λ as ˆ =H ˆ 0 + λH ˆ 1S + O(λ2 ), H ˆ 1 , S stands for stationary, which commutes ˆ 1S is the “diagonal part” of H where H ˆ with the linear Hamiltonian H0 , (n) ˆ 1S ≡ H ˆ (n) = − χ H 1S n+1 Sˆ n+1 d3 r, with −n+1 Sˆ n = 2 [ n2 ] n! −m Bˆ 02m Eˆ 0n−2m , (n − 2m)!(2m)! m=0 3 Macroscopic Theories and Their Applications which leads to the decoupling of opposite-going fields. In the context of (3.61) and (3.62), a connection with the modal approach has been mentioned in Abram and Cohen (1991). The notion of the diagonal part belongs to the perturbation theory which was treated in Sczaniecki (1983). ˆ can be According to (3.33), the time evolution of the displacement operator D explained and cast in the form ˆ 0 (z, t) + λ D ˆ 1 (z, t), ˆ ˆ × )D D(z, t) ≈ exp(itλ H 1S where λ ≡ 1 and ˆ D1 (z, t) = i ˆ˜ (τ ) dτ H 1 × ˆ 0 (z, t) D is the first-order correction to the displacement field. The action of the superoperator ˆ 0 in relation (3.63) can be compared with the multiplication of the fast-varying on D (“carrier”) wave by a slowly varying envelope function. On introducing the nonlinear polarization ˆ 0n = χ (n) Eˆ 0n , Pˆ NL = −β (n) D it can be shown that the exact commutator wave equation (3.41) can be written up to order λ0 as ˆ ×H ˆ 0 = 0ˆ ˆ ×G ˆ × − H ˆ ×) D (G 0 0 0 0 ˆ×ˆ×ˆ ˆ×ˆ×ˆ ˆ×ˆ×ˆ ˆ ×H ˆ ×D ˆ 2 H 0 1S 0 = − H0 H0 D1 + G 0 G 0 D1 − G 0 G 0 PNL . and that to order λ1 as The nonlinear polarization Pˆ NL consists of two parts, Pˆ W and its complement, and Pˆ W obeys the zeroth-order wave equation ˆ ˆ ×H ˆ ×G ˆ × − H ˆ × ) Pˆ W = 0, (G 0 0 0 0 Pˆ W ≡ Pˆ W(n) = χ (n) Sˆ n . This partition again eliminates all terms that couple opposite-going waves in Pˆ NL . Relying on the relation ˆ ×G ˆ×ˆ× ˆ ˆ ˆ ×D ˆ1+G ˆ ×D ˆ ˆ ×H 0ˆ = − H 0 0 0 0 1 − G 0 G 0 ( PNL − PW ), we can derive the commutator equation ˆ ×H ˆ×ˆ×ˆ ˆ ×D ˆ 2 H 0 1S 0 = −G 0 G 0 PW , Momentum-Operator Approach which has been compared to the classical slowly varying amplitude (SVA) wave equation, which is written as Abram and Cohen (1991) 2ik ∂2 ∂ ˜ E = 2 PW , ∂z ∂t or, more often, in terms of the temporal Fourier components of E˜ and PW as ∂ ˜ iω E(ω) = √ PW (ω), ∂z 2 where E˜ is the envelope function of the electric field. Also concerning Pˆ W , the connection to the modal approach has been shown in Abram and Cohen (1991). The commutator equivalent of the slowly varying amplitude wave equation will be applied to the quantum-mechanical treatment of the propagation in a nonlinear medium. Let us consider equation (3.71) whose right-hand side does not contain ˆ 0 in contrast to the left-hand side. This problem can be remedied by defining an D effective “SVA” momentum operator such that it obeys 1 ˆ×ˆ ˆ ˆ× D G SVA 0 = G 0 PW . 2 ˆ SVA is the stationary part of the effective “interaction” momentum The solution G ˆ 1, operator G ˆ1 = 1 (3.75) Bˆ 0 Pˆ NL d3 r, G 2 namely (n) ˆ (n) = χ ˆ SVA ≡ G G SVA n+1 Rˆ n+1 d3 r, with −n+1 Rˆ n = 2 n [ 2 ]−1 n! −m Bˆ 02m+1 Eˆ 0n−2m−1 . (n − 2m − 1)!(2m + 1)! ˆ SVA , the connection to the modal Also, in the context of (3.76) and (3.77) for G approach can be shown. With definition (3.74), the commutator wave equation (3.71) can be written as ˆ ˆ× ˆ× ˆ ˆ× ˆ× G (G SVA 0 + H1S H0 ) D0 = 0. In this form, the commutator SVA equation relates directly the slow component ˆ 0 to the longof the temporal evolution of a short pulse of the displacement field D scale modulation of its spatial 3 Macroscopic Theories and Their Applications In order to clarify the role of Equation (3.78), the forward (+) and backward (−) polarization waves are defined in analogy with (3.47) and (3.49), ˆ ± Vˆ ± = D ˆ B, which in the absence of the nonlinearity have the form ˆ v± , Vˆ 0± = W where in accordance with the perturbation theory the forward and backward electromagnetic waves are defined as ˆ v± = Eˆ 0 ± v Bˆ 0 . W ˆ × ) W ˆ v+ (z, t) + Vˆ + (z, t), Vˆ + (z, t) ≈ exp(it H 1S 1 ˆ × ) W ˆ v− (z, t) + Vˆ − (z, t), Vˆ − (z, t) ≈ exp(it H 1S 1 Relation (3.63) now becomes where Vˆ 1± are the first-order corrections to Vˆ given by equations analogous to (3.64). Equation (3.78) simplifies to √ ˆ v+ = −G ˆ+ ˆ ×W ˆ× W H SVA v , √ 1S ˆ ×W ˆ× ˆ − ˆ− H 1S v = G SVA Wv , (3.84) (3.85) for the forward-going and backward-going waves, respectively. These equations provide a simple rule for converting the temporal evolution of the modulation envelope to the spatial progression, i.e. relations (3.82) and (3.83) can be written as follows ˆ × )W ˆ v+ (z − vt, 0) + Vˆ + (z, t), Vˆ + (z, t) = exp(−ivt G SVA 1 ˆ × )W ˆ v− (z + vt, 0) + Vˆ − (z, t). Vˆ − (z, t) = exp(ivt G SVA 1 (3.86) (3.87) ˆ v± can In most practical situations, the first-order terms Vˆ 1± may be neglected and W be introduced also on the left-hand sides of equations (3.86) and (3.87) using (3.80). Nevertheless, Vˆ 1± play an important role in that they incorporate the coupling to the wave going in the opposite direction and do give rise to the nonlinear reflection. As an illustration of the above quantum treatment, the travelling-wave generation of squeezed light by the parametric down-conversion of a short pulse is examined. For the case of a classical pump, this problem was treated through a modal analysis by Tucker and Walls (1969). More recently, Yurke et al. (1987) and Caves and Crouch (1987) treated this problem by using spatial differential equations for appropriately defined creation and annihilation operators. Momentum-Operator Approach ˆ can be written in terms of a carrier wave W ˆ as The full modulated wave W follows ˆ (z, t) = exp(−iz G ˆ × )W ˆ (z, t), W SVA where the subscript v and the superscript + have been omitted. For a medium that exhibits a second-order nonlinearity, the right-hand side of relation (3.84) can be expressed as ˆ× W ˆ = − 1 χ√ H ˆ 2. ˆ ×W −G SVA 2 0 (2) It is convenient to separate the field into its positive- and negative-frequency parts: ˆ = 2 Eˆ (+) + Eˆ (−) , (3.90) W where the factor of 2 arises, because it is not in definition (3.47). Similarly, the modˆ can be separated into its positive- and negative-frequency ulated wave solution W parts ˆ = 2 Eˆ (+) + Eˆ (−) . W (3.91) In the first-order perturbative treatment, the optical frequencies retain the same sign and relation (3.88) can be modified to (+) ˆ × ) Eˆ (+) (z, t), Eˆ (z, t) = exp(−iz G SVA with a similar equation holding for the negative-frequency part. In Abram and Cohen (1991), equations similar to familiar classical first-order differential equations have been derived (see Shen 1984). The two fields involved in parametric down-conversion are introduced: the pump field with the central pump frequency ω P and the signal field that oscillates at approximately ω S , ω S = ω2P . In (+) (+) fact, we introduce also the notation Eˆ (±) , Eˆ (±) , Eˆ , Eˆ , and we modify relation P (3.88) to (+) ˆ × ) Eˆ (+) (z, t), Eˆ P (z, t) = exp(−iz G P SVA (+) ˆ × ) Eˆ (+) (z, t), Eˆ S (z, t) = exp(−iz G S SVA and similar equations for the negative-frequency parts. The pump field consists of a short pulse whose duration TP is much longer than the optical period ω2πP . On this assumption, relation (3.89) for the signal field becomes ˆ × Eˆ (+) = −κ Eˆ (+) Eˆ (−) , G P S SVA S ˆ × Eˆ (−) G SVA S ˆ (+) κ Eˆ (−) P ES , 3 Macroscopic Theories and Their Applications where κ = χ (2) √ωS , ˆ × Eˆ (+) = −κ Eˆ (+) Eˆ (+) G S S SVA P ˆ × Eˆ (−) = κ Eˆ (−) Eˆ (−) . G S S SVA P ˆ = Eˆ (+) , Eˆ (−) , Eˆ (+) , Eˆ (−) into relation Let us observe that on the substitution Q S S P P (3.35) and comparison with (3.94), (3.95), (3.96), and (3.97), we have an analogue of the classical description of the spatial progression. Within the undepleted pump assumption, the solution of (3.93) becomes / ˆE (+) (z, t) = cosh κz Iˆ (z, t) Eˆ (+) (z, t) N P S S 9 : / ˆ (+) E (z, t) Eˆ S(−) (z, t), + i sinh κz Iˆ P (z, t) +P ˆI P (z, t) N where ˆ (+) Iˆ P (z, t) = Eˆ (−) P (z, t) E P (z, t) is essentially the intensity operator for the pump field and the subscript N denotes ˆ (−) the normal ordering of the operators Eˆ (±) P , which means that E P stands to the left from Eˆ (+) P . Deviating slightly from Abram and Cohen (1991), we formulate the interaction picture as follows: |(P + S)(t) = Uˆ (t)|P(t) , where |P(t) is the state of the field (pump and signal) in the remote past, |P(t) = |P(t) P ⊗ |0 S . We are now in a position to describe a travelling-wave experiment of parametric down-conversion. In such an experiment, a pump pulse expressed by the state |P(t) P initially traverses a nonlinear crystal that extends from z = 0 to L and generates a signal pulse in the course of its propagation. Using the previous approximations, we rewrite (3.100) as ˆ SVA vt)|P(t) . |(P + S)(t) = exp(iG Momentum-Operator Approach The dependence on z which has been introduced by the substitution t = vz in the previous exposition is not explicit here. The interaction picture is not needed for calculation of expectation values of the observables. For example, a measurement of the intensity profile of the signal pulse can be expressed by the equal-time function I S (t) (−) (+) I S (t) = P(t)| Eˆ S (L , t) Eˆ S (L , t)|P(t) (+) (+) Since Eˆ S (L , t) = Eˆ S (L − vt, 0), relation (3.103) after a simplification yields - , .,, " L L ,, L , I S (t) = P P t − ,sinh2N κ L Iˆ P 0, t − ,P t− , v , v v × S 0| Eˆ S(+) Eˆ S(−) |0 S , with Iˆ P 0, t − Lv = Iˆ P (L − vt, 0); relation (3.104) is simplified by omitting the terms without the antinormal ordering of signal field operators. This can be considered as legitimate, because the vacuum expectation value of the operator product ˆ (+) ˆ (−) S 0| E S E S |0 S diverges. There exist results for the two-time correlation function (1) g S (t2 , t1 ) and for the nth-order photon-coincidence rate for the signal pulse g S(n) . The numerical results have been obtained for a laser pulse that has an amplitude profile A P (t ) at z = 0, but have been restricted to the intensity profile measured at the exit of the crystal z = L. Let us assume that |P(t ) is a coherent state with the property Eˆ (+) P (0, t )|P(t ) = A P t U P t |P(t ) , , , A P t ≥ 0, ,U P t , = 1. , , * ) L ,, ˆ L ,, L 2 L = A , P t − 0, t − t − I P t − P P P v , v , v P v so that (after a renormalization) L I S (L , t) = sinh2 κ L A P t − . v 3 Macroscopic Theories and Their Applications 3.1.3 Space–Time Displacement Operators Serulnik and Ben-Aryeh (1991) have discussed a general problem of the electromagnetic wave propagation through nonlinear nondispersive media. They have used the four-dimensional formalism of the field theory in order to develop an extension of the formalism introduced by Hillery and Mlodinow (1984). The complications following from the common definitions for the vector and scalar potentials are indicated. It is shown that the scalar potential can be neglected only by using alternative definitions. First, it is shown that the conventional approach that uses the standard potentials A and V is not appropriate for treating the general case of nonlinear polarization when ∇ · P = 0, since for such cases V does not vanish. As a solution to this problem it is proposed to use vector potential ψ, D = −∇ × ψ, which fulfils the relation ∇ · D = 0. This choice enables Serulnik and Ben-Aryeh (1991) to work in the new Coulomb gauge, where ∇ · ψ = 0, so that from the condition ∇ · B = 0 it follows that the dual scalar potential ξ obeys the equation ∇ 2 ξ = 0. It is then consistent to assume ξ = 0 everywhere in a nonlinear medium and the dual scalar potential need not be taken into account. The Lagrangian and Hamiltonian densities are derived from the Maxwell equations by using nonconventional definitions for the scalar and vector potentials. The general form of the energy–momentum tensor is derived and explicit expression for its elements is given. The relation between this tensor and the space–time description of propagation is analysed. Further the quantization is performed and the properties of space–time displacement operators are presented. The space–time is described by a Lie transform (Steinberg 1985). The displacement operators are obtained from the energy–momentum tensor developed by Serulnik and Ben-Aryeh (1991) with an alternative definition for the vector potential. It has been possible to obtain explicit expressions for all the elements of the energy–momentum tensor and to discuss their physical meaning. In the following we will show that the relationship between the energy–momentum tensor and the space–time description of propagation is different from that derived by Serulnik and Ben-Aryeh (1991). Let us restrict ourselves to the usually treated one-dimensional case, where only the fields E 1 , D1 , B2 , and Λ2 are significant, where we use Λ2 = −ψ2 according to Drummond (1990, 1994). The arguments of these fields are x3 and ct. The corresponding quantum fields obey the commutation relation ˆ ˆ 2 (x3 , ct), Bˆ 2 (y3 , ct)] = icδ(x3 − y3 )1, [Λ ˆ ˆ 1 (x3 , ct), Bˆ 2 (y3 , ct)] = −icδ (x3 − y3 ) 1. [D (3.111) (3.112) Momentum-Operator Approach Considering for this case the Hamiltonian density ⎞ ⎛ ζ ;<=> ⎟ ⎜ 1 ˆ = 1⎜ ˆ 12 + Bˆ 22 ⎟ H D ⎟, ⎜ ⎠ 2⎝ where the right-hand side is symmetrically ordered (cf. Abram and Cohen 1991), we obtain the equations of motion in the Heisenberg picture as follows ˆ2 ∂Λ i ˆ ˆ = − Λ2 , H dx3 = c Bˆ 2 , ∂t ˆ1 ∂ 1 D i ˆ ∂ Bˆ 2 ˆ dx3 = −c . =− B2 , H ∂t ∂ x3 (3.114) (3.115) Relation (3.114) explains the role of the dual vector potential and relation (3.115) is essentially the second of the Maxwell evolution equations. We could obtain the first ˆ 1. of them as the equation of motion for the quantum field D It is a question whether the tensor element 1 ˆ ˆ Tˆ 03 = ( D 1 B2 )S c is a correct quantum density for generation of the displacement as Serulnik and Ben-Aryeh (1991) indicate. The presumable equations of the spatial progression are ˆ2 ∂Λ i ˆ ˆ 1 Bˆ 2 )S dx3 = − D ˆ 1, = Λ2 , ( D ∂ x3 c i ˆ ∂ Bˆ 2 ∂ Bˆ 2 ˆ ˆ = . B2 , ( D1 B2 )S dx3 = ∂ x3 c ∂ x3 (3.117) (3.118) Relation (3.117) expresses the role of the dual vector potential and relation (3.118) ˆ 1 . This failure is a mere tautology. The same is obtained for the quantum field D of the application of the ordinary presentations of quantum field theory has been published by Ben-Aryeh and Serulnik (1991). Considering, in contrast, the equal-space commutators ˆ ˆ 1 (x3 , ct )] = 0, ˆ 2 (x3 , ct), D [Λ ˆ ˆ 2 (x3 , ct), Bˆ 2 (x3 , ct )] = icδ(ct − ct )1, [Λ ˆ ˆ ˆ [ B2 (x3 , ct), B2 (x3 , ct )] = icδ (ct − ct )1, ˆ ˆ 1 (x3 , ct), Bˆ 2 (x3 , ct )] = 0, [D ˆ ˆ 1 (x3 , ct )] = icδ (ct − ct )1, ˆ 1 (x3 , ct), D [D (3.119) (3.120) (3.121) (3.122) (3.123) 3 Macroscopic Theories and Their Applications we obtain peculiar equation of the spatial progression ˆ2 ∂Λ i ˆ ˆ ˆ ˆ 1, = Λ2 , ( D1 B2 )S dt = − D ∂ x3 ˆ ˆ1 i ˆ ∂D ˆ 1 Bˆ 2 )S dt = − ∂ B2 . D1 , ( D = ∂ x3 ∂t (3.124) (3.125) Relation (3.124) expresses the role of the dual vector potential and relation (3.125) is essentially the second of the Maxwell evolution equations. We could obtain the first of them as the equation of the spatial progression for the quantum field Bˆ 2 . Since we must often guess the commutators never known before, the above example is a warning against excessive trust in the spatial progression technique. For a medium with nonlinear polarization, the global nature of creation and annihilation operators is lost. By consistently following this idea, Serulnik and Ben-Aryeh (1991) have introduced the shift operators which, by their definition, are based on the energy–momentum tensor. They have followed in their treatment Peierls’ solution of the problem of momentum conservation in matter (Peierls 1976, 1985) by which the atoms or the bulk matter is considered to be at rest while the electromagnetic field is propagating. They show that it is always possible to relate the external field in front of the medium to that behind it by the use of the shift operators, that is by a Lie transformation. As we can see from relations (3.114), (3.115), (3.124), and (3.125), there exists no space–time description which in addition to the so-called time displacement operators suggests the use of their space-displacement analogues. Leonhardt (2000) has determined an energy–momentum tensor of the electromagnetic fields in quantum dielectrics. The tensor is Abraham’s plus the energy– momentum of the medium characterized by a dielectric pressure and enthalpy density (Abraham 1909). While the consistency of this picture with the theory of dielectrics has been demonstrated, a direct derivation from the first principles has been announced only. The theory of the radiation pressure on dielectric surfaces (Loudon 2002) accepts the expression for the momentum density of an electromagnetic wave in a transparent material medium due to Abraham (1909). The force of the radiation pressure is obtained similarly using the Lorentz force density operator as using the momentum density according to Abraham (1909, 1910). A colloquium has been devoted to the momentum of an electromagnetic wave in dielectric media (Pfeifer et al. 2007). 3.1.4 Generator of Spatial Progression Theoretical methods for treating propagation in quantum optics have been developed in which the momentum operator is used in addition to the Hamiltonian. A successful quantum-mechanical analysis has been given for various physical systems which include amplification and coupling between electromagnetic modes Momentum-Operator Approach (Toren and Ben-Aryeh 1994). Distributed feedback lasers have been described, but the overarching generalization of both successful analyses has not been developed. The authors have drawn attention to the distributed feedback lasers (Yariv and Yeh 1984, Yariv 1989), in which contradirectional beams are amplified by an active medium and are coupled by a small periodic perturbation of a refractive index. The energy and momentum properties of the electromagnetic field can be described, in a four-dimensional form, by the energy–momentum tensor T jk , where j,k = 0,1,2,3 (Roman 1969), ⎡ T jk H ⎢ Sx =⎢ ⎣ Sy Sz gx σx x σ yx σzx gy σx y σ yy σzy ⎤ gz σx z ⎥ ⎥. σ yz ⎦ σzz The tensor element T 00 represents the energy density. The density of the vectorial momentum (proportional to D × B) is represented by the vector (gx ,g y ,gz ). Let us take further, for example, line 3. The tensor element T 30 is the component of the Poynting vector standing for the flux of energy in the z-direction. The vector (σzx ,σzy ,σzz ) refers to a flux of momentum in the propagation direction of z. In the conventional approach (Roman 1969), the four-vector p 0k is defined as p 0k = T 0k (x, y, z, t) dx dy dz. The energy p 00 is used as the Hamiltonian for the description of time evolution, but the momentum component p 03 rather merely translates in the z-direction. BenAryeh and Serulnik (1991) have shown that for the description of the spatial progression, the four-vector p 3k p 3k = T 3k (x, y, z, t)c dt dx dy can be used. The momentum component in the z-direction p33 can be used as the generator of the spatial progression, but the energy p 30 rather merely causes translation of the field in time. In Toren and Ben-Aryeh (1994), problems of propagation are treated by expanding the field operators in terms of mode operators associated with definite frequencies. Starting with Caves and Crouch (1987), the approach has been associated with the conservation of commutation relations for creation and annihilation operators, which are space dependent (cf. Huttner et al. 1990). Imoto (1989) has developed the basic equation of motion by using a modified procedure of canonical quantization in which time and space coordinates are interchanged in comparison with the conventional procedure. Ben-Aryeh et al. (1992) did the same by using a slightly different notation. Toren and Ben-Aryeh (1994) dissociate themselves from such an approach, but they are not very explicit about the point that the use of 3 Macroscopic Theories and Their Applications integrals (3.128) is compatible with the canonical quantization in which the time coordinate plays the usual role. Linear amplification is treated by the use of momentum for space-dependent amplification. Travelling-wave attenuators and amplifiers can be treated as continuous limits of an array of beam splitters (Jeffers et al. 1993, Ban 1994). According to Toren and Ben-Aryeh (1994), the propagating modes are coupled to a momentum reservoir. The Hamiltonian of this system is given by the relation † ˆ = ωaˆ † aˆ − ω jres bˆ j bˆ j H and the total momentum operator is † † ˆ = β aˆ † aˆ − G β jres bˆ j bˆ j + (κ j aˆ † bˆ j + κ ∗j aˆ bˆ j ) , j where the subscript res stands for the reservoir and κ j are appropriate coupling constants, aˆ and bˆ j represent (in the zeroth-order) modes which are propagating in the positive direction of the z-axis. The equations of motion obtained from the momentum operator (3.130) are i daˆ † ˆ = iβ aˆ + i ˆ G] κ j bˆ j , = [a, dz j † dbˆ j dz i ˆ† ˆ † [b , G] = −iκ ∗j aˆ + iβ jres bˆ j . j By using the spatial Wigner–Weisskopf approximation, the Heisenberg–Langevin equations can be obtained 1 daˆ = i(β − Δβ) + γ aˆ + Lˆ † , dz 2 Δβ = −V.p. ∞ −∞ |κ(βres )|2 ρ(βres ) dβres , βres − β γ = {2π|κ(βres )|2 ρ(βres )} |βres =β , (3.134) (3.135) with ρ(βres ) the density function of the reservoir wave propagation constants β jres , and Lˆ † = † iκ j bˆ j eiβ jres z . Momentum-Operator Approach The codirectional coupling is analysed. It is assumed that two modes are propagating in the same direction and they are coupled by a periodic change in the refractive index. For a classical description, we refer to Yariv and Yeh (1984) and Yariv (1989). The Hamiltonian is given by ˆ = ω[aˆ † aˆ 1 + aˆ † aˆ 2 ], ˆ0 = H H 1 2 where the classical relation ω1 = ω2 = ω has been used. The total momentum operator is ˆ = [β1 aˆ † aˆ 1 + β2 aˆ † aˆ 2 + κ˜ aˆ 1 aˆ † + κ˜ ∗ aˆ † aˆ 2 ], G 1 2 2 1 where β1 and β2 are components of the wave vectors of the two modes in the propagation direction of z and im2π z , κ˜ = κ exp − Λ with κ a coupling constant, m an integer, and Λ the “wavelength” of the spatial periodic change in the index of refraction (a perturbation in the dielectric constant). In this connection papers Peˇrinov´a et al. (1991) and Ben-Aryeh et al. (1992) are criticized that they do not take account of the spatial dependence (3.139). The equations of motion obtained from the momentum operator (3.138) are i daˆ 1 ˆ = iβ1 aˆ 1 + iκ˜ ∗ aˆ 2 , = [aˆ 1 , G] dz i daˆ 2 ˆ = iκ˜ aˆ 1 + iβ2 aˆ 2 . = [aˆ 2 , G] dz (3.140) (3.141) We define slowly varying operators of the form ˆ 1 (z) ≡ aˆ 1 (z)e−iβ1 z , A ˆ 2 (z) ≡ aˆ 2 (z)e−iβ2 z . A Substituting the operators (3.142) into equations (3.140), (3.141), we get ˆ1 dA ˆ 2 e−iΔβ z , = iκ ∗ A dz ˆ2 dA ˆ 1 eiΔβ z , = iκ A dz where Δβ ≡ β1 − β2 − m 2π Λ 3 Macroscopic Theories and Their Applications is the mismatch. A “field” mismatch may be cancelled by a medium component. For the input–output relations we refer to Yariv and Yeh (1984), Yariv (1989), and Peˇrinov´a et al. (1991). In Peˇrinov´a et al. (1991), m = 0 and 2δ = −Δβ has been introduced in an application to the codirectional coupler. In general, the solution to equations (3.143), ˆ j ↔ A∗ , j = 1, 2, where A1 , A2 are (3.144) coincides with the classical solution, A j the amplitudes of the waves propagating in the +z-direction. The counterdirectional coupling is analysed in Toren and Ben-Aryeh (1994). The total Hamiltonian is given by ˆ = ω[aˆ † aˆ 1 − aˆ † aˆ 2 ]. ˆ0 ≡ H H 1 2 The momentum operator is ˆ = [β1 aˆ † aˆ 1 − β2 aˆ 2 aˆ † + κ˜ aˆ 1 aˆ † + κ˜ ∗ aˆ † aˆ 2 ]. G 1 2 2 1 It is reasonable that the Hamiltonian and the zeroth-order operator are related, respectively, to the flux of energy and that of the component of momentum in the † z-direction. Compared to Toren and Ben-Aryeh (1994), the operators aˆ 2 and aˆ 2 have † ? ˆ which we obtained been exchanged. They criticize our assumption [aˆ 2 , aˆ 2 ] = −1, † in Peˇrinov´a et al. (1991) from the usual equal-space commutator [aˆ 2 , aˆ 2 ] = 1ˆ by this interchange. It is tempting to have the same alternation between the opposite-going modes as can be seen in comparison of (3.524) with (3.526) (cf. Abram and Cohen 1994). The equations of motion obtained from operator (3.147) are given by , i daˆ 1 ˆ, ˆ 1 + iκ˜ ∗ aˆ 2 , = [aˆ 1 , G † ] = iβ1 a aˆ 2 ↔aˆ 2 dz , i daˆ 2 ˆ, ˜ aˆ 1 + iβ2 aˆ 2 . = [aˆ 2 , G † ] = −iκ aˆ 2 ↔aˆ 2 dz (3.148) (3.149) Using the slowly varying operators (3.142), in contrast with Toren and Ben-Aryeh (1994), we obtain that ˆ1 dA ˆ 2 e−iΔβ z , = iκ˜ ∗ A dz ˆ2 dA ˆ 1 eiΔβ z , = −iκ˜ A dz where Δβ has been defined by equation (3.145). In Peˇrinov´a et al. (1991) still 2δ = −Δβ holds in an application to the counterdirectional coupler. The solution to equations (3.150) coincides with the classical ˆ j ↔ A∗ , j = 1, 2, where A1 , A2 are the amplitudes of the waves propsolution, A j agating in the +z- and −z-directions. In Yariv and Yeh (1984), the solution to the corresponding classical equations has been obtained for the boundary conditions Momentum-Operator Approach A1 (z)|z=0 = A1 (0), A2 (z)|z=L = A2 (L). First, however, one obtains the solution for the usual condition at z = 0. In Peˇrinov´a et al. (1991), the output operators have been obtained in terms of ˆ 2 (0) from two ˆ 1 (L), A the input ones. While we simply determine the operators A ˆ ˆ ˆ ˆ equations for the operators A1 (0), A2 (0), A1 (L), A2 (L), we observe that in this ? ˆ †] = ˆ 2, A −1ˆ must depend on both z and procedure the equal-space commutator [ A 2 † ˆ Since the commutators ˆ ˆ L in a complicated manner, and simplifies to [ A2 , A2 ] = 1. correspond to the Poisson brackets, much is illustrated by the appropriate classical theory (Luis and Peˇrina 1996b). One must be aware of the fact that in formulating the theory, Luis and Peˇrina (1996b) have avoided the above considerations on the z coordinate and the generator of spatial progression and they used in the bulk of their paper the usual time dependence and the Hamiltonian function. Although still obscure in the case of commutators, the situation is clear in the classical case, when the input–output transformation is characterized by the usual Poisson brackets and the solution for the usual boundary conditions at z = 0 requires the noncanonical transformation α2 ↔ α2∗ , with the complex amplitude α2 . The richness of their theory is due to nonlinearities, whereas it is shown that in the quantum case only a poor linear theory is possible. The difficulty lies in the formulation of an appropriate dynamical operator. Tarasov (2001) has defined a map of a dynamical nonlinear operator into a dynamical superoperator. He had in mind quantum dynamics of non-Hamiltonian and dissipative systems. A quantum-mechanical treatment of distributed feedback laser using the momentum operator in addition to the Hamiltonian is developed in Toren and Ben-Aryeh (1994). The authors start from the classical description based on two coupled equations 1 dA1 = γ A1 − iκ A2 eiΔβ z , dz 2 1 dA2 = iκ ∗ A1 e−iΔβ z − γ A2 , dz 2 where A1 , A2 are the amplitudes of the waves propagating in the +z- and −zdirections, respectively, κ is the coupling constant, γ the amplification constant, and Δβ is given by (3.145), with β1 = β, β2 = −β. The solution of the classical equations is well known (Yariv and Yeh 1984, Yariv 1989) and it shows that on a condition the amplification becomes extremely large. The classical theory does not include the quantum noise which follows from the amplification process. Unfortunately, Toren and Ben-Aryeh (1994) did not develop the overarching generalization of the analysis of amplification and that of the counterdirectional coupling. To the best of our knowledge, such a quantum-mechanical theory is not in hand. The treatment of parametric down-conversion and parametric up-conversion by Dechoum et al. (2000) is interesting with its use of the Wigner representation of optical fields, but it starts just from the Maxwell equations for the field operators for the lossless neutral nonlinear dielectric medium. With the use of common 3 Macroscopic Theories and Their Applications approximation of treating the laser pump as classical, classical equations of nonlinear optics are obtained. 3.1.5 Nonlinear Optical Couplers Optical couplers employing evanescent waves play an important role in optics, optoelectronics, and photonics where they may be conveniently used in the switching of light beams. They also provide a means for controlling light by light. Amplitude and intensity behaviour of linear couplers has been investigated extensively (Yariv and Yeh 1984, Solimeno et al. 1986, Saleh and Teich 1991). Substantial progress in controlling light beams has been achieved after nonlinear waveguides with both linear and nonlinear coupling have been taken into account (Finlayson et al. 1988, Townsend et al. 1989, Leutheuser et al. 1990, Weinert-Raczka and Lederer 1993, Assanto et al. 1994, Hatami-Hanza and Chu 1995, Hansen 1995, Weinert-Raczka 1996). This gave new possibilities of fast all-optical switching, including digital switching, and reduction of switching power. New ways of controlling optical beams in nonlinear couplers have been invented. Nonlinear waveguide materials used in composing nonlinear couplers provide new possibilities in constructing switching and memory elements for all-optical devices. These elements are necessary for further development of optical processing and computing. Classically, all-optical devices are analysed from the viewpoint of their amplitude or intensity dependences. However, they can be treated fully in quantum theory. Noise of light beams in nonlinear couplers is naturally included in this quantum treatment. Peˇrina, Jr., and Peˇrina (2000) in Section 2 indicate a consequential use of the momentum operator we have mentioned in Section 3.1.4. In nonlinear quantum systems where both directions of propagation are present this formalism confronts difficulties. The generator of spatial progression is related to commutators which do not lead to proper input (and output) commutators. The two ways of introducing photon annihilation and creation operators obeying boson commutation relations are not consistent mutually. Working with operators is not secure. The authors work with the two algebras transparently. The commutators which are preserved in spatial progression are exploited only for the derivation of evolution equations for operators. The proper input commutators are used to the other goals. In order to determine quantum-statistical properties of light beams, solutions of nonlinear operator equations have to be found first. One can apply a short-length approximation. Or pump modes can be assumed to be in strong coherent states and a parametric approximation can be used. The short-length approximation is used as a tool in the treatment of nonlinear quantum systems. Calculations with operators are safe when one direction of propagation is present, because the two algebras coincide. If both directions of propagation are present it is easy to use one of the two algebras in principle (cf. Peˇrinov´a et al. 1991). Paradoxes may occur. Momentum-Operator Approach For instance, the operator equations of spatial evolution differ from the Heisenberg equations in interaction picture only by one or a few changes of sign. But these changes entail, in the formalism of input commutators, that operator products on the right-hand sides of the equations may have an incidental order. Elas, this problem is not present in the formalism of the generator of spatial progression. We suppose it should be better to consider boundary-value problems for equations for c-numbers in the systems where both directions of propagation underlie to description and to quantize at the end. The parametric approximation is used as another instrument. It leads to linear evolution equations of operators. Also here a description of a quantum system is specific in which two directions of propagation are present. But linear equations comprise no products on the right-hand sides. A number of quantum descriptions are related to two modes and can be specified ˆ b, A ˆ a† , A ˆ † . The ˆ a, A as two linear equations (and their conjugates) for the operators A b right-hand sides of these equations are often independent of time. We may let λ1,2,3,4 denote the eigenvalues of the matrix of the right-hand sides. Introducing operators ˆ d as appropriate Bogoliubov transforms of the operators A ˆ a and A ˆ b , one can ˆ c, A A express any quantum description (a regularity is assumed) in one of the six normal forms (Williamson 1936) ˆ ˆc dA ˆ †c , d Ad = b A ˆ† = aA d dt dt ˆc ˆ dA ˆ d , d Ad = −b A ˆ† ˆ †c + b A ˆ c + aA = aA d dt dt for λ1,2,3,4 = ±a, ±b; for λ1,2,3,4 = ±a ± ib; ˆc ˆ dA ˆ c , d Ad = −iσ b A ˆ d , ρ, σ = ±1, = −iρa A dt dt for λ1,2,3,4 = ±ia, ±ib; ˆc ˆ dA ˆ †c , d Ad = −iρb A ˆ d , ρ = ±1, = aA dt dt for λ1,2,3,4 = ±a, ±ib; ˆc ˆ dA ˆ d ), d Ad = 1 ( A ˆ c) + a A ˆ† ˆ †c + 1 ( A ˆ† − A ˆ† + A = aA d d dt 2 dt 2 c for λ1,2,3,4 = ±a, ±a; ˆc ˆd dA dA ρ ˆ† ˆ ˆ ˆ d ), ρ = ±1 ˆ c + iρ (A ˆ† − A = i (A = aA c − Ac ) − a Ad , dt 2 dt 2 d 3 Macroscopic Theories and Their Applications ˆ c and A ˆ d are complicated and they are for λ1,2,3,4 = ±ia, ±ia. The formulae for A not presented here. Peˇrina and Peˇrina, Jr. (1995a) have studied a codirectional coupler composed of one linear and one nonlinear waveguide. Second-subharmonic mode (b1 ) and pump mode (b2 ) nonlinearly interact in the waveguide b. The second-subharmonic mode b1 also interacts linearly with mode a in the waveguide a. The corresponding momentum operator in interaction pictures is written in the form ˆaA ˆ † − Γb A ˆ 2b A ˆ † exp(iΔkb z) + H.c. , ˆ int = −κab1 A G b1 b2 1 where κab1 denotes the linear coupling constant of modes a and b1 and Γb is the nonlinear coupling constant between modes b1 and b2 . The nonlinear phase mismatch Δkb is defined as Δkb = 2kb1 − kb2 and kb1 (kb2 ) means the wave vector of mode b1 ˆ a, A ˆ b1 , and A ˆ b2 stand for optical-field operators of modes a, b1 , (b2 ). In (3.158), A and b2 in interaction pictures. The conservation law ˆ a (z) + A ˆ † (z) A ˆ b1 (z) + 2 A ˆ † (z) A ˆ b2 (z) = const. ˆ a† (z) A A b1 b2 is fulfilled by the solution of Heisenberg equations in the interaction picture. Peˇrina (1995a,b) and Peˇrina and Bajer (1995) have analysed squeezing of the light in a short-length approximation. The assumption of a strong coherent field in mode b2 with the amplitude ξb2 leads to the linearization of the operator equations of motion. The analysis leads to at least one positive eigenvalue. Amplification may occur dependent on the initial conditions. In the case where |Γb ξb | < |κab1 |, oscillations also occur in the spatial development of quantities characterizing the fields. Results on squeezing of the light have been obtained (Peˇrina and Peˇrina, Jr. 1995a). The assumption of a strong coherent field in mode b2 and the linearization are related here to three types of behaviour, which can be distinguished also using normal forms (3.152), (3.153), and (3.156). Peˇrina and Peˇrina, Jr. (1995b) have treated a contradirectional coupler composed again of one linear and one nonlinear waveguide. Mode a propagates against modes b1 and b2 . The appropriate conservation law reads as ˆ a† (z) A ˆ a (z) + A ˆ † (z) A ˆ b1 (z) + 2 A ˆ † (z) A ˆ b2 (z) = const, z = 0, L; −A b1 i.e. ˆ a† (0) A ˆ a (0) + A ˆ † (L) A ˆ b1 (L) + 2 A ˆ † (L) A ˆ b2 (L) A b1 b2 ˆ a (L) + A ˆ † (0) A ˆ b1 (0) + 2 A ˆ † (0) A ˆ b2 (0). ˆ a† (L) A = A b1 b2 A phase matching (Δkb = 0) is assumed. A formulation of short-length approximation seems to be obvious, but it uses equal-space products of field operators. We can return to the boundary-value problem for classical equations which have the same solutions in the case of Momentum-Operator Approach contradirectional propagation as the initial-value problem for codirectional coupler up to the second order in L provided we write Aa (L) in place of Aa (0). Quantization up to the second order in L can be done in the case of the contradirectional propagation by using the quantum input–output relations of the codirectional coupler where ˆ a (0). ˆ a (L) is written in place of A A The assumption of a strong coherent field in mode b2 leads to linear operator equations of motion as in the case of the codirectional coupler. Introducing sa = ±1, sa = 1 when mode a propagates along with modes b1 and b2 and sa = −1 when it propagates counter to the latter, we can write the eigenvalues as / λ1,2,3,4 = ± |Γb ξb2 | ± |Γb ξb2 |2 − sa |κab1 | . In the case of the contradirectional coupler oscillations cannot occur. Results on squeezing of the light have been obtained (Peˇrina and Peˇrina, Jr. 1995b,c). Let us note that one cannot assess input–output relations so easily in this case using only the eigenvalues. In the case of the codirectional coupler the input–output relations are just the solutions of the initial-value problem and their dependence on exp(λ1,2,3,4 L) is linear. In the case of the contradirectional coupler the input–output relations depend on exp(λ1,2,3,4 L) in a nonlinear way. Peˇrina and Bajer (1995) have studied also a codirectional coupler with four modes. A mode of frequency ω1 (b1 ) and a mode of frequency ω2 (b2 ) nonlinearly interact in the waveguide b. The pump mode b1 of frequency ω1 interacts linearly with mode a1 and the mode b2 of frequency ω2 is coupled linearly with mode c (ω2 = 2ω1 holds). The momentum operator (3.158) is modified to the form ˆaA ˆ † − κcb2 A ˆcA ˆ † + Γb A ˆ 2b A ˆ † + H.c. , ˆ int = −κab1 A G b1 b2 b 1 2 where κcb2 is the linear coupling constant of modes c and b2 , κab1 and Γb have the original meaning. The conservation law ˆ a (z) + A ˆ † (z) A ˆ b1 (z) + 2 A ˆ † (z) A ˆ b2 (z) + 2 A ˆ †c (z) A ˆ c (z) = const. ˆ a† (z) A A b1 b2 is obeyed by the solutions of Heisenberg equations in the interaction picture. Peˇrina and Bajer (1995) have treated squeezing of the light in the short-length approximation also in this case. Miˇsta, Jr. and Peˇrina (1997) have investigated a codirectional coupler with five modes. Second-subharmonic modes (a1 , c1 ) and pump modes (a2 , c2 ) nonlinearly interact in the respective parts (a, c). The second-subharmonic mode a1 also interacts linearly with mode c1 via mode b in the part b. On assuming linear and nonlinear phase matching, Peˇrina, Jr. and Peˇrina (2000) describe the coupler with the following momentum operator: ˆ a2 A ˆ a† + Γc A ˆ 2c A ˆ †c + κa1 b A ˆ a† A ˆ b + κbc1 A ˆ †c A ˆ b + H.c. . ˆ int = Γa A G 1 2 1 2 1 1 3 Macroscopic Theories and Their Applications The solutions of Heisenberg equations in the interaction picture obey the conservation law ˆ a† (z) A ˆ a1 (z) + 2 A ˆ a† (z) A ˆ a2 (z) A 1 2 ˆ † (z) A ˆ b (z) + A ˆ †c (z) A ˆ c1 (z) + 2 A ˆ †c (z) A ˆ c2 (z) = const. +A b 1 2 In a short-length approximation results on squeezing of the light have been obtained. Peˇrina and Peˇrina, Jr. (1996) have studied a codirectional coupler composed of two nonlinear waveguides. While they have used a parametric approximation from the very beginning (Peˇrina, Jr. and Peˇrina 2000), here we present a generalization of the momentum operator (3.158) ˆ a2 A ˆ a† exp(iΔka z) + Γb A ˆ 2b A ˆ † exp(iΔkb z) ˆ int = Γa A G b2 1 2 1 † ˆ a1 A ˆ + H.c. , (3.167) + κab A b1 where a1 ≡ a, κab ≡ κab1 , and Γa is the nonlinear coupling constant between modes a1 and a2 . The nonlinear phase mismatch Δka is defined as Δka = 2ka1 − ka2 and ka1 (ka2 ) means the wave vector of mode a1 (a2 ). The parametric approximation ˆ b2 → ξb2 exp(−iΔlz), where ˆ a2 → ξa2 exp(iΔlz), A has consisted in replacements A 1 Δl = 2 (kb2 − ka2 ), on assuming also some linear coupling between modes a2 and b2 . Korolkova and Peˇrina (1997a) have obtained and discussed solutions on some simplifying assumptions. Karpati et al. (2000) studied all-optical switching in this system. Peˇrina and Peˇrina, Jr. (1996) and Korolkova and Peˇrina (1997a) have considered a contradirectional coupler as well. Janszky et al. (1995) were first to investigate a coupler composed of two nonlinear waveguides with nondegenerate optical parametric processes. Pump (aP ), signal (aS ), and idler (aI ) modes in one waveguide are assumed to interact linearly with their counterparts (bP , bS , bI ) in the other waveguide. The coupler is described by the following momentum operator † † † ˆ ki aˆ i aˆ i + [Γa aˆ aP aˆ a†S aˆ a†I + Γb aˆ bP aˆ bS aˆ bI + H.c.] G= i=aP ,aS ,aI ,bP ,bS ,bI † † † + [κP aˆ aP aˆ bP + κS aˆ aS aˆ bS + κI aˆ aI aˆ bI + H.c.] , where ki denotes the wave vector of the ith mode along z-axis, Γa (Γb ) is the nonlinear coupling constant of modes aP , aS , aI (bP , bS , bI ) in waveguide a (b), and κP , κS , and κI stand for the linear coupling constants between the two pump, the two signal, and the two idler modes. Herec (1999) has used a short-length approximation to solve the Heisenberg equations in the interaction picture and he has obtained results on squeezing of the light. Miˇsta, Jr. (1999) has assumed strong coherent field in pump modes, κP = 0, and phase matching, discerned three (of five) regimes in spatial evolution of the coupler, and obtained various results on the squeezing. Momentum-Operator Approach Optical fibres and certain organic polymers with high third-order nonlinearities may be used for the construction of couplers based on the Kerr effect. The nonlinear directional couplers are interesting as they exchange energy periodically between the guides like linear ones for low total intensities while they trap the energy in the guide into which it has been launched initially for high intensities (Jensen 1982). A coupler with two nonlinear waveguides (denoted as a and b) with Kerr nonlinearities has been considered by Chefles and Barnett (1996). It is described by the ˆ int (ka = kb is assumed) (Korolkova and Peˇrina 1997b), momentum operator G ˆ a†2 A ˆ †2 A ˆ int = g A ˆ a2 + g A ˆ 2b + gab A ˆ a† A ˆaA ˆ†A ˆ G b b b ˆaA ˆ † + H.c. . + κab A b Here g means a Kerr nonlinear coupling coefficient which is the same in both waveguides. The real constant gab describes nonlinear coupling of the modes and the real ˆ a† A ˆ†A ˆa+A ˆ constant κab characterizes linear coupling of the modes. The operator A b b is a constant of motion. ˆ b are fast-oscillating ˆ a and A Korolkova and Peˇrina (1997b) have assumed that A operators due to the linear coupling and they have introduced the operators 1 ˆ ˆ Bˆ a = √ [ A a exp(−iκab z) + Ab exp(iκab z)], 2 1 ˆ ˆ Bˆ b = √ [ A a exp(−iκab z) − Ab exp(iκab z)]. 2 † On an approximation a solution has been obtained. Then the operators Bˆ a Bˆ a and † Bˆ b Bˆ b are conserved along z. We note that the solution is exact for gab = 2g. In the interaction picture a numerical solution of the Schr¨odinger equation may use invariant subspaces defined as eigenspaces of the constant of motion (Chefles and Barnett 1996) and it is exact for any initial state from a finite direct sum of such subspaces. This way, Fiur´asˇek et al. (1999a,b) have obtained interesting results on assuming also initial coherent states. The behaviour of the coupler may exhibit a bifurcation in dependence on the parameter 1 |2g − gab | (|Aa |2 + | Ab |2 ) 2κab in the classical regime. The threshold is at η = 1. The behaviour is more complicated in the quantum regime. An optimum energy exchange between modes a and b occurs if the difference of the phases of the complex amplitudes of modes a and b equals π2 or − π2 . The character of the evolution of mean photon numbers in the regions of revivals can be controlled by the z-dependent linear coupling constant κab (z) (Korolkova 3 Macroscopic Theories and Their Applications and Peˇrina 1997c). Switching of energy between the waveguides is achieved by a suitable profile of the coupling functions. Squeezing in a given waveguide also is preserved in such a way. In the classical regime, a nonlinear optical switching matrix has been considered (Liu et al. 2003). Peˇrina, Jr. and Peˇrina (1997) have paid attention to the couplers which are based on Raman and Brillouin scattering. Fiur´asˇek and Peˇrina (1998, 1999, 2000a,b) have continued this work. A codirectional coupler composed of two waveguides ˆ is described with the momentum operator G: ˆ = G † † † † k jl aˆ jl aˆ jl + g˜ jA aˆ jL aˆ jV aˆ jA + g˜ jS aˆ jL aˆ jV aˆ jS + H.c. † † + κS aˆ aS aˆ bS + κA aˆ aA aˆ bA + H.c. , where k jl are wave vectors of modes l, l = L (laser), S (Stokes), A (anti-Stokes), and V (phonon) in waveguides j, j = a, b. Here g˜ jS (g˜ jA ) describes the Stokes (anti-Stokes) nonlinear coupling in waveguide j, κS (κA ) is the linear coupling constant between the Stokes (anti-Stokes) modes in different waveguides. Vectors characterizing phase mismatches are defined as follows: Δk jS = k jL − k jV − k jS , Δk jA = k jL + k jV − k jA , ΔkS = kaS − kbS , and ΔkA = kaA − kbA . A parametric approximation consists in assuming strong coherent states in pump modes aL and bL . Fiur´asˇek and Peˇrina (1999) have used another approximation in solving Heisenberg equations in the interaction picture. The method utilized is based on linear operator corrections to a classical solution. Fiur´asˇek and Peˇrina (2000a) have considered a Raman coupler with broad phonon spectra. They have described the phonon systems of the waveguides with multimode boson fields. Then these fields have been eliminated from the description of the coupler using the Wigner–Weisskopf approximation (Peˇrina 1981a,b). Mogilevtsev et al. (1997) have treated one central waveguide (a) which interacts linearly with a greater number of mutually noninteracting waveguides (b j ) in its ˆ surroundings. It is described by the following momentum operator G ⎡ ˆ = ⎣ka aˆ a† aˆ a + G N j=1 kb j aˆ b j aˆ b j + ⎤ † gab j (aˆ b j aˆ a† + aˆ b j aˆ a )⎦ , where ka (kb j ) is the wave vector of mode a (b j ), gab j is the linear coupling constant between modes a and b j , and N denotes the number of surrounding waveguides. If the central waveguide a contains a second-order nonlinear medium, the momentum ˆ n is operator G ˆn =G ˆ + [ξa2 exp(2ika z)aˆ a†2 + H.c.], G 2 Dispersive Nonlinear Dielectric where ξa2 stands for the amplitude of the pump field in the central waveguide. The behaviour of the linear and nonlinear couplers agrees with the idea that the surrounding waveguides form a reservoir. A reservoir spectrum has been considered which has a gap. Mogilevtsev et al. (1996) have considered a coupler composed of one waveguide with χ (2) medium and the other one with χ (3) medium. Only a nonlinear coupling is ˆ int is present. The interaction momentum operator G ˆ a†2 + A ˆ a2 ) + gb A ˆ int = ga ( A ˆ †2 A ˆ 2b + gab A ˆ a† A ˆaA ˆ†A ˆb , G (3.175) b b where ga describes the process of second-subharmonic generation and includes the coherent pump amplitude, gb stands for the Kerr constant, and gab means the nonlinear coupling constant between modes a and b. Assuming the vacuum state in mode a and an incident pure state in mode b, one can distinguish two regimes essentially. The third type of behaviour could result using the superposition principle. When an initial Fock state with Nb photons in ˆ b (z) oscillates in z for 2ga < mode b is assumed, the solution for the operator A Nb gab , whereas it has an exponential character for 2ga > Nb gab . When an initial coherent state with the amplitude ξb in mode b is assumed, the mean number of photons in mode a oscillates in z and the exponential terms can be neglected for 2ga |ξb |2 gab . It increases exponentially in z and the oscillating terms do not contribute significantly for 2ga |ξb |2 gab . 3.2 Dispersive Nonlinear Dielectric The spatio-temporal quantum description has been adopted in optics in spite of its complexity due to quantum solitons. As known, the existence of the optical fields that do not change during propagation is conditioned by the frequency dispersion and the nonlinearity of the medium. A macroscopic quantization was to take into account both the properties. The nonlinearity would appear as an interaction of narrow-band fields. 3.2.1 Lagrangian of Narrow-Band Fields Drummond (1990) has presented a technique of canonical quantization in a general dispersive nonlinear dielectric medium. Contrary to Abram and Cohen (1991), Drummond creates an arbitrary number of slightly varied copies of the vacuum electromagnetic field for the nonlinear dielectric medium, essentially the number required by the classical slowly varying amplitude approximation. But Abram and Cohen (1991) work with a single field. The paradox of the validity of both the approaches can be resolved only by a detailed microscopic theory. Drummond (1990) generalizes the treatment of a linear homogeneous dispersive medium (Schubert and Wilhelmi 1986). Till 1990, papers by Kn¨oll (1987), Białynicka-Birula and Białynicki-Birula (1987), and Glauber and Lewenstein (1989) 3 Macroscopic Theories and Their Applications could be referred to as devoted to the theory of inhomogeneous nondispersive linear dielectric. Hillery and Mlodinow (1984) were attractive with their use of the idea due to Born and Infeld (1934) for the quantization of homogeneous nonlinear nondispersive medium. The macroscopic quantization is a route to the simplest quantum theory compatible with known dielectric properties unlike the microscopic derivation of the nonlinear quantum theory of electromagnetic propagation in a real dielectric. Drummond (1994) compares the quantum theory obtained via macroscopic quantization with the traditional quantum-field theory. He concedes that most model quantum field theories prove to be either tractable, but unphysical, or physical, but intractable. The tractable model quantum field theory ceases to be unphysical when it is tested experimentally in quantum optics. An excellent example of this is the fibre optical solitons whose quantization is given in detail in Drummond (1994). In agreement with theoretical predictions (Carter et al. 1987, Drummond and Carter 1987, Drummond et al. 1989, Shelby et al. 1990, Lai and Haus 1989, Haus and Lai 1990), experiments (Rosenbluh and Shelby 1991) led to evidence of quantum solitons. More recent experiments (Friberg et al. 1992) demonstrate that solitons can be considered to be nonlinear bound states of a quantum field. In addition to the quadrature squeezing in (Rosenbluh and Shelby 1991), quantum properties of soliton collisions were measured (Watanabe et al. 1989, Haus et al. 1989). Similar nonlinearities are encountered in photonic band-gap theory (Yablonovitch and Gmitter 1987), microcavity quantum electrodynamics (Hinds 1990), pulsed squeezing (Slusher et al. 1987), and quantum chaos (Toda et al. 1989). In description of a nonlinear dielectric medium, tensorial notation is used which will occur also elsewhere in this book. Let u, v, w, . . . be vectors. On using a tensorial product, a tensor of rank 2, e.g. uv, is formed, a tensor of rank 3, e.g. uvw, is constructed, etc. The scalar product denoted by the dot · is generalized to a contraction, i.e. a simple sum after the tensors are replaced by their components and products of corresponding components are formed. The correspondence of components is achieved by using the same notation for the last subscript of the tensor to the left as for the first subscript of the tensor to the right. Also the pieces of notation : and .. . mean contractions, i.e. a double sum and a triple sum of products of corresponding components. It is advantageous to begin with the treatment of a classical dielectric introducing the nonlinear response function in terms of the electric displacement field D. Contrary to the usual description (Bloembergen 1965), which uses the dielectric permittivity tensors, the inverse expansion is necessary here. For simplicity, the dielectric of interest is regarded as having uniform linear magnetic susceptibility. The charges are assumed to occur only in the induced dipoles of polarization. The field equations are therefore ∂B(x, t) , ∂t ∂D(x, t) , ∇ × H(x, t) = ∂t ∇ × E(x, t) = − Dispersive Nonlinear Dielectric ∇ · D(x, t) = 0, ∇ · B(x, t) = 0, D(x, t) = 0 E(x, t) + P(x, t), B(x, t) = μH(x, t). P(x, t) = χ (x, τ ) · E(x, t − τ ) dτ ∞ ∞ χ (2) (x, τ1 , τ2 ) : E(x, t − τ1 )E(x, t − τ2 ) dτ1 dτ2 + 0 0 ∞ ∞ ∞ . + χ (3) (x, τ1 , τ2 , τ3 )..E(x, t − τ1 )E(x, t − τ2 ) 0 × E(x, t − τ3 ) dτ1 dτ2 dτ3 + ··· , where the tensor of rank 2, in general, the (n + 1)th-rank susceptibility tensor read 1 χ˜ (x, ω)e−iωτ dω, 2π n 1 χ (n) (x, τ1 , ..., τn ) = 2π 1 n × ... χ˜ (n) (x, ω1 , ..., ωn )e−i(ω τ1 +···+ω τn ) dτ1 ... dτn , χ(x, τ ) = (3.179) respectively. After adding the vacuum electric displacement field 0 E(x, t) to both sides of (3.178), we express the electric vector in the form E(x, t) = ζ (x, τ ) · D(x, t − τ ) dτ ∞ ∞ ζ (2) (x, τ1 , τ2 ) : D(x, t − τ1 )D(x, t − τ2 ) dτ1 dτ2 + 0 0 ∞ ∞ ∞ . + ζ (3) (x, τ1 , τ2 , τ3 )..D(x, t − τ1 )D(x, t − τ2 ) 0 × D(x, t − τ3 ) dτ1 dτ2 dτ3 + ··· , 3 Macroscopic Theories and Their Applications 1 ζ˜ (x, ω)e−iωτ dω, ζ (x, τ ) = 2π n 1 (n) ζ (x, τ1 , ..., τn ) = 2π 1 n (n) × ... ζ˜ (x, ω1 , ..., ωn )e−i(ω τ1 +...+ω τn ) dτ1 ... dτn (3.181) and the tensors on the right-hand sides of (3.181) can be expressed from the equations (x, ω) · ζ˜ (x, ω) = 1, (2) (x, ω1 + ω2 ) · ζ˜ (x, ω1 , ω2 ) + χ˜ (2) (x, ω1 , ω2 ) : ζ˜ (x, ω1 )ζ˜ (x, ω2 ) = 0(2) , (3) (x, ω1 + ω2 + ω3 ) · ζ˜ (x, ω1 , ω2 , ω3 ) (2) +2χ˜ (2) (x, ω1 , ω2 + ω3 ) : ζ˜ (x, ω1 )ζ˜ (x, ω2 , ω3 ) . +χ˜ (3) (x, ω1 , ω2 , ω3 )..ζ˜ (x, ω1 )ζ˜ (x, ω2 )ζ˜ (x, ω3 ) = 0(3) , ... , with (x, ω) = 0 1 + χ˜ (x, ω) the usual frequency-dependent tensor of permittivity. In particular, ζ˜ (x, ω) = [(x, ω)]−1 . Here 1 and 0(n) are the second-rank unit tensor and the (n + 1)th-rank zero tensor, respectively. Introducing the Fourier components of the electric strength field and electric displacement field, ˜ E(x, ω) = E(x, t)eiωt dt, ˜ D(x, ω) = D(x, t)eiωt dt, (3.184) and performing the Fourier transform of both sides of (3.180), we obtain that ˜ ω) = ζ˜ (x, ω) · D(x, ˜ E(x, ω) (2) ˜ ˜ + ζ˜ (x, ω1 , ω − ω1 ) : D(x, ω1 )D(x, ω − ω1 ) dω1 . (3) ˜ ˜ + ζ˜ (x, ω1 , ω2 , ω − ω1 − ω2 )..D(x, ω1 )D(x, ω2 ) ˜ × D(x, ω − ω1 − ω2 ) dω1 dω2 + ··· . Dispersive Nonlinear Dielectric ˜ The inverse relation of D(x, ω) comprises χ˜ (x, ω) ≡ χ˜ (1) (x, −ω; ω), χ˜ (2) (x, ω1 , ω − (2) 1 1 ω ) ≡ χ˜ (x, −ω; ω , ω − ω1 ),... . A similar extension of notation is conceivable (2) also in tensors ζ˜ (x, ω), ζ˜ (x, ω1 , ω − ω1 ),... . ˜ ω) in Peˇrina (1991) comprises sums Let us note that the similar relation for P(x, instead of the integrals. Relation (3.185) can resemble relation (2.4) in Drummond (1990) on the condition that the integrals will be replaced by the sums. Such a (n) change does not affect only the meaning of the tensors χ˜ (n) and ζ˜ but also (and above all) the physical unit of their measurement. We will treat the time-averaged linear dispersive energy H for a classical monochromatic field at nonzero frequency ω. For a permittivity (x, ω), this can be written in terms of a complex amplitude E (Bloembergen 1965, Landau and Lifshitz 1960, Bleany and Bleany 1985), ∂ 1 [ω(x, ω)] · E(x) + B(x, t) · B(x, t) d3 x, (3.186) E ∗ (x) · H = ∂ω 2μ V where the angular brackets mean the time average and E(x, t) = 2 Re E(x)e−iωt . It is important to distinguish the monochromatic case from the case of quasimonochromatic fields. In the more general case, the displacement D is expanded in terms of a series of complex (envelope) functions, each of which has a restricted bandwidth. The relevant nonzero central frequencies are then ω−N , ..., ω N , thus D(x, t) = Dν (x, t), where D−ν = (Dν )∗ and in the monochromatic case ν Dν (x, t) = Dν (x)e−iω t . Here, our notation slightly differs from that in Drummond (1990). Again, the electric-field vector can be expanded as E(x, t) = Eν (x, t), where E−ν = (Eν )∗ 3 Macroscopic Theories and Their Applications and in the monochromatic case ν Eν (x, t) = E ν (x)e−iω t . In the case of quasimonochromatic fields, relations (3.190) and (3.193) should be replaced by ων +δ 1 ˜ D(x, ω)e−iωt dω, Dν (x, t) = 2π ων −δ ων +δ 1 ˜ ω)e−iωt dω. E(x, (3.194) Eν (x, t) = 2π ων −δ Bloembergen (1965) presented the relation (3.186) as sufficiently accurate for such a case. If relation (3.186) is exact for monochromatic fields, it must be modified for a quasimonochromatic field as follows: H (t ) (t) = 1 1 −ν ∂ E (x, t) · [ων (x, ων )] · Eν (x, t) 2 ν=−1 ∂ων 1 B(x, t ) · B(x, t ) (t) d3 x. + 2μ By modifying the summation, we obtain the energy integral in terms of the electric displacement fields H (t ) (t) = N ∂ 1 −ν D (x, t) · [ζ˜ (x, ων ) − ων ν ζ˜ (x, ων )] · Dν (x, t) 2 ν=−N ∂ω 1 (3.196) B(x, t ) · B(x, t ) (t) d3 x. + 2μ To achieve a completeness, we supplement relations (3.188) and (3.191) with the expansion of the magnetic induction field B(x, t) = Bν (x, t), where B−ν = (Bν )∗ , ων +δ 1 ˜ ω)e−iωt dω. B(x, Bν (x, t) = 2π ων −δ (3.198) (3.199) Next, ζ˜ (x, ω) can be approximated near ω = ων by a quadratic Taylor polynomial, 1 ζ˜ (x, ω) ≈ ζ˜ ν (x) + ωζ˜ ν (x) + ω2 ζ˜ ν (x) ≡ ζ˜ ν (x, ω), 2 Dispersive Nonlinear Dielectric so that 1 ∂ (3.201) ζ˜ (x, ω) ≈ ζ˜ ν (x) − ω2 ζ˜ ν (x). ∂ω 2 ∂ . Moreover, the Taylor For brevity, the prime stands for the partial derivative ∂ω polynomial is not in a standard form, which comprises the brackets (ω − ων ) and (ω − ων )2 . Further explanations can be found in Drummond (1990) if necessary. ˙ ≡ ∂ D, we rewrite relation (3.196) in the form Using the notation D ∂t N 1 H (t ) (t) = D−ν (x, t) · ζ˜ ν (x) · Dν (x, t) 2 ν=−N 1 −ν 1 ˙ −ν ν ν ˙ ˜ − D (x, t) · ζ ν (x) · D (x, t) + B (x, t) · B (x, t) d3 x. 2 μ (3.202) ζ˜ (x, ω) − ω Here we deviate slightly from Drummond (1990). Drummond speaks of time averages and he indicates the time average on the left-hand side and partially on the right-hand side in (3.196), but he does not remove the time dependence from the right-hand side. A canonical theory of linear dielectric will be obtained using the causal local Lagrangian. Drummond (1990) considers a Lagrangian L(Λ−N , ..., Λ N ), which is a functional of (components of) the dual vector potential. This is defined as Λ, D(x, t) = ∇ × Λ(x, t), ˙ B(x, t) = μΛ(x, t). We introduce also ˜ Λ(x, ω) = Λ(x, t)eiωt dt, ων +δ 1 ˜ Λ(x, ω)e−iωt dω. Λν (x, t) = 2π ων −δ (3.204) (3.205) Each quasimonochromatic field obeys the Maxwell equations ˙ ν (x, t), ∇ × Eν (x, t) = −B ˙ ν (x, t), ∇ × Hν (x, t) = D ν ∇ · D (x, t) = 0, ∇ · Bν (x, t) = 0, where ˙ ν (x, t) − 1 ζ˜ ν (x) · D ¨ ν (x, t), Eν (x, t) = ζ˜ ν (x) · Dν (x, t) + iζ˜ ν (x) · D 2 1 Hν (x, t) = Bν (x, t). (3.207) μ 3 Macroscopic Theories and Their Applications The components of the dual vector potential fulfil linear wave equations. On the basis of (3.202) we can infer the Hamiltonian function of the form 1 H = H0 = 2 [∇ × Λ−ν (x, t)] · ζ˜ ν (x) · [∇ × Λν (x, t)] 1 ˙ ν (x, t)] ˙ −ν (x, t)] · ζ˜ ν (x) · [∇ × Λ − [∇ × Λ 2 −ν ν ˙ ˙ + μΛ (x, t) · Λ (x, t) d3 x. In order to quantize the theory, a canonical Lagrangian must be found that corresponds to (3.208) while generating the Maxwell equations (3.206) as Hamilton’s equations. It is next necessary to derive a Lagrangian whose Lagrange’s variational equations correspond to obvious wave equations and whose Hamiltonian corresponds to (3.208). Since Λν can be specified to be transverse fields, the variations can also be restricted to be transverse. The use of restricted variations can be realized using transverse functional derivatives (Power and Zienau 1959, Healey 1982). Drummond (1994) derived the Lagrangian using the method of indeterminate coefficients in the form L = L0 = ˙ ν (x, t) ˙ −ν (x, t)Λ μΛ ν=−N −ν − [∇ × Λ (x, t)] · ζ˜ ν (x) · [∇ × Λν (x, t)] ˙ ν (x, t)] − i[∇ × Λ−ν (x, t)] · ζ˜ ν (x) · [∇ × Λ 1 ˙ ν (x, t)] d3 x. ˙ −ν (x, t)] · ζ˜ ν (x) · [∇ × Λ − [∇ × Λ 2 The canonical momenta are 1 ˙ −ν (x, t) , Πν (x, t) = B−ν (x, t) − ∇ × iζ ν (x) · D−ν (x, t) + ζ ν x) · D 2 where we introduce for brevity the fields (3.203) again. We can rewrite also the Lagrangian of Drummond in the form L = L0 = N 1 −ν B (x, t) · Bν (x, t) μ ν=−N ˙ ν (x, t) − D−ν (x, t) · ζ˜ ν (x) · Dν (x, t) − iD−ν (x, t) · ζ˜ ν (x) · D 1 ˙ −ν ˙ ν (x, t) d3 x. − D (x, t) · ζ˜ ν (x) · D (3.211) 2 Dispersive Nonlinear Dielectric ˙ ν in HamiltoOn the contrary, the Legendre transformation, i.e. a substitution of Λ ν ν nian (3.208) with an expression in Π and Λ , was not performed in Drummond ˙ ν is to be found from (3.210) considered as a par(1990). A reason is that each Λ tial differential equation. Also this theory simplifies a great deal if the plane wave one-dimensional propagation is considered. The local Lagrangian method is used as the foundation of a nonlinear canonical Lagrangian and Hamiltonian. The objective is the total Lagrangian and Hamiltonian of the form L = L 0 − U N (x, t) d3 x, (3.212) H = H0 + U N (x, t) d3 x, where U N (x, t) is a nonlinear energy density 1 ν1 U (x, t) = D (x, t) 3 ν =−N ν =−N ν =−N N (2) · ζ˜ (x, ων2 , −ων1 − ων2 ) : Dν2 (x, t)Dν3 (x, t)δ−ων1 ,ων2 +ων3 1 ν1 D (x, t) 4 ν =−N ν =−N ν =−N ν =−N N . (3) · ζ˜ (x, ων2 , ων3 , −ων1 − ων2 − ων3 )..Dν2 (x, t)Dν3 (x, t)Dν4 (x, t) × δ−ων1 ,ων2 +ων3 +ων4 + ··· . In order to give an example, a one-dimensional case is treated and the nonlinear refractive index as the lowest nonlinearity of most universal interest. For N = 1, U N (x, t) = 1 (3) 1 ζ˜ |D (x, t)|4 . 4 Drummond (1990) has presented the quantization of the nonlinear medium using a treatment of modes defined relative to the new Lagrangian. The canonical momenta have the form (3.210) also in the nonlinear case. In the corresponding ˆ ν are introduced, which obey the ˆ ν and Π quantum theory, the field operators Λ transverse commutation relations of the form ˆ ˆ μj (x , t)] = iδi⊥j (x − x ) δμν 1. ˆ iν (x, t), Π [Λ Since these operators are not Hermitian, it is also interesting to note that ˆ ν )† , Π ˆ −μ = (Π ˆ μ )† . ˆ −ν = (Λ Λ 3 Macroscopic Theories and Their Applications ˆ iν , (Π ˆ νj )† commute. Then, a set of Fourier transform fields is This entails that Λ defined and the annihilation operators aˆ kν and bˆ kν are introduced. The operators aˆ kν correspond to the normal modes while bˆ kν generate additional necessarily vacuum modes. This feature of the theory is due to the dependence of the Hamiltonian ˙ ν (x, t)). (3.208) on both the real and imaginary parts of the components Λν (x, t) (Λ Conversely, the right-hand side of relation (3.202) can be completed with terms which make up the Hamiltonian dependent solely on 2 Re{Λν (x, t)}. 3.2.2 Propagation in One Dimension and Applications Drummond (1994) discusses in detail a simplified model of a one-dimensional (n) dielectric, where ζ˜ (x, ω) = ζ˜ (ω)1, ζ˜ (x, ω1 , . . . , ωn ) = ζ˜ (n) (ω1 , . . . , ωn )1(n) , with (n) 1(n) the (n + 1)th-rank unit tensor for n odd and ζ˜ (x, ω1 , . . . , ωn ) = 0(n) for n even, n ≤ 3. Instead of the time average of energy (3.186), (3.195), Drummond (1994) has presented the total energy in the length L, L 1 2 ˙ E(x, τ ) D(x, τ ) dτ dx. (3.217) μH (x, t) + W (t) = 2 0 t0 Here H (x, t) = μ1 B(x, t) is the magnetic strength vector. In part of the exposition, single polarization components are considered only. The Hillery–Mlodinow theory which does not take account of dispersion (Ho and Kumar 1993) has the electricfield commutation relation with the magnetic field modified from its free-field value. Drummond (1994) points out that the solution of this commutator problem is the inclusion of the important physical property of a real dielectric. Traditionally, the description of the nonlinear medium assumes that the dispersion terms are negligible. Neglecting the unphysical modes, the dual vector potential has the expansion ∂ω ∂k 1 ˆ (x, t) = aˆ k eikx , (3.218) Λ 2Lk ζ˜ (ωk ) k where aˆ k , aˆ k have the standard commutators † ˆ [aˆ k , aˆ k ] = δk,k 1, and ωk are solutions to the equations ωk = k This enables one to write Hˆ 0 = ζ˜ (ωk ) . μ † ωk aˆ k aˆ k Dispersive Nonlinear Dielectric ˆ 1) and (reintroducing D Hˆ = † ωk aˆ k aˆ k + ˆ 1 ) dx. U N( D When there is a nonlinear refractive index or ζ˜ (3) term, the free particles interact via the Hamiltonian nonlinearity. It is this coupling that leads to soliton formation. It is also possible to involve other types of nonlinearity, such as ζ˜ (2) terms, that lead to second harmonic and parametric interactions. With respect to practical applications, it is necessary to define photon-density and photon-flux amplitude fields. The photon-density amplitude field reminds us of the so-called detection operator (Mandel 1964, Mandel and Wolf 1995). A polaritondensity amplitude field is simply defined as ˆ Ψ(x, t) = 1 i[(k−k 1 )x+ω1 t] aˆ k , e L k where k 1 = k(ω1 ) is the centre wave number for the first envelope field. This field has an equal-time commutator of the form ˆ ˜ 1 − x2 )1, ˆ † (x2 , t)] = δ(x ˆ 1 , t), Ψ [Ψ(x where δ˜ is a version (L-periodic) of the usual Dirac delta function ˜ 1 − x2 ) ≡ 1 eiΔk(x1 −x2 ) , δ(x L Δk where the range of Δk is equal to that of k − k 1 . The total polariton number operator is ˆ ˆ † (x, t)Ψ(x, t) dx. (3.226) Nˆ = Ψ A polariton-flux amplitude can also be approximately expressed as ˆ Φ(x, t) = v i[(k−k 1 )x+ω1 t] aˆ k , e L k where v is the central group velocity at the carrier frequency ω1 . This flux has an equal-time commutator of the form ˆ ˜ 1 − x2 )1. ˆ † (x2 , t)] = v δ(x ˆ 1 , t), Φ [Φ(x 3 Macroscopic Theories and Their Applications ˆ † (x, t)Φ(x, ˆ Operationally, Φ t) is the photon-flux expectation value in units of photons/second. ˆ A common choice is to define the dimensionless field φ(x, t) by the scaling vt0 ˆφ(x, t) = Ψ(x, ˆ , t) n¯ where n¯ is the photon-number scale and t0 is a timescale, defined so that the expectation value φˆ † (x, t)φˆ (x, t) is appropriate for the system. This scaling transformation is accompanied by a change to a comoving coordinate frame. The first choice of an altered space variable gives ξv = x v −t vt ,τ = . t0 x0 Here x0 is a spatial length scale introduced to scale the interaction times. An alternative moving frame transformation is ξ= t − vx x , τv = . x0 t0 The quantization technique developed by Drummond (1990) was applied to the case of a single-mode optical fibre (Drummond 1994). On simplified assumptions, the nonlinear Hamiltonian is (cf. (3.214)) 1 ˜ (3) † ˆ ˆ 4 (x) d3 x. ˆ dk + ζ D (3.232) H = ω(k)aˆ (k)a(k) 4 Here ω(k) are the angular frequencies of modes with wave vectors k describing ˆ the linear photon or polariton excitations in the fibre including dispersion. a(k) are corresponding annihilation operators defined so that, at equal times, ˆ ˆ ), aˆ † (k)] = δ(k − k )1. [a(k ˆ In terms of the waveguide, the electric displacement field D(x) is expressed as ˆ D(x) =i (k)kv(k) ˆ a(k)u(k, r)eikx dk + H.c., 4π where x = (x, r) and |u(k, r)|2 d2 r = 1. Here v(k) is the group velocity and (k) is the dielectric permittivity. The mode function u(k, r) is included here to show how the simplified one-dimensional quantum Dispersive Nonlinear Dielectric theory relates to vector mode theory. When the interaction Hamiltonian describing ˆ the evolution of the polariton field Ψ(x, t) in the slowly varying envelope and rotating-wave approximations is considered, the coupling constant χe is introduced χe ≡ 3[χ˜ (3) (ω1 )]2 [v(k 1 )]2 4(k 1 )c2 |u(r)|4 d2 r. After taking the free evolution into account, the following Heisenberg equation of motion for the field operator propagating in the +x-direction can be found 2 ∂ ∂ ˆ iω ∂ ˆ ˆ ˆ † (x, t)Ψ(x, + iχ t) Ψ(x, t), (3.237) + Ψ(x, t) = Ψ e ∂x ∂t 2 ∂x2 , , , 1 , ω = ∂ 2 ω2 ,, . In a comoving reference frame, this where v = v(k 1 ) = ∂ω ∂k k=k ∂k k=k 1 reduces to the usual quantum nonlinear Schr¨odinger equation v ∂ ˆ ω ∂ 2 ˆ † (xv , t)ψˆ 1 (xv , t) ψˆ 1 (xv , t), − χ ψ1 (xv , t) = − ψ e 1 ∂t 2 ∂ xv2 ˆ v + vt, t). In the case of anomalous dispersion which occurs where ψˆ 1 (xv , t) = Ψ(x at wavelength longer than 1.5 μm, allowing solitons to form, the second derivative ω can be expressed as ω = , m where m = ω is an effective mass of a particle. Similarly, the nonlinear term χe describes an interaction potential V (xv , xv ) = −χe δ(xv − xv ). This interaction potential is attractive when χe is positive as it is in most Kerr media. It is known that this potential has bound states and is one of the simplest exactly soluble known quantum field theories. The repulsive and attractive cases were investigated by Yang (1967, 1968). This theory is one-dimensional and tractable and does not need renormalization, while two- and three-dimensional versions do need renormalization. In calculations, it is preferable to substitute flux amplitude operator 1ˆ ˆ Φ(x, t) (3.241) Ψ(x, t) = v into equation (3.237). Drummond (1994) associates an idea of the spatial progression with the flux amplitude operator. Upon modifying the time variable, he obtains an “unusual form” of the quantum nonlinear Schr¨odinger equation, which he reduces to a more usual form again. Since the operators there have their standard 3 Macroscopic Theories and Their Applications meaning, they must have equal-time commutators. In contrast, the resulting equation (Drummond 1994) appears as the quantum nonlinear Schr¨odinger equation with time and space interchanged. Such an interpretation means that the operators have equal-space commutators. The problem is whether these commutators are well defined. An important physical effect in propagation is that from molecular excitations. For this reason, the nonlinear Schr¨odinger equation requires corrections due to refractive-index fluctuations for pulses longer than about 1 ps, especially when high enough intensities are present, and fails for pulse duration much shorter than this. The treatment of the quantum theory can start from the classical theory developed by Gordon (1986). The Raman interaction energy of a fibre is known to be [Carter and Drummond (1991)] WR = . ζ Rj ..D(¯x j )D(¯x j )δx j . Here D(¯x j ) is the electric displacement at the jth mean atomic location x j , δx j is the atomic displacement operator, and ζ Rj is a Raman coupling tensor. In order this interaction to be quantized, the existence of a corresponding set of phonon operators must be taken into account. The Raman effect can be included macroscopically through a continuum Hamiltonian term coupling photons to phonons of the form (Drummond and Hardman 1993) ∞ ∞ ˆ ω, t) + A ˆ † (z, ω, t)] dω dz ˆ ˆ ˆ † (z, t)Ψ(z, t)r (z, ω)[ A(z, Ψ HR = −∞ 0 ∞ ∞ ˆ † (z, ω, t) A(z, ˆ ω, t) dω dz, ωA (3.243) + −∞ where ˆ ˆ ω, t), A ˆ † (z , ω , t)] = δ(z − z )δ(ω − ω )1, [ A(z, and r (z, ω) is a macroscopic frequency-dependent coupling which can be assumed to be independent of z. Here the Raman excitations are treated as an inhomogeneously broadened continuum of modes localized at each longitudinal location z. The corresponding coupled set of nonlinear operator equations is ∂ ˆ iω ∂ ∂ † ˆ ˆ ˆ + iχe Ψ (z, t)Ψ(z, t) Ψ(z, t) Ψ(z, t) = v + ∂z ∂t 2 ∂z 2 ∞ † ˆ ˆ ˆ r (ω)[ A(z, ω, t) + A (z, ω, t)] dω Ψ(z, t) −i 0 (3.245) and ∂ ˆ ˆ ω, t) − ir (ω)Ψ ˆ ˆ † (z, t)Ψ(z, t). A(z, ω, t) = −i A(z, ∂t Dispersive Nonlinear Dielectric The phonon operators do not have white-noise behaviour, but a coloured noise property. In practical terms, the known exact solutions (Yang 1968) of the quantum nonlinear Schr¨odinger equation can be hardly utilized at typical photon numbers of 109 . It is often more useful to employ phase-space distributions or operator distributions such as the Wigner representation (Wigner 1932) and the Glauber–Sudarshan Prepresentation (Glauber 1963, Sudarshan 1963). In the review (Drummond 1994), the generalized P-representation is mentioned and the positive P-representation is used. Using this method, the operator equations are transformed to complex Itˆo stochastic equations which involve only c-number commuting variables. In other words, an operator equation can be transformed to an equivalent pair of c-number stochastic equations for Ψ(z, t) and A(z, ω, t). On the transformation (3.231), equations (3.245) and (3.246) are ready for the positive P-representation. Thus, equivalent stochastic differential equations are obtainable. Substituting the integrated phonon variables into the equations for the photon field gives the following equation for a new function φ(ζ, τv ): i ∂2 ∂ φ(ζ, τv ) φ(ζ, τv ) ≈ [i f φ † (ζ, τv )φ(ζ, τv ) ± ∂ζ 2 ∂τv2 τv h(τv − τv )φ † (ζ, τv )φ(ζ, τv ) dτv +i −∞ if Γ(ζ, τv ) + iΓR (ζ, τv )]φ(ζ, τv ). + n¯ There is a corresponding Hermitian conjugate equation for φ † obtained by making † † the substitutions φ → φ † , i → −i, Γ → Γ† , ΓR → ΓR , with Γ, Γ† , ΓR , ΓR being independent noise terms. In relation (3.247), dν |k |v 2 ν r , sin(ντv ) , n¯ = h(τv ) = 2 t0 χ χ t0 0 t2 χe ω z ζ = , f = , z 0 = 0 , k = − 3 . z0 χ |k | v (3.248) (3.249) The last terms appearing in Equation (3.247) are stochastic functions. Γ represents the quantum noise of a field introduced by the electronic nonlinearity and ΓR is the thermal noise due to the phonon coupling. In numerical simulation, the use of an enlarged nonclassical phase space can increase computation times. For this reason, the Wigner function defined on a classical phase space is useful. The Wigner function does not have an exact stochastic equation. This is because there are third-order derivative terms in the Fokker–Planck equation for the Wigner function that have no stochastic equivalent. In sufficiently intense fields, the disagreeable terms can be neglected. The Wigner function represents symmetrically ordered operators which 3 Macroscopic Theories and Their Applications have a diverging vacuum noise term, because a cutoff is not considered or is taken to be infinity. As the χ (3) nonlinearity in silica optical fibres is low, Drummond and He (1997) have proposed the investigation of a quantum soliton, which occurs in parametric waveguides. Such an object consists of a superposition of a second-harmonic photon with a localized pair of subharmonic photons. The system is analogous to the quark model of the meson. On the simplifying assumption that the medium is homogeneous and isotropic, Milonni (1995) has considered the classical expression for the field energy and lifted the restriction to the magnetic susceptibility independent on frequency (cf. (3.177)). Then he has applied the results to treat basic emission and absorption processes for atoms in dispersive dielectric host media. Using this simple approach to quantization, Milonni and Maclay (2003) have shown how radiative recoil, the Doppler effect, and spontaneous and stimulated radiation rates are set up when the radiator is embedded in a host medium having a negative index of refraction. Matsko and Kozlov (2000) have presented an approach which absorbs the results of two previous studies (Drummond and Carter 1987, Haus and Lai 1990). The two theories have been shown to provide similar outcomes of a homodyne measurement. It has been concluded that both equal-time and equal-space commutation relations are valid for the quantum soliton description. In Matsko and Kozlov (2000) the work with physical units could be amended. Korolkova et al. (2001) have studied a quantum soliton in a Kerr medium. They have simplified, implicitly, the classical propagation equation for the slowly varying electric-field envelope by introducing a new time measurement in dependence on a position. In changing to dimensionless variables, they make the new time a “position” and the position a “time” variable and then get a classical nonlinear Schr¨odinger equation. Raymond Ooi and Scully (2007) have studied three-level extended medium, which is utilized as an amplifier. They begin with a single three-level cascade atom and with a χ (2) crystal, which is described by coupled parametric amplifier equations (Boyd 2003, Yariv 1989, Shen 1984). Further they present the theory and the results of the three-level cascade scheme. They compare this model with the simple one. They calculate cross correlation of the idler Eˆ 1 (z, t) and the signal Eˆ 2 (z, t), † ˆ ˆ ˆ ˆ G (2) 21 (τ ) ≡ E 1 (z, t) E 2 (z, t + τ ) E 2 (z, t + τ ) E 1 (z, t) . The neglect of the Langevin noise seems to be admissible especially for large detuning of the pump. They calculate also reverse correlation, † ˆ ˆ ˆ ˆ G (2) 12 (τ ) ≡ E 2 (z, t) E 1 (z, t + τ ) E 1 (z, t + τ ) E 2 (z, t) . They complete the observation of antibunching and oscillations in the reverse correlation with an interesting physics of the three-level atomic system. Modes of Universe and Paraxial Quantum Propagation 3.3 Modes of Universe and Paraxial Quantum Propagation The laser physics and the optical engineering are typical of their calculation methods and the effort for their improvement is apparent. The laser cavity is coupled to the outer space and the mode coupling can be investigated in detail. The paraxial description of light propagation can be quantized. Detectors of radiation with a spatial resolution motivate the inclusion of the optical imaging in quantum optics. 3.3.1 Quasimode Description of Spectrum of Squeezing Toward the end of the 1980s, it had become clear that the use of squeezed states (Walls 1983, Loudon and Knight 1987) in the interferometry can lead to the enhancement of signal-to-noise ratios. Milburn and Walls (1981) have shown that the cavity of a degenerate parametric oscillator admits only a 50% amount of squeezing (in the steady state). Yurke was first to realize that the pessimistic conclusions do not hold as the noise reduction in the transmitted field can be quite different from that in the intracavity field (Yurke 1984). As a first step one had to relate the field operators inside and outside the cavity. Whereas it was obvious that the field operators inside the cavity remain the usual quantum-mechanical annihilation operators of one or a small number of harmonic oscillators, the connection of the field operators outside the cavity with the “Langevin-noise operators” was established as late as 1980s by Collett and Gardiner (1984), Gardiner and Collett (1985), and Carmichael (1987). These authors have cleared up the relation of this subtle property of squeezed light and its generation with the concept of light propagation. Not only the interpretation but also the derivation of the Langevin-like “noise” terms was presented by Lang and Scully (1973), after they introduced and studied the “modes of the universe” (Lang 1973, Ujihara 1975, 1976, 1977). It is in order to mention a book of Scully and Zubairy (1997), where the results of Gea-Banacloche et al. (1990a) are expounded or formulated as exercises. The modes of universe are discussed, which include the interior of the imperfect cavity of interest, and are used to define the intracavity quasimode, the incident external field mode, and the output field mode. The mutual coupling of these modes emerges naturally in this formalism. Following Lang (1973), the one-sided empty cavity is described also by the relation 2l (+) (+) ˆ ˆE cav (t) = r˜ E cav t − (3.252) + t˜ Eˆ in(+) (t), c where l is the cavity length, r˜ is a real amplitude reflection coefficient, and t˜ is √ a respective transmission coefficient, t˜ = 1 − r˜ 2 . Here Eˆ in(+) (t) is a positivefrequency part of the input field and it fulfils the commutation relation ˆ [ Eˆ in(+) (t), Eˆ in(−) (s)] = K δ(t − s)1, 3 Macroscopic Theories and Their Applications where Eˆ in(−) (t) = [ Eˆ in(+) (t)]† and K = Ω , 20 cA with Ω being a quasimode frequency. For the full Fox–Li quasimode (Fox and ˆ is Li 1961, Barnett and Radmore 1988), a single-mode annihilation operator a(t) defined 2l ˆ (+) ˆ = (3.255) E (t)eiΩt . a (t) K c cav (+) (+) It is convenient to use the slowly varying amplitudes Ein(+) (t), Eout (t), and Ecav (t) for the input, output, and cavity fields related to the cavity frequency Ω, respectively, Eˆ in(+) (t) = Ein(+) (t)e−iΩt , (+) (+) Eˆ out (t) = Eout (t)e−iΩt , (+) (+) (t) = Ecav (t)e−iΩt . Eˆ cav We propose to consider the definition t c ˆ = a(t) E (+) (t ) dt K 2l t− 2lc cav (3.256) (3.257) (3.258) instead of (3.255), which is in a better agreement with the quantum field theory. To approve this change we denote the rightward and leftward travelling positive(+) (t) frequency parts as Eˆ >(+) (z, t) and Eˆ <(+) (z, t) so that Eˆ in(+) (t) ≡ Eˆ >(+) (−0, t), Eˆ out (+) (t) ≡ Eˆ >(+) (+0, t). The factor at the delta function in the equal≡ Eˆ <(+) (−0, t), Eˆ cav time commutator of the field is K c and from this we can calculate the equal-space commutator (t > 0, s > 0 without loss of generality) [ Eˆ in(+) (t), Eˆ in(−) (s)] = [ Eˆ >(+) (−0, t), Eˆ >(−) (−0, s)] = [ Eˆ >(+) (−ct, 0), Eˆ > (−) (−cs, 0)] ˆ = K cδ(−ct + cs)1ˆ = K δ(t − s)1. Hence, definition (3.259) can be rewritten in the form, l (+) 1 ˆ = a(t) E> (z, t) + E<(+) (z, t) dz, K c2l 0 (3.260) (3.261) where (cf. (3.258)) E>(+) (z, t) and E<(+) (z, t) mean the slowly varying smooth amplitude with the property Ω E>(+) (z, t) = Eˆ >(+) (z, t)e−ik0 z+iΩt , k0 = , c E<(+) (z, t) = Eˆ <(+) (z, t)eik0 z+iΩt . (3.263) (3.264) Modes of Universe and Paraxial Quantum Propagation Performing the time integration on both sides of relation (3.252), we obtain that 2l ˆ ˆ = r˜ aˆ t − a(t) + t˜b(t), c where ˆ = b(t) c K 2l t− 2lc Ein(+) (t ) dt . ˆ Recalling the space integration, we see that the annihilation operator a(t) correˆ sponds to the same quasimode in all times, whereas b(t) is appropriate to many distinct modes. In the situation when it holds that 2l d 1 2 2l ˜ ˆ ˆ − a(t), (3.267) ≈ 1 − t a(t) r˜ aˆ t − c 2 c dt in the short cavity round-trip time limit we get √ d c ˆ ˆ = −Γa(t) ˆ + 2Γ a(t) b(t), dt 2l where Γ= c 2l 1 2 t˜ . 2 Noting that c ˆ b(t) = 2l = 1 c K 2l t t− 2lc Ein(+) (t ) dt 1 (+) E (t), K in where we replaced the average over the short-time interval by the value of the function (at the upper limit), we rewrite equation (3.268) as a quantum Langevin equation √ d ˆ = −Γa(t) ˆ + 2Γ a(t) dt 1 (+) E (t). K in Further, Gea-Banacloche et al. (1990a) define, for arbitrary measurement times, the spectrum of squeezing of the output field via the quadrature variances. They present a microscopic effective Hamiltonian model of balanced homodyne detection. 3 Macroscopic Theories and Their Applications They refer to the fundamental papers (Collett and Gardiner 1984, Gardiner and Collett 1985, Caves and Schumaker 1985, Yurke 1985), where this concept of spectral squeezing was originally treated. As an approximation, the operator is introduced N ˆ˜ (δω) = √ A out T (+) Eout (t)eiδωt dt, where N = 1 . K As shown also by Yurke (1985) and Carmichael (1987), with a balanced homodyne detector one measures the combinations Eoutθ (t) = 1 iθ (+) (−) (t) , e Eout (t) + e−iθ Eout 2 and from this the natural generalization of the single-mode quadrature concept is ˆ˜ (δω) = 1 eiθ A ˆ˜ (δω) + e−iθ A ˆ˜ † (−δω) . A outθ out out 2 We may wonder why a non-Hermitian operator is taken for such a generalization of the Hermitian operator. Finally, the connection between single quasimode squeezing and spectral squeezing is explored and the difference in the noise reduction inside and outside the cavity is clarified in a way that lends itself to a simple visualization. Gea-Banacloche et al. (1990b) have first analysed measurements of small phase or frequency changes for an ordinary laser and calculated the extra cavity phase noise for a phase-locked laser. These analyses are based on the mean values and the normally ordered variances of quantum operators for which classical Langevin equations may be written down. The classical Langevin formalism is further replaced by the alternative Fokker–Planck formalism for the calculation of the spectrum of squeezing. This general Fokker–Planck formalism was applied to the two-photon correlated-spontaneous-emission laser. It has been shown that without one-photon resonance and initial atomic coherences involving the middle level, the maximum squeezing of the ultracavity mode is 50% while the detected field can be almost perfectly squeezed. Almost the exact reverse holds, however, if one-photon resonance and initial atomic coherences involving the middle level are present. In particular, the intracavity field may be perfectly squeezed while the outside field is not only unsqueezed but has, in fact, increased noise in the conjugate quadrature. Finally, the effect of finite measurement time on the quadrature variances is briefly analysed. Dutra and Nienhuis (2000) have unified the concept of normal modes used in quantum optics and that of Fox–Li modes from semiclassical laser physics. Their one-dimensional theory solves the problem of how to describe the quantized radiation field in a leaky cavity using Fox–Li modes. In this theory, unlike conventional Modes of Universe and Paraxial Quantum Propagation models, system and reservoir operators no longer commute with each other, as a consequence of natural cavity modes having been used. Aiello (2000) has derived simple relations for an electromagnetic field inside and outside an optical cavity, limiting himself to one- and two-photon states of the field. He has expressed input– output relations using a nonunitary transformation between intracavity and output operators. Brown and Dalton (2002) have considered three-dimensional unstable optical systems. They have defined non-Hermitian modes and their adjoints in both the cavity and external regions. A number of concepts and properties resulting from the standard canonical quantization procedure have been suited to the non-Hermitian modes by the exact transformation method. The results are applied to the spontaneous decay of a two-level atom inside an unstable cavity. 3.3.2 Steady-State Propagation In Deutsch and Garrison (1991a) it is assumed that in the case of amplifier, one is usually interested in the spatial dependence of temporally steady-state fields. It is no attempt at a reformulation of one-dimensional propagation, cf. Abram and Cohen (1991), where the temporal evolution by the Hamiltonian is supplemented by the spatial progression with the momentum operator. An alternative proposal is made that the quantum-mechanical equivalent of the classical steady-state condition is the description of the system by a stationary state of a suitable Hamiltonian. There is a formal resemblance to a nonrelativistic many-body theory for a complex scalar field (Deutsch and Garrison 1991b), which helps determine the Hamiltonian. In this ˆ theory a non-Hermitian envelope-field operator Ψ(z, t), with the property ˆ ˆ ˆ † (z , t)] = δ(z − z )1, [Ψ(z, t), Ψ is introduced. In the application to the optical field, the vector potential (or electric field in the lowest order) corresponding to a carrier plane wave of a given polarization e is expressed as follows: Eˆ ω(+) (z, t) = e 2πω ˆ Ψ(z, t) exp[i(kz − ωt)], An 2 (ω) where A is the beam area and n(ω) is a dispersive index of refraction. In contrast to Deutsch and Garrison (1991a), we will make a simplification, i.e. we will not consider a carrier wave Hamiltonian. For the single wave interacting nonlinearly with matter, the total Hamiltonian can be written as ˆ int , ˆ =H ˆ env + H H 3 Macroscopic Theories and Their Applications ˆ env = − ic H 2n(ω) ˆ ˆ† ˆ ˆ † (z, t) ∂ Ψ(z, t) − ∂ Ψ (z, t) Ψ(z, t) dz, × Ψ ∂z ∂z ˆ env is the Hamiltonian governing the free progression of the envelope and where H ˆ int is a general interaction Hamiltonian. In fact, the generality will not be exerH cised and we will treat only the vacuum input and the case of degenerate parametric amplifier. In the standard Heisenberg picture, the equation of motion for the envelope-field operator reads ˆ i ˆ ∂ Ψ(z, t) ˆ ], = − [Ψ(z, t), H ˆ ˆ ∂ Ψ(z, t) c ∂ Ψ(z, t) =− . ∂t n(ω) ∂z ct ˆ ˆ ,0 . Ψ(z, t) = Ψ z − n(ω) or for a linear medium The solution is In the standard Schr¨odinger picture, the state |Φ evolves by the Schr¨odinger equation i ˆ ∂|Φ ˆ = − (H env + Hint )|Φ . ∂t The introducing of the carrier wave Hamiltonian has revoked the considering of the Schr¨odinger picture (Deutsch and Garrison 1991b), along with the envelope picture which we have confined ourselves to. Relation (3.277) is the positive-frequency component of the electric-field in the envelope picture similarly as relation (4.5b) in Caves and Schumaker (1985) is this component in the interaction picture. The envelope picture is essentially the modulation picture in Caves and Schumaker (1985). For application under consideration there will be exact frequency matching between the carrier frequencies of the various waves which interact so that the Hamiltonian in equation (3.283) will be independent of time, thus the steady state (ss) solutions are identified with the stationary solutions to equation (3.283): ˆ int |Φ ss = λ|Φ ss . ˆ env + H H For the stationary solutions, the label (ss) will be omitted. Modes of Universe and Paraxial Quantum Propagation (i) In the case ˆ ˆ int = 0, H i.e. in the case of vacuum propagation, the stationary solutions are the translationinvariant states. To have a unitary representation of the translation, we may consider either the limits −∞, ∞ in the integral on the left-hand side in (3.279) or the spatial periodicity. We prefer the latter possibility. As an example, we can consider a coherent state corresponding to a constant one-photon wave function. We define a functional displacement operator ˆ D[α] ≡ exp ˆ ˆ † (z) − α ∗ (z)Ψ(z)] dz [α(z)Ψ ˆ = exp(ρ aˆ † − ρ a), (3.286) (3.287) ρ= aˆ ≡ aˆ |α(z)|2 dz, α 1 ˆ α ∗ (z)Ψ(z) dz. ≡ ρ ρ (3.288) (3.289) The coherent state is defined as the displaced vacuum ˆ |{α} ≡ D[α]|0 . ˆ ˆ aˆ † ] = 1, [a, relation (3.286) can be rewritten in the normal ordering form 1 2 ˆ |α(z)| dz D[α] = exp − 2 ˆ † (z) dz exp − α ∗ (z)Ψ(z) ˆ × exp α(z)Ψ dz and the corresponding one-photon state is , * , ,1, α ≡ 1 α(z)Ψ ˆ † (z)|0 dz. , ρ ρ On substituting |Φ = |{α} into relation (3.284) and applying then the operator ˆ † [α] from left to both sides, we derive that α(z) and λ should make the vacuum D 3 Macroscopic Theories and Their Applications state the eigenstate of the operator ˆ ∂ Ψ(z) † + α(z)1ˆ i c ˆ env D[α] ˆ ˆ † [α] H ˆ (z) + α ∗ (z)1ˆ Ψ =− D 2 n(ω) ∂z † ∗ ˆ (z) + α (z)1ˆ ∂ Ψ ˆ − Ψ(z) + α(z)1ˆ dz. (3.294) ∂z The eigenvalue is λ again. On equating c † ˆ ˆ ˆ ˆ † (z) dα(z) |0 dz Ψ D [α] Henv D[α]|0 = −i n(ω) dz c dα(z) α ∗ (z) − i dz|0 n(ω) dz with λ|0 , we see that dα(z) =0 dz λ = 0. and hence Instead of a translation-invariant wave function, we may try one that is an eigenfunction of the translation operator. When the boson number operator commutes with the operator ˆ int , ˆ =H ˆ env + H on the left-hand side of (3.284), this problem can be generalized by the insertion of the number operator Nˆ , ˆ ˆ † (z)Ψ(z) dz, (3.299) Nˆ = Ψ behind λ on the right-hand side. The idea takes into account that the wave function ˆ of any number of particles should be the eigenfunction of the operator A ˆ exp(i = exp(iλ Nˆ )|Φ ˆ − λ Nˆ ))|Φ = |Φ . exp(i( A Condition (3.301) is equivalent to ˆ − λ Nˆ )|Φ = 0 (A or to the relation with the insertion. On substituting again Φ into the new relation, ˆ † [α], we obtain the right-hand side in the form and applying the operator D Modes of Universe and Paraxial Quantum Propagation ˆ † (z) + α ∗ (z)1ˆ Ψ(z) ˆ Ψ + α(z)1ˆ dz. Condition (3.296) is thus generalized, − i c dα(z) = λα(z). n(ω) dz The solution to equation (3.304) reads α(z) = α(0) exp iλn(ω) z , c where λ is any real number. Expression (3.305) for the complex amplitude is suffiˆ cient for the fulfilment of the relation A|Φ = λ Nˆ |Φ . (ii) In the case of the degenerate parametric amplifier, the interaction Hamiltonian can be written (Hillery and Mlodinow 1984) as follows: ˆ int = − 1 H 2 χ (2) (z)Ep∗ (z) exp[−i(kp z − ωp t)][ Eˆ ω(+) (z, t)]2 + H.c. dx dy dz, where ωp is the pump frequency, ωp = 2ω, χ (2) (z) is the second-order susceptibility coupling the pump to the degenerate signal and idler fields and Ep (z) is the pump amplitude. Substituting for Eˆ ω(+) (z, t) from relation (3.277) gives the interaction Hamiltonian in the envelope picture ˆ int = i c H 2 n(ω) ˆ 2 (z) − κ(z)Ψ ˆ †2 (z)] dz, [κ ∗ (z)Ψ with 1 g0 (z) exp[iφ(z)], 2 4π ω (2) g0 (z) = |χ (z)||Ep (z)|, n(ω)c π φ(z) = − + Δk z + β(z). 2 κ(z) = (3.308) (3.309) (3.310) Here g0 (z) is the standard power gain coupling constant (Yariv 1985), Δk = 2k − kp is the phase mismatch at the degenerate frequency, and β(z) is the remaining phase originating from the product χ (2) (z)Ep∗ (z). 3 Macroscopic Theories and Their Applications To solve the time-independent Schr¨odinger equation, Deutsch and Garrison (1991a) assume that the eigenstate is a squeezed vacuum state corresponding to a two-photon wave function. They define a functional squeezing operator ˆ ] ≡ exp 1 [ξ (z)Ψ ˆ 2 (z)] dz , ˆ †2 (z) − ξ ∗ (z)Ψ S[ξ 2 with z-dependent squeezing parameter ξ (z) = −r (z) exp[iθ (z)]. The squeezed vacuum is defined as ˆ ]|0 . |0 {ξ } ≡ S[ξ Similarly as in case (i), ξ (z) and λ should be solutions of the equation ˆ env + H ˆ int ) S[ξ ˆ ]|0 = λ|0 . Sˆ † [ξ ]( H ˆ Applying the operator Ψ(z) to both the sides of (3.313) and taking into account that ˆ env + H ˆ int ) S[ξ ˆ ]Ψ(z)|0 , ˆ ˆ λΨ(z)|0 = 0 = Sˆ † [ξ ]( H we rewrite the eigenvalue problem in the λ-independent form ˆ˜ ˆ env + H ˆ int ) S[ξ ˆ ], Ψ(z) ˆ Sˆ † [ξ ]( H |0 {ξ } = 0 = C|0 , where the commutator Cˆ˜ is ic d ˆ ˜ C= − exp[iθ(z)] exp[−iθ (z)] cosh[r (z)] exp[iθ (z)] sinh[r (z)] n(ω) dz d − cosh[r (z)] sinh[r (z)] dz ˆ † (z) + κ ∗ (z) exp[iθ (z)] sinh2 [r (z)] − κ(z) exp[−iθ (z)] cosh2 [r (z)] Ψ d cosh[r (z)] + cosh[r (z)] dz d exp[−iθ (z)] sinh[r (z)] − exp[iθ (z)] sinh[r (z)] dz + κ ∗ (z) exp[iθ (z)] sinh[r (z)] cosh[r (z)] ∂ ˆ ˆ − κ(z) exp[−iθ (z)] sinh[r (z)] cosh[r (z)] Ψ(z) + Ψ(z) . ∂z Modes of Universe and Paraxial Quantum Propagation The eigenvalue condition requires that the real and imaginary parts of the coefficient ˆ † vanish yielding the desired propagation equations at Ψ dr 1 = g0 cos(θ − φ), dz 2 dθ = −g0 coth(2r ) sin(θ − φ). dz (3.317) (3.318) On introducing the complex amplitude ζ (z) = − exp[iθ (z)] tanh[r (z)], we can write equations (3.317) and (3.318) in the compact form d(−ζ ) = κ − κ ∗ζ 2, dz which may be useful for guessing the boundary condition r (z) |z=0 = 0, θ(z) |z=0 = φ(0). When β(z) = 0 and the phase difference θ (z) − φ(0) is small, the squeezing parameter r (z) integrates values of experimental parameter 12 g0 (z ), z ∈ [0, z], when parameter θ (z) converges to the function moreover g0 (∞) > |Δk|, the squeezing of the experimental parameters φ(z) − arcsin g0Δk . (∞) A direct solution of (3.313) requires that the real and imaginary parts of the ˆ † (z) vanish yielding the propagation equations (3.317) and (3.318) coefficient at Ψ ˆ Ψ ˆ † (z) indicates that λ has no finite again. The presence of the singular operator Ψ(z) value in general. Resorting to the partition of the field into finite elements oflength 1 , Δz in each of which we can define local field operators, we find that λ = O Δz λ− 1 c Δz n(ω) sin(θ − φ) sinh3 r dz. cosh r 3.3.3 Approximation of Slowly Varying Envelope The macroscopic approach to the quantum propagation aims at a quantum version of the slowly varying envelope approximation. Such an envelope implies that the wave is paraxial and monochromatic. The problem of quantum propagation of paraxial fields was considered first by Graham and Haken (1968). The revived interest is indicated by Kennedy and Wright (1988). Deutsch and Garrison (1991b) begin with generalizing the results of Lax et al. (1974), which develop the classical theory of a strictly monochromatic wave in an inhomogeneous nonlinear (perhaps amplifying) medium. The generalization is made only to a quasimonochromatic wave and the 3 Macroscopic Theories and Their Applications quantum theory is presented in the simplest system of codirectional propagation considering only the free-field dynamics. In the Coulomb gauge, the positive-frequency component of the vector potential satisfies the free-field wave equation ∇ 2 A(+) (x, t) − 1 ∂ 2 (+) A (x, t) = 0. c2 ∂t 2 The approximation of the slowly varying envelope is introduced by expressing A(+) (x, t) as envelope modulating a carrier plane wave propagating in the z-direction with the wave number k0 and the frequency ω0 = ck0 , A(+) (x, t) = A0 Ψ(x, t) exp[i(k0 z − ω0 t)]. Here, Ψ(x, t) is a vector-valued function, henceforth referred to as an envelope field, and A0 is a normalization constant, which we will specify before relation (3.333). The initial positive-frequency component can be expressed as follows: A(+) (x, t = 0) = 1 (2π) c eλ (k)Fλ (k)eik.x d3 k. 2|k| λ=1,2 Here eλ (k) are the orthogonal polarization unit vectors and the reduced Planck constant is introduced in view of the possible later quantization. Fλ (k) are thus momentum-space wave functions. The intuitive notion of a paraxial field is that it is composed of rays making small angles with the main propagation axis. In other words, a paraxial wave function {Fλ (k)} is concentrated in a small neighbourhood k0 = k0 e3 of the wave vector of the carrier wave. We define f λ (q) by the relation f λ (q) = Fλ (q + k0 ), where q is the relative wave vector. Let us observe that q = (qT , qz ), where qT is the transverse part of q, q = qT + qz e3 . In contrast to Deutsch and Garrison (1991b), we stress that we express the concentration in a small neighbourhood of q0 = 0, by letting the wave function { f λ (q)} depend on a small positive parameter θ, f λ (q) ≡ f λ (q, θ ). Let us assume that f λ (qT , qz , θ ) = qT qz , 2 ,θ , θ k0 θ k0 V fλ where V= 1 θ 4 k03 Modes of Universe and Paraxial Quantum Propagation and we have introduced the notational convention that an overbar indicates a dimensionless function of the scaled variables (and perhaps θ). This relates to defining a dimensionless “momentum” vector η = (ηT , ηz ), where qT ηT = , (3.329) θ k0 qz (3.330) ηz = 2 . θ k0 The functions of interest are those that have a convergent power-series expansion in θ , f λ (η, θ ) = θ n f λ (η). In contrast to Deutsch and Garrison (1991b), we note that (0) f λ (η, θ ) = f λ (η). Relations (3.329) and (3.332) lead to the wave function being θ -dependent, a difference from Deutsch and Garrison (1991b). Substituting integral (3.325) for the envelope field defined by (3.324) at t = 0, with the / momentum-space wave function given by equation (3.326), and choosing A0 = c , 2k0 we find that Ψ(x, t = 0, θ ) = (2π) 2 k0 eλ (q + k0 ) f λ (q, θ )eiq.x d3 q. |q + k0 | λ=1,2 Here, the parameter θ has been introduced, which is not present in integral (3.325), where Fλ (k) ≡ Fλ (k, θ ), A(+) (x, t) ≡ A(+) (x, t, θ ). In Deutsch and Garrison (1991b), the integro-differential form of the wave equation for A(+) (x, t, θ ) is investigated, i ∂ (+) 1 A (x, t, θ ) = c(−∇ 2 ) 2 A(+) (x, t, θ ), ∂t where (−∇ 2 ) 2 is an integral operator defined by 1 1 ik.x 3 ˜ |k| F(k)e d k, (−∇ 2 ) 2 F(x) = 3 2 (2π) ∂ ˜ ). Substituting with F(k) being the Fourier transform. Let us note that ∇ = (∇T , ∂z from (3.324) into (3.334) gives ∂ Ψ(x, t, θ ) = (cΩ − ω0 )Ψ(x, t, θ ), ∂t 3 Macroscopic Theories and Their Applications where 1 ∂ ∂ ∂2 2 . Ω ≡ Ω ∇T , = k02 − 2ik0 − ∇T2 − 2 ∂z ∂z ∂z The scaled configuration-space variables ξ = (ξ T , ζ ) are ξ T = θ k0 xT , ζ = θ 2 k0 z, and the dimensionless time variable τ = θ 2 ω0 t. After expressing the envelope field in the form 1 Ψ(x, t, θ ) ≡ Ψ(xT , z, t, θ ) = √ Ψ(θ k0 xT , θ 2 k0 z, θ 2 ω0 t, θ ), V we can rewrite relation (3.336) as follows: i ∂ Ψ(ξ , τ, θ ) = H(θ )Ψ(ξ , τ, θ ), ∂τ where 1 [ Ω(θ ) − 1 ], 2 θ Ω θ k0 ∇ T , θ 2 k0 ∂ζ∂ H(θ ) = Ω(θ ) = (3.342) . This provides the possibility of expanding the differential operator H(θ ), H(θ ) = θnH , n=0 (n) where the differential operators H are just defined by the formal expression. The dimensionless amplitude Ψ(ξ , τ, θ ) has the expansion Ψ(ξ , τ, θ ) = θ n Ψ (ξ , τ ). It is evident that the terms satisfy the equations i n ∂ (n) (n−m) (m) Ψ (ξ , τ ) = H Ψ (ξ , τ ), n = 0, 1, 2, . . . . ∂τ m=0 Modes of Universe and Paraxial Quantum Propagation In Deutsch and Garrison (1991b), the discussion of the classical equation of motion is completed by considering the initial-value problem. We rewrite equation (3.333) as Ψ(x, θ ) = Kλ (q) f λ (q, θ )eiq·x d3 q, (2π) 2 where the function Kλ (q) is defined by k0 e˜ λ (q), |q + k0 | Kλ (q) = where e˜ λ (q) = eλ (q + k0 ). Reexpressing (3.347) in terms of the scaled variables gives Ψ(ξ , τ = 0; θ ) = (2π) 2 Kλ (η, θ ) f λ (η)eiη·ξ d3 η. The scaled kernel function is e˜ λ (η, θ ) Kλ (η, θ ) = √ , w(η, θ ) where w(η, θ ) = / 1 + θ 2 (2ηz + ηT2 ) + θ 4 ηz2 . Considering the expansion Kλ (η, θ ) = θ n Kλ (η), we obtain the initial expansion of the envelope fields (n) Ψ (ξ ) = 1 (2π) Kλ (η) f λ (η)eiη·ξ d3 η. (n) Deutsch and Garrison (1991b) claim that the preceding arguments can be used to identify the subspace of the photon Fock space consisting of the paraxial states 3 Macroscopic Theories and Their Applications of the field. They resorted to the space S, the infinitely differentiable functions that decrease, when |η| → ∞, faster than any power of |η|−1 . † Let us recall the standard plane wave creation and annihilation operators aˆ λ (k), aˆ λ (k), and the wave packet creation and annihilation operators † † aˆ [F ] = Fλ (k)aˆ λ (k) d3 k (3.355) λ=1,2 and its conjugate. We now define † cˆ [ f ] = † cˆ λ (q) f λ (q) d3 q, where † † cˆ λ (q) = aˆ λ (q + k0 ) are the creation operators corresponding to the envelope field. Before we generalize the definition of a state with exactly one photon present, | f ; 1 = cˆ † [ f ]|0 , where f is a normalized one-photon wave function, we return to the original operators. Let us proceed with the generalized commutation relations † ˆ ˆ ˆ ], aˆ [G] = (F , G)1 = a[F Fλ∗ (k)Gλ (k) d3 k 1. (3.359) λ=1,2 If F is normalized, then ˆ ˆ ], aˆ † [F ] = 1. a[F Let us observe that † † aˆ †2 [F ] = Fλ1 λ2 (k1 , k2 )aˆ λ1 (k1 )aˆ λ2 (k2 ) d3 k1 d3 k2 , λ1 where Fλ1 λ2 (k1 , k2 ) = Fλ1 (k1 )Fλ2 (k2 ) exemplifies a normalized symmetric function. We are led to the definition of the state with exactly two photons present 1 |F ; 2 = √ 2 Fλ1 λ2 (k1 , k2 )aˆ λ1 (k1 )aˆ λ2 (k2 )|0 d3 k1 d3 k2 , Modes of Universe and Paraxial Quantum Propagation where F is a normalized symmetric two-photon wave function. In general, 1 |F; m = √ m! Fλ1 ...λm (k1 , . . . , km ) × aˆ λ1 (k1 ) . . . aˆ λm (km )|0 d3 k1 . . . d3 km , m ≥ 1, where F is this time a normalized symmetric wave function of m photons and |F; 0 = F|0 , with F a complex unit. This almost completes the definition of the Fock space, since any element of this space has the form |Φ = |F (m) ; m where F (m) are (unnormalized) symmetric m-photon wave functions, m ≥ 1, and F (0) is a complex number. The pure state |Φ is normalized if and only if the functions F (m) are jointly normalized by ∞ (F (m) , F (m) ) = 1. Similarly, 1 | f ; m = √ m! † f λ1 ...λm (q1 , . . . , qm ) × cˆ λ1 (q1 ) . . . cˆ λm (qm )|0 d3 q1 . . . d3 qm , m ≥ 1, where f λ1 ...λm (q1 , . . . , qm ) = Fλ1 ...λm (q1 + k0 , . . . , qm + k0 ) | f ; 0 = f |0 , where f = F, and any element of the Fock space has the form |Φ = ∞ m=0 | f (m) ; m , 3 Macroscopic Theories and Their Applications where f λ(m) (q1 , . . . , qm ) = Fλ(m) (q1 + k0 , . . . , qm + k0 ) 1 ...λm 1 ...λm and f (0) = F (0) . For the subsequent analysis, a unitary operator Tˆ (θ ) is of interest such that Tˆ (θ )| f ; m = | f (θ ); m , where (cf. (3.327)) m f λ1 ...λm (qT1 , qz1 , . . . , qTm , qzm , θ ) = V 2 × f λ1 ...λm qTm qzm qz1 ,..., , 2 . 2 θ θ θ (3.374) In the description of the dynamics using the Schr¨odinger picture, the paraxial approximation means mainly the evolution of the initial state |Φ(t, θ ) , i ˆ ∂ |Φ(t, θ ) = − H |Φ(t, θ ) , ∂t |Φ(t, θ ) |t=0 = Tˆ (θ )|Φ (t = 0) . |Φ (t, θ ) = Tˆ † (θ )|Φ(t, θ ) , Defining the state for all times, we can rewrite (3.375) in the form ∂ i ˆ (θ )|Φ (t, θ ) , |Φ (t, θ ) = − H ∂t ˆ (θ ) = Tˆ † (θ ) H ˆ Tˆ (θ ). H Using the expansion ˆ (θ ) = H ˆ (m) , θm H we may expand equation (3.378) into the coupled equations for the coefficients of the series |Φ (t, θ ) = ∞ m=0 θ m |Φ(m) (t) . Modes of Universe and Paraxial Quantum Propagation Describing the dynamics in the Heisenberg picture, we should generalize relations ˆ (3.379) and (3.380) to an arbitrary operator M(t), ˆ Tˆ (θ ), ˆ (t, θ ) = Tˆ † (θ ) M(t) M ∞ ˆ (t, θ ) = ˆ (m) (t). M θm M (3.382) (3.383) We can then rewrite the equation of motion i ˆ ∂ ˆ ˆ (t)] M(t) = − [ M(t), H ∂t i ˆ ∂ ˆ ˆ (t, θ )]. (t, θ ), H M (t, θ ) = − [ M ∂t in the form We may expand this equation into coupled equations similarly as (3.378). Since ˆ M ˆ (t, 1) = M(t), ˆ Tˆ (1) = 1, ˆ relation (3.383) simplifies for θ = 1, or any operator M(t) can be expressed as ˆ M(t) = ˆ (m) (t). M In both the pictures, definition (3.339) can be used whenever it is advantageous. ˆ According to Deutsch and Garrison (1991b), we introduce the operator Ψ(x) by ˆ relation (3.333), with Ψ(x, t = 0, θ ) → Ψ(x) on the left-hand side and f λ (q, θ ) → cˆ λ (q) on the right-hand side. This operator can be expanded as ˆ Ψ(x) = ˆ (n) (x), Ψ ˆ (n) (q)ˆcλ (q)eiq·x d3 q. K λ where ˆ (n) (x) = Ψ 1 (2π) Using this expansion, we can compute the commutators between fields of different orders that are 1 (m)† (n) iq·(x−x ) 3 ˆ ˆ Kˆ λi(n) (q) Kˆ λ(m) d q. (3.390) [Ψi (x), Ψ j (x )] = 3 j (q)e (2π) 2 λ=1,2 3 Macroscopic Theories and Their Applications As expected, the nth-order commutator can be expressed as † ˆ i (x), Ψ ˆ j (x )](n) = [Ψ ˆ i(n−m) (x), Ψ ˆj [Ψ (x )]. The equal-time commutation relations are preserved by the dynamics in each order of the approximation scheme, ∂ ˆ ˆ ˆ †j (x , t)](n) = 0. [Ψi (x, t), Ψ ∂t In the zeroth order, the theory yields a quantized analogue of the classical paraxial wave equation, and formally resembles a nonrelativistic many-particle theory. This formalism is applied to show that Mandel’s local-photon-number operator and Glauber’s photon-counting operator reduce, in the zeroth order, to the same true number operator. In addition, it is shown that the O(θ 2 )-difference between them vanishes for experiments described by stationary coherent states. A nonperturbative quantization of a paraxial electromagnetic field has been achieved by forcing the plane waves involved in the expression for the vectorpotential operator to obey paraxial wave equations at the time origin (Aiello and Woerdman 2005). 3.3.4 Optical Imaging with Nonclassical Light In optical imaging with nonclassical light or quantum imaging it is important to know how quantum entanglement properties of light beams in the spatial domain can be exploited in order to improve the quality of processing of images and of parallel signals (Gatti 2003). In this section, we first expound some general concepts as spatially multimode squeezing and spatial entanglement, and describe some optical devices that are able to generate light beams with these properties (Kolobov 1999). Then we provide some references to interesting approaches in this field. Kolobov (1999) enriches exposition of the usual quantum optics by new facts. The time moments are completed with space points ρ. It is connected with existence of very small photodetectors or pixels. The observed quantity is the surface ˆ t). photocurrent density operator, an Hermitian operator, i(ρ, Let the photodetection plane be located at the point with longitudinal coordinate z normal to the z-axis. Let Eˆ (+) (z, ρ, t) mean the positive-frequency operator of the electric field of a quasiplane and a quasimonochromatic wave travelling in the +zdirection, where ρ is the position vector in the transverse plane of the wave. This operator can be written in terms of space- and time-dependent photon annihilation ˆ ρ, t) and aˆ † (z, ρ, t) as and creation operators a(z, Eˆ (+) (z, ρ, t) = i ω0 ˆ ρ, t). exp[i(k0 z − ω0 t)]a(z, 20 c Modes of Universe and Paraxial Quantum Propagation ˆ ρ, t) Here ω0 is the carrier frequency of the wave and k0 is its wave number. But a(z, and aˆ † (z, ρ, t) are not the standard-modal annihilation and creation operators. They obey the commutation relations ˆ ˆ ρ, t), aˆ † (z, ρ , t )] = δ(ρ − ρ )δ(t − t )1, [a(z, ˆ ρ, t), a(z, ˆ ρ , t )] = 0ˆ [a(z, ˆ ρ, t) determines the and are normalized so that the mean value aˆ † (z, ρ, t)a(z, mean photon-flux density in photons per cm2 per second at point ρ and time t. The quantum theory of photodetection provides the following expressions for ˆ t) , and its space–time the mean value of the photocurrent density operator i(ρ, ˆ t), δ i(ρ ˆ , t )}+ : correlation function 12 {δ i(ρ, ) ˆ t) = η Iˆ (ρ, t) , i(ρ, (3.395) * 1 ˆ ˆ , t )}+ = i(ρ, ˆ t) δ(ρ − ρ )δ(t − t ) {δ i(ρ, t), δ i(ρ 2 ˆ t) i(ρ ˆ , t ) . (3.396) + η2 : Iˆ (ρ, t) Iˆ (ρ , t ) : − i(ρ, ˆ ρ, t) is the photon-flux density operator. Here Iˆ (ρ, t) = aˆ † (z, ρ, t)a(z, The second contribution to the correlation function of the photocurrent density operator is proportional to the normal- and time-ordered space–time intensity correlation function G (2) (ρ, t; ρ , t ) = : Iˆ (ρ, t) Iˆ (ρ , t ) : . This correlation function is proportional to the probability of detecting a photon at time t and at the spatial point ρ under the condition that the previous detection happened at time t and point ρ. When the intensity of light is stationary in time and uniform in the transverse area of the light beam, this correlation function depends only on the time difference τ = t − t and the spatial difference ξ = ρ − ρ between two points, G (2) (ρ, t; ρ , t ) = G (2) (ξ , τ ). One can define the degree of second-order spatio-temporal coherence as g (2) (ξ , τ ) = : Iˆ (ρ, t) Iˆ (ρ + ξ , t + τ ) : . Iˆ (ρ, t) 2 If the correlation function g (2) (ξ , τ ) has its maximum at ξ = 0 and at τ = 0, g (0, 0) > g (2) (ξ , τ ), it is natural to speak of the bunching in space–time. Analogously, if g (2) (0, 0) < g (2) (ξ , τ ), one may speak of the antibunching in space–time. The antibunching in space–time is a purely quantum-mechanical phenomenon. Indeed, it follows from the Schwarz inequality that the correlation function g (2) (ξ , τ ) of a classical electromagnetic field stationary in time and uniform in space must satisfy (2) g (2) (0, 0) ≥ g (2) (ξ , τ ) 3 Macroscopic Theories and Their Applications for arbitrary ξ and τ . Since the antibunching in space–time means an exactly opposite inequality, it cannot be explained within the framework of semiclassical theory, i.e. when the light field is treated as a c-number. The photocurrent noise spectrum is defined as a Fourier transform of the photocurrent correlation function. As follows from relation (3.396), for a light field stationary in time and uniform in the transverse plane, the correlation function of ˆ t), δ i(ρ ˆ , t )}+ depends only on the time the photocurrent density operator 12 {δ i(ρ, difference τ and the spatial difference ξ . The noise spectrum of the photocurrent density operator is the spatio-temporal Fourier transform of this correlation function, * ) 1 ˆ A ˆ t)}+ ˆ 2 (q, Ω) = {δ i(0, 0), δ i(ρ, (δ i) 2 × exp[i(Ωt − q · ρ)] dt d2 ρ. Using the photodetection formula (3.396), we can write the noise spectrum A ˆ (δ i)2 (q, Ω) as follows A ˆ 2 (q, Ω) = i(ρ, ˆ t) + G ˜ (2) (q, Ω) − i(ρ, ˆ t) 2 δ(Ω)δ(q). (δ i) Here the first contribution comes from the shot-noise term in relation (3.396), the second one is the spatio-temporal Fourier transform of the intensity correlation function, ˜ (2) (q, Ω) = exp[i(Ωt − q · ρ)]G (2) (ρ, t) dt d2 ρ, (3.402) G and the last one from the space–time-independent product of two mean photocurrent densities. One can show that in semiclassical theory the sum of the second and third contributions is always nonnegative. Therefore the semiclassical minimum value of the photocurrent density noise is given by the shot noise in space–time, A ˆ t) . ˆ 2 (q, Ω) = i(ρ, (δ i) This formula is a generalization of the standard quantum limit for a single-mode ˆ field described by the photon annihilation and creation operators a(t) and aˆ † (t), respectively, * ) 1 ˆ A ˆ + exp(iΩt) dt = i(t) , ˆ ˆ 2 (Ω) = {δ i(0), δ i(t)} (3.404) (δ i) 2 ˆ ˆ {·, ·}+ indicates an anticommutator, from the temporal where i(t) = aˆ † (t)a(t), domain into the space–time one. In quantum theory the sum of the second and third terms in relation (3.401) can be negative and compensate partially or even completely for the shot-noise contribution for some frequencies Ω and spatial frequencies q. Modes of Universe and Paraxial Quantum Propagation An opinion of many workers in quantum optics is expressed in Kolobov (1999). The difficulties associated with the quantum-mechanical description of field propagation in free space or a nonlinear medium lie in the usual procedure of field quantization. Evolution of the quantized field due, for instance, to the interaction with an atomic medium is described in terms of the Heisenberg equations for annihilation and creation operators, i.e. as purely temporal evolution. Such a description of field dynamics is not well suited to the problem of field propagation in free space or a medium. At the start of such a study, it would be more appropriate to have a quantum-mechanical analogue of the classical wave-optical propagation and diffraction theory. Such a description for transparent nonlinear media when the field interaction with atoms is described in terms of an effective Hamiltonian is much appreciated. But the simpler question of quantized field propagation in free space is considered first. Let Eˆ (+) (r, t), where r = (x, y, z) is the spatial coordinate, be the positivefrequency operator of the electric field in a vacuum. In the continuum limit, this operator is written in the form of the modal decomposition Eˆ (r, t) = i 1 20 (2π)3 ˆ ω(k)a(k) × exp[i(k · r − ω(k)t)] d3 k. ˆ Here a(k) and aˆ † (k) are the photon annihilation and creation operators of a spatial mode with the wave vector k; the frequency ω(k) is given by the free-space ˆ dispersion relation ω(k) = kc, with k = |k|. The operators a(k) and aˆ † (k) obey the canonical commutation relations ˆ [a(k), ˆ ˆ ˆ )] = 0. ˆ a(k [a(k), aˆ † (k )] = (2π)3 δ(k − k )1, The factor (2π )3 is not usual, but such particularities appear consistently in the review article. Equation (3.405) determines the Heisenberg field operator Eˆ (+) (r, t) in all points r and t of the space–time as a solution of the initial-value problem, ˆ i.e. through the modal operators a(k) and aˆ † (k) given at time t = 0 as Schr¨odinger operators. For a complete quantum-mechanical description, we have to specify the density matrix of the field for the continuum set of modes k. In the Heisenberg representation (3.405), this density matrix remains constant as time evolves. For a wave travelling in the +z-direction, we would like to have a formula that determines the field operator at any point ρ in the transverse plane at coordinate z given the field operator over the plane z = 0. On comparison of relation (3.393) with relation (3.405), one sees that 1 ˆ ρ, t) = a(z, (2π)3 ω(k) ˆ a(k) k0 × exp{i[q · ρ + (k z − k0 )z − (ω(k) − ω0 )t]} d2 q dk z . 3 Macroscopic Theories and Their Applications The normalization of these operators is such that the free Hamiltonian of the electromagnetic field can be written as ˆ 0 = ω0 H c ˆ ρ, t) d3 r. aˆ † (z, ρ, t)a(z, The commutation relations (3.394) are not proved in this connection, but equal-time ones are derived, ˆ ˜ − r )1, ˆ ρ, t), aˆ † (z , ρ , t)] = δ(r [a(z, ˜δ(r − r ) ≈ 1 − i ∂ − 1 ∇⊥2 δ(r − r ), k0 ∂z 2k02 with ∇⊥2 being the transverse Laplacian with respect to ρ. Here we have reproduced only the expression derived in the quasimonochromatic and paraxial approximations from the literature. In this approximation, the equation for the slowly varying ˆ ρ, t) reads operator a(z, ∂ i 2 ∂ ˆ ρ, t) = −c + c ˆ ρ, t). a(z, ∇⊥ a(z, ∂t ∂z 2k0 An unpublished result of Sokolov, which is related to the equation for propagation of a quantized field in a nonlinear parametric medium, is reproduced in Kolobov (1999). In part, it is based on the book of Klyshko (1988). The positive-frequency operator of a quantized electric field in a transparent dielectric medium can be written in a form similar to that for a vacuum (Klyshko 1988), Eˆ (r, t) = i 1 20 (2π)3 + ˆ ξ (k) ω(k)a(k) × exp[i(k · r − ω(k)t)] d3 k. This differs from relation (3.405) in the factor ξ (k), which describes the strength of the field in the medium as compared to that in a vacuum. This constant is ξ 2 (k) = u(k)v(k) . c2 cos ρ(k) c is the phase velocity of light in the medium, u(k) = ∂ω(k) is the Here v(k) = n(k) ∂k group velocity, and ρ(k) is the so-called generalized anisotropy angle, that is, the angle between the electric field and the induction. Modes of Universe and Paraxial Quantum Propagation The fact that the electromagnetic field is a vector field is not emphasized in Kolobov (1999), but it is mentioned in respect of the book by Klyshko (1988). Relation (3.412) yet neglects the use of and summation over the appropriate parameter ν. The dispersion relation ω(k) is not single valued, but has at least two branches. These branches should be distinguished by a parameter μ. The annihilation operators correspond to these branches. Relation (3.412) neglects the use of and summation over the parameter μ too. ˆ ρ, t) of Using this notation one can introduce the slowly varying operator a(z, the quantized field in the medium (cf. equation (3.393)), ω0 ˆ ρ, t). (3.414) exp[i(k1 z − ω0 t)]a(z, Eˆ (+) (z, ρ, t) = iξ 20 c Here we have denoted by k1 the wave number of the wave in the medium. The slowly ˆ ρ, t) is given by an equation identical to relation (3.407), varying operator a(z, 1 ω(k) ˆ ρ, t) = ˆ a(z, a(k) (2π )3 k0 × exp{i[q · ρ + (k z − k0 )z − (ω(k) − ω0 )t]} d2 q dk z , but here ω(k) means a dispersion relation for the medium. One will describe the parametric interaction in the medium in terms of an effective Hamiltonian. It is assumed that a χ (2) nonlinear parametric medium fills a volume V . The medium is illuminated by a monochromatic plane wave, the pump. The pump wave propagates in the +z-direction and has the frequency ωp and wave number kp , Eˆ p (+) (z, ρ, t) = E p exp[i(k p z − ω p t)]. We choose the frequency ωp of the pump wave in the form ωp = 2ω0 and consider the amplitude E p as a c-number, i.e. we neglect the quantum fluctuations of the pump wave. Under usual assumptions the parametric interaction can be described by the following effective Hamiltonian ˆ int = i n 0 g exp[i(kp − 2k1 )z][aˆ † (z, ρ, t)]2 d3 r + H.c. (3.417) H c V Here n 0 gives the density of active atoms in the parametric medium, and g is the strength constant of the parametric interaction proportional to the amplitude E p of the pump wave and the susceptibility constant χ (2) of the medium. ˆ ρ, t) in the parametThe evolution of the slowly varying amplitude operator a(z, ric medium is described by the following equation: i ˆ ∂ ˆ ˆ ρ, t) = iω0 a(z, ˆ ρ, t) + [ H ˆ ρ, t)]. a(z, 0 + Hint , a(z, ∂t 3 Macroscopic Theories and Their Applications ˆ 0 is the free-field Hamiltonian in the medium. In terms of a(z, ˆ ρ, t) and Here H † aˆ (z, ρ, t) it is given by relation (3.408). One introduces the Fourier transform of ˆ ρ, t), the space–time photon annihilation operator a(z, B ˆ q, Ω) = ˆ ρ, t) exp(iΩt) exp(−isz) exp(−iq · ρ) dt d2 ρ dz a(s, a(z, (3.419) ˜ˆ q, Ω) dz. e−isz a(z, ˜ˆ q, Ω) with the aid of a new operator ˆ (z, q, Ω), We express the Fourier transform a(z, ˜ˆ q, Ω) = ˆ (z, q, Ω) exp{i[k z (q, Ω) − k1 ]z}, a(z, where k z (q, Ω) = k 2 (ω0 + Ω) − q 2 , with q = |q| a z-component of the wave vector with frequency ω0 + Ω and spatial ˆ q, Ω) it holds that frequency q. For B ˆ (s, q, Ω) similarly defined as B a(s, B ˆ + k z (q, Ω) − k1 , q, Ω). ˆ (s, q, Ω) = B a(s One introduces the mismatch function Δ(q, Ω), Δ(q, Ω) = k z (q, Ω) + k z (−q, −Ω) − kp . 1) One lets u = ∂ω(k mean the group velocity of the wave in the crystal. On appropri∂k1 ate derivations, Kolobov (1999) presents the equation of propagation for the operator ˆ (z, q, Ω), ∂ ˆ (z, q, Ω) = σ ˆ † (z, −q, −Ω) exp[iΔ(q, Ω)z], ∂z where σ = 2nu0 g is the coupling constant of the parametric interaction. When an active nonlinear medium is placed in a resonator, a description may employ discrete transverse modes of the cavity. Lugiato and Gatti (1993), Gatti and Lugiato (1995), and Lugiato and Marzoli (1995) have adopted this approach. Let fl (ρ) mean these eigenmodes. The set of functions fl (ρ) satisfies both the condition of orthonormality (3.425) fl∗ (ρ) fl (ρ) d2 ρ = δll and completeness l fl∗ (ρ) fl (ρ ) = δ(ρ − ρ ). Modes of Universe and Paraxial Quantum Propagation ˆ One can expand the slowly varying field operator a(ρ, t) over the eigenmodes fl (ρ), ˆ a(ρ, t) = fl (ρ)aˆ l (t), where aˆ l (t) are operator-valued expansion coefficients that have the meaning of photon annihilation operators for the lth mode. From the commutation relations † (3.394) together with relation (3.425) it is easy to see that aˆ l (t) and aˆ l (t) obey the commutation relation † ˆ [aˆ l (t), aˆ l (t )] = δll δ(t − t )1. It is noted that the derivation is valid for the field operators outside the cavity. The Fourier transforms B aˆ l (Ω) are defined as B aˆ l (Ω) = ∞ −∞ exp(iΩt)aˆ l (t) dt. Noise spectrum of the photocurrent density for the lth mode as an analogue of A ˆ 2 (q, Ω) is introduced in Kolobov (1999). It is not assumed the noise spectrum (δ i) that the photocurrent density is uniform in space, but that it is stationary in time and the photocurrent density fluctuations for different eigenmodes are uncorrelated. We can express the space–time correlation function of the photocurrent density A ˆ 2 l (Ω) of the individual eigenmodes operator (3.396) in terms of the noise spectra (δ i) fl (ρ) of the cavity, ) * 1 ˆ ˆ , t )}+ = fl (ρ) fl (ρ ) {δ i(ρ, t), δ i(ρ 2 l ∞ 1 A ˆ 2 l (Ω) exp[−iΩ(t − t )] dΩ. (δ i) × 2π −∞ (i) Generation of multimode squeezed states of light The generation of multimode squeezed states of light by a travelling-wave optical parametric amplifier was described by Kolobov and Sokolov (1989a,b). As a result of the parametric down-conversion, a pump photon ωp splits into signal and idler photons, with frequencies ω0 + Ω and ω0 − Ω, and wave vectors k(q, Ω) and k(−q, −Ω). Their transverse components are ±q and their z-components are k z (q, Ω) and k z (−q, −Ω), respectively, by relation (3.421). The evolution of the slowly varying operator ˆ (z, q, Ω) inside the crystal is described by equation (3.424). Solving this equation and respecting relation (3.420) between the operators ˜ˆ q, Ω) and ˆ (z, q, Ω), one arrives at the following transformation a(z, ˜ˆ q, Ω) = U (q, Ω)a(0, ˜ˆ q, Ω) + V (q, Ω)a˜ˆ † (0, −q, −Ω), a(l, 3 Macroscopic Theories and Their Applications with coefficients U (q, Ω) and V (q, Ω) equal to Δ(q, Ω) l U (q, Ω) = exp i k z (q, Ω) − k1 − 2 iΔ(q, Ω) × cosh(Γl) + sinh(Γl) , 2Γ Δ(q, Ω) l V (q, Ω) = exp i k z (q, Ω) − k1 − 2 σ × sinh(Γl), Γ where |σ |2 − [Δ(q, Ω)]2 . 4 The functions U (q, Ω) and V (q, Ω) have the property |U (q, Ω)|2 − |V (q, Ω)|2 = 1. ˜ˆ q, Ω) and a˜ˆ † (0, q, Ω) obey the free-field At the input to the crystal, operators a(0, commutation relation ˆ ˜ˆ q, Ω), a˜ˆ † (0, q , Ω )] = (2π)3 δ(q − q )δ(Ω − Ω )1. [a(0, The broad-band squeezing in a three-wave interaction was discussed by Caves and Crouch (1987) and in a four-wave interaction by Levenson et al. (1985) in the case of co-propagation and by Yurke (1985) for counter-propagation. Equation (3.431) involves the spatial frequency q. It is assumed that, along with the pump wave, a monochromatic plane wave of frequency ω0 is incident normal to the input surface of the crystal. Upon leaving the crystal, this wave will serve as a local oscillator wave with the complex amplitude β, which first enters as a wave with complex amplitude α and q = 0. Relation (3.431) is used, β = |β| exp(iϕβ ) = αU (0, 0) + α ∗ V (0, 0). The type of noise modulation of the resultant field in space–time is determined by the angle θ (q, Ω) = ψ(l, q, Ω) − ϕβ . Phase modulation predominates for θ (q, Ω) = ± π2 and amplitude modulation for θ (q, Ω) = 0, π. The mean of the photocurrent density operator and its noise spectrum is found from relations (3.395), (3.396), and (3.400). The mean of the photocurrent density operator has the forms ˆ l + i ˆ s, ˆ = η|β|2 + η |V (q, Ω)|2 d2 q dΩ ≡ i (3.438) i Modes of Universe and Paraxial Quantum Propagation where the subscript l refers to the local oscillator field and the subscript s indicates the spontaneous parametric down-conversion. One introduces the function δ(q, Ω) = |V (q, Ω)|2 . ˆ s can be written as The mean of the photocurrent density operator i δs , Tc Sc δ(q, Ω) d2 q dΩ ˆ s=η i where 1 δs = 2 qc Ωc is the degeneracy parameter for spontaneous parametric down-conversion (Mandel and Wolf 1995), Tc = 2π is its coherence time, Sc = ( 2π )2 the coherence area, and Ωc qc Ωc and qc the widths of the frequency and spatial frequency spectra of spontaneous parametric down-conversion. The noise spectrum of the photocurrent density has the form A ˆ 2 (q, Ω) = i ˆ + 2η2 |β|2 δ(q, Ω) + Re{exp(−2iϕβ )g(q, Ω)} (δ i) η2 [δ(q , Ω )δ(q − q , Ω − Ω ) + (2π)3 + g ∗ (q , Ω )g(q − q , Ω − Ω )] d2 q dΩ , (3.442) where g(q, Ω) = U (q, Ω)V (−q, −Ω). Under homodyne detection, the down-conversion waves (q, Ω) and (−q, −Ω) modulate the local oscillator wave in space and time. With any angle ψ(z, q, Ω) for a while, slow quadrature components a˜ˆ μλ (z, q, Ω), λ = c, s, are introduced with the property a˜ˆ 1c (z, q, Ω) + ia˜ˆ 1s (z, q, Ω) ˆ˜ q, Ω) + exp[iψ(z, q, Ω)]a˜ˆ † (z, −q, Ω), = exp[−iψ(z, q, Ω)]a(z, a˜ˆ 2c (z, q, Ω) + ia˜ˆ 2s (z, q, Ω) ˜ˆ q, Ω) − exp[iψ(z, q, Ω)]a˜ˆ † (z, −q, Ω) . = −i exp[−iψ(z, q, Ω)]a(z, In other words, “complex quadrature components” are first defined. When the complex components are used, transformation (3.431) simplifies to the form a˜ˆ μc (l, q, Ω) + ia˜ˆ μs (l, q, Ω) = exp[iκ(q, Ω)] exp[±r (q, Ω)][a˜ˆ μc (0, q, Ω) + ia˜ˆ μs (0, q, Ω)], 3 Macroscopic Theories and Their Applications where + corresponds to the component with μ = 1 and − to the component with μ = 2. The components a˜ˆ μλ (0, q, Ω) at the input surface to the crystal are defined in the coordinate system with ψ(0, q, Ω) and the components a˜ˆ μλ (l, q, Ω) at the output surface of the crystal are defined with ψ(l, q, Ω). These angles are 1 arg[V (q, Ω)U −1 (q, Ω)], 2 1 ψ(l, q, Ω) = arg[U (q, Ω)V (−q, −Ω)]. 2 ψ (0, q, Ω) = In relation (3.446), κ(q, Ω) and r (q, Ω) are two other squeezing parameters, 1 arg[U (q, Ω)U −1 (q, Ω)], 2 exp[±r (q, Ω)] = |U (q, Ω)| ± |V (−q, −Ω)|. κ(q, Ω) = Leaving out the integral term in the noise spectrum of the photocurrent density (3.442), one can rewrite it in the form A ˆ 2 (q, Ω) = i ˆ 1 − η + η cos2 [θ (q, Ω)] exp[2r (q, Ω)] (δ i) + sin2 [θ (q, Ω)] exp[−2r (q, Ω)] . (3.449) Maximum squeezing occurs at frequencies qm , Ωm , which fulfil the condition Δ(qm , Ωm ) = 0. They are said to belong to such a phase-matching surface. To reduce shot noise to the highest extent, one chooses a complex amplitude of the local oscillator wave with θ (qm , Ωm ) = ± π2 . If the phase-matching condition is not perfectly met, the phase θ (q, Ω) is affected to the first order in Δ(q, Ω)lamp and the squeezing parameter is not yet influenced to the first order. In Kolobov (1999), the case of the frequency and angle-degenerate phase matching, Δ(0, 0) = 0, is considered. In the case of degenerate phase matching and < 0 one infers that in the region of frequencies Ω < Ωm and spatial frequencies kΩ q < qm the noise of the photocurrent density operator is reduced below the shotnoise level. In space–time language this can be said as follows. The frequencies Ωm and qm determine the minimum time Tm and the minimum area of photodetector Sm , which are necessary for reducing fluctuations in the number of photoelectrons below the Poissonian limit. In the case of nondegenerate phase matching in a crystal, when Δ(0, 0) > 0, one must pay attention to nonzero carrier frequencies q and Ω. In fact, Δ(q, Ω) ≈ 0, when (a) q = 0, Ω = 0, (b) q = 0, Ω = 0, and (c) q = 0, Ω = 0. Three kinds of measurement can be distinguished based on the three types of phase matching. Yuen and Shapiro (1979) were the first to propose a degenerate mixing process as a possible source for squeezed light. A four-wave mixer can be set up either in a backward geometry, as proposed by Yuen and Shapiro (1979), or in a forward geometry, according to Kumar and Shapiro (1984). A backward four-wave mixer can produce multimode squeezed light with a much larger spatial bandwidth than a forward four-wave mixer and another scheme. Modes of Universe and Paraxial Quantum Propagation The exposition is restricted to the backward four-wave mixing. The process occurs in a transparent χ (3) nonlinear medium. It is assumed that the medium has the form of a plane slab the thickness of which (distance between two surfaces parallel to the ρ plane) is equal to l. Two counterpropagating plane monochromatic pump waves E 1 and E 2 of angular frequency ω0 and wave vectors k1 and k2 , respectively, illuminate the slab at a small angle to the z-axis. A quasiplane and quasimonochromatic probe wave of carrier frequency ω0 enters the medium from the left and propagates in the +z-direction. In the nonlinear interaction between the two pump waves and the probe wave, a phase conjugate wave is generated in the medium that propagates in the opposite direction to the probe wave (Fisher 1983). One describes the probe and conjugate waves by two corresponding slowly varying operators ˆp (z, ρ, t) and ˆc (z, ρ, t). Let kμ (q, Ω), μ = p, c, mean the wave vectors of the probe and conjugate waves, respectively. One introduces the Fourier transforms of these space–time operators, ˜ˆμ (z, q, Ω), μ = p, c, as follows: ˜ˆμ (z, q, Ω) = ˆμ (z, ρ, t) exp[i(Ωt − q · ρ)] dt d2 ρ. These operators evolve in the nonlinear medium according to the equations ∂ ˜ † ˆ p (z, q, Ω) = −iκ ˜ˆ c (z, −q, −Ω) exp[−iΔ(q, Ω)z], ∂z ∂ ˜ † ˆc (z, q, Ω) = iκ ˜ˆ p (z, −q, −Ω) exp[−iΔ(q, Ω)z]. ∂z (3.451) (3.452) Here κ is a coupling constant proportional to the product of the two pump wave amplitudes and to the nonlinear susceptibility χ (3) of the medium, Δ(q, Ω) is a phase-mismatch function given by Δ(q, Ω) = kpz (q, Ω) + kcz (−q, −Ω) − k1z − k2z , with kp,cz (q, Ω) being the projections of the probe and conjugate wave vectors onto the positive z-direction and k1,2z the corresponding projections for the pump waves. The solution of equations (3.451) and (3.452) with the boundary conditions ˜ˆp (z = 0, q, Ω) = ˜ˆp (0, q, Ω) and ˜ˆc (z = l, q, Ω) = ˜ˆc (l, q, Ω) may follow the classical one (Fisher 1983). The input–output transformation, which yet differs from the solution by the inclusion of the incoupling and outcoupling beam splitters, is presented in Kolobov (1999), † aˆ out (q, Ω) ∝ U (q, Ω)aˆ in (q, Ω) + V (q, Ω)aˆ in (−q, −Ω), † V (q, Ω)bˆ in (−q, −Ω). bˆ out (q, Ω) ∝ U (q, Ω)bˆ in (q, Ω) + Notably one obtains two independent processes of multimode squeezing. Any of them can be compared with the optical parametric amplifier. 3 Macroscopic Theories and Their Applications The investigation of the difference is concentrated on comparison between the phase-mismatch functions Δ(q, Ω). (Here Δ ≡ ΔOPA , ΔFFWM , ΔBFWM .) This function depends on the spatial frequency in the optical parametric amplification and the forward four-wave mixing and does not depend on it in the backward fourwave/mixing. From these properties one determines “spatial” squeezing bandwidth q = kl1 and q = ∞ in the paraxial approximation. The frequency dependence can be used to filter the probe signal. One is led to the idea of a “realistic” medium and its “quantum nature”. Obviously, a nonlinearity of the equations is meant, but also a better quantization desired, which just as we saw at the beginning has not yet been employed. Another source is based on a cavity, and it still can generate multimode squeezed states. It is a subthreshold optical parametric oscillator, concretely this scheme in a cavity with spherical mirrors (Lugiato and Marzoli 1995). Here we are provided by the literature with another example, in which situation the paraxial approximation has been used. Even though here a cavity is investigated, not a travelling wave, the cavity modes still seem to have been determined in the paraxial approximation. For a cavity-based geometry a more natural language for the description of multimode squeezing is that of discrete eigenmodes of the resonator. In the case of the cavity with spherical mirrors, such a discrete eigenset is given by the Gauss– Laguerre modes cos(lφ) for i = 1, ˜ (3.456) f pli (r, φ) = f pl (r ) × sin(lφ) for i = 2, p! 2r 2 l 2r 2 r2 ˜f pl (r ) = √ 2 Lp exp − 2 , (3.457) w2 w 2δl,0 π w 2 ( p + l)! w 2 where w is the waist of the beam + and p, l = 0, 1, 2, , . . . are the radial and angular indices, respectively, r = x 2 + y 2 is the radial and φ is the angular variable. The functions L lp are the Laguerre polynomials. The functions f pli (r, φ) satisfy the conditions of orthonormality, 2π ∞ f pli (r, φ) f p l i (r, φ)r dr dφ = δ pp δll δii . (3.458) 0 The eigenfrequencies of these modes are given by ω pl = ω00 + (2 p + l)ζ, where ω00 is the lowest eigenfrequency of the resonator and the parameter ζ depends on the curvature of mirrors and the distance between them (Yariv 1989). The source is described by the master equation, 1 ˆ ∂ ˆˆ ρ, ˆ + ρˆ = [ H Λ int , ρ] pli ˆ ∂t i i=1 p,l 2 Modes of Universe and Paraxial Quantum Propagation ˆˆ ρ, where the term Λ pli ˆ ˆˆ ρˆ = γ 2aˆ ρˆ aˆ † − aˆ † aˆ ρˆ − ρˆ aˆ † aˆ Λ pli pli pli pli pli pli pli describes the damping of the mode pli due to cavity decay through the outcoupling mirror with the rate γ . The interaction Hamiltonian Hint is given by Lugiato and Marzoli (1995), 2π ∞ †2 ˆ (r, φ) − A ˆ 2 (r, φ) r dr dφ, ˆ int = i γ Ap A (3.462) H 2 0 0 where Ap is the coupling constant proportional to the nonlinear susceptibility χ (2) of the medium and the amplitude of the pump wave. Instead of solving the master equation (3.460), we can write a set of independent Langevin equations (Walls and Milburn 1994) for the annihilation and creation † operators aˆ pli (t) and aˆ pli (t) inside the cavity, † aˆ pli (t) = −γ [(1 + iΔ pl )aˆ pli (t) − Ap aˆ pli (t)] + 2γ cˆ pli (t), where Δ pl = ω pl − ωs , γ with ω pl and ωs the eigenfrequencies of the eigenmodes of the resonator and the frequencies of signal photons, respectively. Every mode is damped and the rate constant for each of the modes is the same. Such a simplification should still be explained. The method of description seems to be known in quantum optics. The † operators cˆ pli (t) and cˆ pli (t) correspond to the operator-valued Langevin forces and describe the vacuum fluctuations entering the cavity through the outcoupling mirrors. These operators obey the commutation relations † ˆ [ˆc pli (t), cˆ pli (t )] = δ pp δll δii δ (t − t )1. In relation (3.462), a coupling constant is expressed as the product γ Ap , from which we conclude that 0 < Ap < 1. This property says that a subthreshold oscillator is being investigated. Kolobov (1999) refers to Collet and Gardiner (1984) for the input–output relations for the field operators bˆ pli (t) in the wave outgoing from the cavity, cˆ pli (t) of the vacuum fluctuations entering it, and aˆ pli (t) inside the cavity, + bˆ pli (t) = 2γ aˆ pli (t) − cˆ pli (t). (3.466) Using the Fourier transform B gˆ pli (Ω) = ∞ −∞ exp(iΩt)gˆ pli (t) dt, g = a, b, c, 3 Macroscopic Theories and Their Applications to equations (3.463), we arrive at the following squeezing transformation between the Fourier transforms of the incoming and outgoing operators: † B cˆ pli (Ω) + V pl (Ω)B cˆ pli (−Ω), bˆ pli (Ω) = U pl (Ω)B with the coefficients U pl (Ω) = V pl (Ω) = [1 − iΔ pl (−Ω)][1 − iΔ pl (Ω)] + A2p [1 + iΔ pl (Ω)][1 − iΔ pl (−Ω)] − A2p 2Ap , [1 + iΔ pl (Ω)][1 − iΔ pl (−Ω)] − A2p , which is a discrete equivalent of the multimode squeezwith Δ pl (±Ω) = Δ pl ∓ Ω γ ing transformation (3.431). The calculation of the photocurrent noise spectrum is not included, but the following relation is presented (Lugiato and Marzoli 1995): ) * 1 ˆ ˜f pl (r ) ˜f pl (r ) cos[l(φ − φ )] ˆ , t )}+ = {δ i(ρ, t), δ i(ρ 2 p,l ∞ 1 A ˆ 2 pl (Ω) exp[−iΩ(t − t )] dΩ, (3.470) (δ i) × 2π −∞ which is an analogue of relation (3.430). A relation holds A ˆ 2 pl (Ω) = i(ρ, ˆ t) 1 + (δ i) 4Ap ˜ 2 )2 + 4Ω ˜2 (1 + Δ2pl − A2p − Ω ˜ 2 − 2iΔ pl )}] , × [2Ap + Re{exp(−2iϕβ )(1 − Δ2pl + A2p + Ω (3.471) ˜ = Ω is the dimensionless frewhere ϕβ is the phase of the local oscillator and Ω γ quency and η = 1 (Collett and Walls 1985, Savage and Walls 1987). (ii) Free propagation and diffraction of multimode squeezed light With respect to the free propagation and diffraction of multimode squeezed light it is shown that propagation in free space in general deteriorates the resolving power of low-noise measurements with squeezed light. A lens allows one to compensate for this deterioration and even further to improve the resolving power. We will assume that the plane of photodetection lies at a distance L from the exit plane of the nonlinear crystal and is parallel to it. The slowly varying operators ˜ˆ q, Ω) at the exit plane of the crystal and a(l ˜ˆ + L , q, Ω) at the photodetection a(l, plane are for the free propagation related as (cf. equation (3.420)) follows: Modes of Universe and Paraxial Quantum Propagation ˜ˆ + L , q, Ω) = exp{i[k z(0) (q, Ω) − k0 ]L}a(l, ˜ˆ q, Ω), a(l where k z(0) (q, Ω) is the z-component of the wave vector in free space. Along with the free propagation one is interested in the dependence of the field operator at the plane of photodetection on that at the input to the nonlinear crystal, ˜ˆ + L , q, Ω) = U˜ (q, Ω)a(0, ˜ˆ q, Ω) + V˜ (q, Ω)a˜ˆ † (0, −q, −Ω), a(l with the coefficients U˜ (q, Ω) = exp{i[k z(0) (q, Ω) − k0 ]L}U (q, Ω), and the like for V˜ (q, Ω). It is a simple generalization of the above description. New quantities are provided with a tilde. Relation L θ˜ (q, Ω) = θ(q, Ω) + [k z(0) (q, Ω) + k z(0) (−q, −Ω) − 2k0 ] 2 q2 L ≈ θ(q, Ω) − , 2k0 where a paraxial and quasimonochromatic approximation is assumed, says that the orientation angle (i.e. phase) of the squeezing changes more rapidly in dependence on the spatial frequency than on the output from the crystal. The minimum area lamp Sm of low-noise detection is proportional to k + 2L , where l amp is the amplification 1 length. The increase is related to the diffraction. The resolving power of the lownoise observation has decreased. The deterioration is reversible. Even the phase shifts produced during wave propagation inside the nonlinear crystal can be compensated for. For a lens of focal length f , provided that the object plane has the position −2 f relative to the lens and the image plane has the position 2 f relative to the lens, the optical imaging is represented by the field operators 1 ρ2 k0 ρ 2 ˆ + 4 f, ρ, t) = exp −i aˆ z, −ρ, t − 4f + c . a(z 2f c 2f A ˆ 2 (q, Ω) has been conserved. From this relation it follows that the noise spectrum (δ i) From the results of the analyses performed it is natural to choose z = l, i.e. the output plane of the crystal. Concerns with correct quantization call attention to the proposal of imaging some plane inside the crystal onto the detection plane. This imaging is understood as a general choice z = l + L, where L is negative. It suffices to choose L = −lamp k0 , 2kl 3 Macroscopic Theories and Their Applications for the phase θ˜ (q, Ω) in the vicinity of the matching surface to become independent of spatial frequency. Thus geometrical imaging of the plane inside the crystal at the distance L given by (3.477) onto the photodetection plane broadens the range of spatial frequencies at which one has a noise reduction below the shot-noise level. Kolobov (1999) remarks on what follows. An improvement of the frequency behaviour of the noise spectrum can be achieved by inserting into the light beam a slab of a dispersive medium with wave number k (1) (Ω). The length of the slab will be L (1) = −lamp kΩ (1) 2kΩ (1) has the opposite sign to kΩ . if kΩ In order to assess physical possibilities for low-noise measurements, the photoelectron number collected by a pixel with the area Sd during the time interval Td is considered as an example. If it holds that Sd ≥ Sc and Td ≥ Tc , the result is independent of Sd and Td . For high quantum efficiency, η ≈ 1, the statistics of photoelectrons is sub-Poissonian, when squeezing is significant. Here we concede the efficiency of models simplified to several modes: The average number of photons necessary for a single low-noise measurement is given by a quantity, which we could obtain on choosing as “average” model simplified to two modes. Whereas the coherence time Tc limits the number of images, which can be transmitted in a time interval T , the coherence area Sc limits the number of modes on an illuminated spot of area S on the input to the nonlinear crystal, even though a statistical definition of the mode is peculiar. The scheme of homodyne detection is closely related to holographic measurements. (iii) Noiseless control of multimode squeezed light With respect to the noiseless control of multimode squeezed light, the detection of faint phase objects as proposed by Kolobov and Kumar (1993) is described. The sub-shot-noise microscopy utilizes a Mach–Zehnder interferometer. The outgoing light from the two ports of the second beam splitter is detected by two photodetector arrays. As a natural generalization of the analysis in Caves (1981), the minimum detectable spatially varying phase change is defined. The amplitude modulation in space is not advantageous for creation of optical images with a regular (sub-Poissonian) photon statistics. One example of nondestructive modulation in space is an opaque screen with apertures larger than the coherence area of squeezed light Sc . A number of references for interference mixing, which provides such nondestructive modulation in time are presented. For generalization and analogy in the case of spatial modulation, see Sokolov (1991a,b). (iv) Spatially noiseless optical amplification of images Modes of Universe and Paraxial Quantum Propagation Noiseless amplification has been defined in phase-sensitive amplifiers (see, for example, Caves (1982)) and it should be extended to the spatial domain. Many areas of physics would benefit from the possibility of noiseless amplification of faint optical images. Astronomy and microscopy come to mind. Kolobov (1999) refers to Kolobov and Lugiato (1995) for such a proposal. One considers a ring-cavity degenerate optical parametric amplifier and monochromatic images. In general, spectral bandwidth of images should be within the bandwidth of the cavity employed. The optical parametric amplifier is combined with input and output lenses, which perform the spatial Fourier transformation and broaden a narrow region of transverse vectors q, which is an analogue of the band of (temporal) frequencies. The exposition is self-contained. ˆ Let a(ρ, t) and aˆ † (ρ, t) mean the photon annihilation and creation operators in the object plane P1 and let eˆ (ρ, t) and eˆ † (ρ, t) stand for the photon annihilation and creation operators in the image plane P4 . Let bˆ in (ξ , t) and bˆ out (ξ , t) mean the field operators in the input and the output planes of the optical parametric oscillator, ˆ t) in the respectively. The operator bˆ in (ξ , t) is expressed through the operator a(ρ, object plane by the following transformation performed by the lens L 1 : 1 bˆ in (ξ , t) = λf 2π ˆ a(ρ, t) exp −i ξ · ρ d2 ρ, λf where f is the focal length of the lens and λ is the wavelength of the light. In a ˆ , t) of the cavity mode paraxial approximation, the slowly varying field operator b(ξ closest to resonance with input signal is described by the equation c ∂ ˆ ˆ , t) = −(κ + iΔ)b(ξ ˆ , t) b(ξ , t) − i ∇⊥2 b(ξ ∂t 2k √ + σ bˆ † (ξ , t) + 2κ bˆ in (ξ , t). Here κ is the cavity decay constant equal to κ= cT , 2L where T is the intensity transmission coefficient of the cavity outcoupling mirror, L is the perimeter of the cavity, and c is the light velocity in a vacuum; the detuning parameter is defined as Δ = ωc − ωs , where ωc is the longitudinal cavity frequency closest to the frequency ωs of the signal field. In (3.480) σ is the constant of parametric interaction proportional to the pump amplitude and k is the wave number of the travelling wave inside the cavity, . k = 2π λ 3 Macroscopic Theories and Their Applications The output field operator bˆ out (ξ , t) is the sum of two waves, one of which is reflected from and another transmitted through the outcoupling mirror of the cavity, bˆ out (ξ , t) = √ ˆ , t) − bˆ in (ξ , t). 2κ b(ξ To express the output field operators in terms of the input operators one takes the ˆ , t), spatio-temporal Fourier transform of b(ξ B ˆ Ω) = b(q, ˆ , t) exp[i(Ωt − q · ξ )] d2 ξ dt. b(ξ The spatio-temporal Fourier transforms of bˆ in (ξ , t) and bˆ out (ξ , t) are similar. The transformation of the field amplitude from the object plane P1 to the input plane P2 given by relation (3.479) is equivalent to the following relation between the spatio-temporal Fourier transform B bin (q, Ω) and the temporal Fourier transform ˜ˆ a(ρ, Ω) B ˆbin (q, Ω) = λ f a˜ˆ − λ f q, Ω , 2π where we have used ˜ˆ a(ρ, Ω) = ˆ a(ρ, t) exp(iΩt) dt. Since the lens L 2 has the same focal length as L 1 , we have an identical relationship between the Fourier transforms B bˆ out (q, Ω) and e˜ˆ (ρ, Ω) in the output plane P3 and in the image plane P4 , λf e˜ˆ (ρ, Ω) = λ f B bˆ out − ρ, Ω , 2π e˜ˆ (ρ, Ω) = eˆ (ρ, t) exp(iΩt) dt. (3.487) (3.488) It can be derived that ˜ˆ e˜ˆ (ρ, Ω) = u(ρ, Ω)a(ρ, Ω) + v(ρ, Ω)a˜ˆ † (−ρ, −Ω), with u(ρ, Ω) and v(ρ, Ω) given by [1 − iδ(ρ, Ω)][1 − iδ(ρ, −Ω)] + |g|2 , [1 + iδ(ρ, Ω)][1 − iδ(ρ, −Ω)] − |g|2 2g . v(ρ, Ω) = [1 + iδ(ρ, Ω)][1 − iδ(ρ, −Ω)] − |g|2 u(ρ, Ω) = Modes of Universe and Paraxial Quantum Propagation Here one has introduced the dimensionless coupling strength g of the parametric interaction, g= σ , κ and the dimensionless mismatch function δ(ρ, Ω), Δ Ω δ(ρ, Ω) = − + κ κ ρ ρ0 2 , with ρ0 defined as ρ0 = f λT . 2π L To ensure a linear amplification regime, the coupling strength g must be |g| < 1. In Kolobov (1999) it has been shown that multimode squeezed states of light come about as a natural generalization of single-mode squeezed states. They can be produced in experiments when just one spatial mode of the field is cut out by means of a high-Q optical cavity. Travelling-wave configurations are most convenient for the generation of multimode squeezed states. To observe them, one must employ a dense array of photodetectors. Many new physical phenomena are connected with multimode squeezing. Multimode squeezed states offer a few applications including optical imaging with sub-shot-noise sensitivity, sub-shot-noise microscopy, and noiseless amplification of optical images. There are some other phenomena related to multimode squeezing, such as the similarity of homodyne detection of multimode squeezed states to the scheme of optical holography, an application of these states to optical image recognition with photon-limited images (Morris 1989), a possibility to improve a quantum limit in optical resolution with the use of nonclassical light (den Dekker and van den Bos 1997). Multimode squeezed states can be applied in the field of optical pattern formation, which studies the spatial and spatio-temporal phenomena that arise in the structure of the electromagnetic field in the plane orthogonal to the direction of propagation. For instance, the filamentation of a laser beam initiated by quantum fluctuations of light in its transverse area (Nagasako et al. 1997, Lugiato et al. 1999). Bj¨ork et al. (2004) have shown that the use of entangled photon pairs in an imaging system can be simulated with a classically correlated source sometimes. They have considered two schemes with “bucket detection” of one of the photons. In contrast, entangled two-photon imaging may exhibit effects that cannot be mimicked by any classical source when bucket detection is not used (Strekalov et al. 1995). Caetano and Souto Ribeiro (2004) have investigated theoretically and experimentally the transfer of the angular spectrum of the pump beam to the down-converted beams. They have demonstrated that the image of a given object placed in the pump 3 Macroscopic Theories and Their Applications can be formed in the twin beams by manipulating the entangled angular spectrum and performing coincidence detection. Gatti et al. (2004) have analytically shown that it is possible to perform coherent imaging by using the classical correlation of two beams obtained by splitting thermal light. They have presented a formal analogy between two such classically correlated beams and two entangled beams produced by parametric down conversion. The classical beams can qualitatively reproduce all the imaging properties of the entangled beams. These classical beams are spatially correlated both in the near field and in the far field even though to an imperfect degree. Bache et al. (2004) have presented a theoretical study of ghost imaging which uses balanced homodyne detection to measure signal and idler fields arising from parametric down conversion. They have used a general model describing the threewave quantum interaction with respect to finite size and duration of the pump pulse. They have shown that the signal–idler correlations contain the full amplitude and phase information about an object located in the signal arm, both in the near-field (object image) and the far-field (object diffraction pattern) cases. One may pass from the far-field result to the near-field result by simply performing inverse Fourier transformation. The analytical results are confirmed by numerical simulations. Bennink et al. (2004) have reported two distinct experimental demonstrations of coincidence imaging. They have shown that uncertainties of distance and mean direction of two classical fields must obey an inequality. With the use of entangled photons they formed two images whose resolution had a product that was three times better than is possible according to classical diffraction theory. For the sake of comparison, a similar experiment was performed with light in a classical mixture of states (cf. Gatti et al. 2003). While the resolution of the image was good in the far field, the uncertainty product obeyed the classical inequality in the near field. Valencia et al. (2005) presented the first experimental demonstration of twophoton ghost imaging with a pseudothermal source. They have introduced the concepts of two-photon coherent and two-photon incoherent imaging. Similar to the case of entangled states, a two-photon Gaussian thin lens equation connects the object plane and the image plane. Specifically, the thermal source acts as a phase conjugated mirror. Altman et al. (2005) have probed the quantum image produced by parametric down-conversion with a pump beam carrying orbital angular momentum. With one detector fixed and the other scanning, the usual single-spot coincidence pattern is predicted (Monken et al. 1998) to split into two spots, which has been demonstrated. Mosset et al. (2005) have presented the first experimental demonstration of noiseless amplification of images that yielded spatially integrated intensity (of the photodetection process) for different lateral detector sizes. Achieving two-beam and single-beam conditions, they have compared phase-insensitive and phase-sensitive schemes with theory. Quantum imaging is a branch of quantum optics that investigates the ultimate performance limits of optical imaging imposed by quantum mechanics. The use of quantum-optical methods enables one to solve the problems of image formation, processing and detection with sensitivity and resolution which exceeds the limits Optical Nonlinearity and Renormalization of classical imaging. The most important theoretical and experimental results in quantum imaging can be found in Kolobov (2007). 3.4 Optical Nonlinearity and Renormalization Abram and Cohen (1994) mainly applied a travelling-wave formulation of the theory of quantum optics to the description of the self-phase modulation of a short coherent pulse of light. They seem to have been first to use a renormalization (Kubo 1962, Zinn-Justin 1989). The renormalized theory successfully describes the nonlinear chirp that the pulse undergoes in the course of its propagation and permits the calculation of the squeezing characteristics of self-phase modulation. The description of the propagation of a short coherent pulse of light inside a medium that exhibits an intensity-dependent refractive index (Kerr effect) has become relevant to optical fibre communications, all-optical switching, and optical logic gates (Agrawal 1989). Neglect of dispersion and the Raman and Brillouin scattering leads to the description of self-phase modulation. In classical theory it is derived that, in the course of its propagation, the pulse becomes chirped (i.e. different parts of the pulse acquire different central frequencies), which influences also its spectrum. Abram and Cohen (1994) have pointed out many difficulties in the investigation of the quantum noise properties of a light pulse undergoing self-phase modulation. The traditional cavity-based formalism truncates the mutual interaction among the spatial modes to a self-coupling of a single mode (or only a few modes) and cannot give a reasonable approximation to the frequency spectrum produced by self-phase modulation. In spite of the difficulties, papers based on a single-mode description of a field indicated that the slowly varying approximation can produce squeezed light (Kitagawa and Yamamoto 1986, Shirasaki et al. 1989, Shirasaki and Haus 1990, Wright 1990, Blow et al. 1991) and others treated squeezing in solitons (Drummond and Carter 1987, Shelby et al. 1990, Lai and Haus 1989), an effect that was verified experimentally (Rosenbluh and Shelby 1991). Blow et al. (1991) have shown the divergence of nonlinear phase shift, which Abram and Cohen (1994) treat through the process of renormalization. Let us review the basic features of the quantization of the electromagnetic field in a Kerr medium and discuss the relevance of the renormalization procedure to the treatment of divergences of effective medium theories. We consider a transparent, homogeneous isotropic, and dispersionless dielectric medium that exhibits a nonlinear refractive index. We examine the situation similar to Abram and Cohen (1991). The Hamiltonian for the electromagnetic field in a Kerr medium is ˆ (t) = H 3 1 ˆ2 B (z, t) + Eˆ 2 (z, t) + χ Eˆ 4 (z, t) dz, 2 4 where the integration along the direction of propagation z is denoted explicitly, but integration over the transversed directions x and y will be implicit. We use the Heaviside–Lorentz units for the electromagnetic field without = c = 1, χ is the 3 Macroscopic Theories and Their Applications nonlinear (third-order) optical susceptibility. From the perspective of the substitution of (3.25), the Hamiltonian (3.494) can be written as (Hillery and Mlodinow 1984) ˆ (t) = H 1 ˆ2 1 χ ˆ4 1 ˆ2 (z, t) − (z, t) dz, B (z, t) + D D 2 4 4 where the displacement field ˆ ˆ D(z, t) = E(z, t) + χ Eˆ 3 (z, t). The canonical equal-time commutators are (cf. (3.28)) ˆ ˆ 1 , t), D(z ˆ 2 , t)] = −icδ(z 1 − z 2 )1, [ A(z ˆ ˆ 2 , t)] = −icδ (z 1 − z 2 )1. ˆ 1 , t), D(z [ B(z (3.497) (3.498) It is convenient to adopt a slowly varying operator picture in which the zerothorder dynamics of the field governed by the linear medium Hamiltonian are already taken into account exactly, while the optical nonlinearity can be treated within the ˆ framework of perturbation theory. In such a picture, a field operator Q(z, t) evolving inside a nonlinear medium is related to the corresponding linear medium operator ˆ 0 (z, t) by Q ˆ ˆ 0 (z, t)Uˆ (t). Q(z, t) = Uˆ −1 (t) Q The unitary transformation Uˆ (t) is given by i t ˆ ˆ H1 (τ ) dτ , U (t) = T exp − −∞ ← − where T ≡ T denotes the time ordering and ˆ 1 (t) = − 1 χ H 4 4 ˆ 04 (z, t) dz D is the interaction Hamiltonian. The full nonlinear Hamiltonian (3.495) can be written as follows: ˆ 0 (t) H ; <= > ˆ 0 , Bˆ 0 ) + H ˆ 1 (t), ˆ 0 , Bˆ 0 ) = H0 ( D H (D ˆ 0 (t) is the linear medium Hamiltonian. Following the traditional modal where H approach to relation (3.499), Kitagawa and Yamamoto (1986) developed a singlemode treatment of the self-phase modulation. Clearly, such an investigation is valid Optical Nonlinearity and Renormalization only inside an optical cavity with a sparse mode structure. In this situation, the time evolution cannot be interpreted as space progression. In developing a travellingwave theory for self-phase modulation, Blow et al. (1991) obtained a solution similar to the single-mode solution. They encountered a nonintegrable singularity upon normal ordering, a thing what is termed in quantum field theory as “ultraviolet” divergence. To avoid an infinite nonlinear phase shift due to the Kerr interaction so simply described, Blow et al. (1991, 1992) introduced a finite response time for the nonlinear medium as regularization in the Heisenberg picture. Alternatively, Haus and K¨artner (1992) considered the group velocity dispersion for the propagation of pulses in the medium as a regularization. At any rate, the regularization is a subsequent sophistication of the simple model as known in classical theory. The response time and the group velocity dispersion are necessary ingredients of a complete description of the propagation of the electromagnetic excitation in fibres. But they have no influence on the effects associated with the vacuum fluctuations under study. A systematic way of dealing with the vacuum fluctuations in quantum field theory is the procedure of renormalization (Itzykson and Zuber 1980, Zinn-Justin 1989). The renormalization is known also in classical field theory. In order to obtain finite results, the procedure of renormalization redefines all the quantities that enter the Hamiltonian. The renormalization point of view is that the new Hamiltonian is the only one we have access to. It contains the observable consequences of the theory and the parameters are the ones we obtain from experiments. The bare quantities are only auxiliary parameters that should be eliminated exactly from the description (Stenholm 2000). The re-defined (renormalized) quantities are able to incorporate the (infinite) effects of the vacuum fluctuations. We will provide the definitions of broad-band electromagnetic field operators and treat the propagation of light in a linear medium. The normal ordering is considered as the simplest renormalization, e.g. in the case of the effective linear Hamiltonian 1 1 ˆ2 2 ˆ ˆ (3.503) B0 (z, t) + D0 (z, t) dz. H0 (t) = 2 ˆ 0 (t) is to be written in terms of the creation and To this end, the Hamiltonian H annihilation operators. The normal ordering allows us to subtract the vacuum-field energy up to the first order from the effective Kerr Hamiltonian (3.495). However, when this Hamiltonian is used to describe propagation disregarding the richness of the quantum field theory, the normal ordering gives rise to additional divergences that can be attributed to the participation of the vacuum fields. Upon renormalization, involving also the refractive index, the divergences are removed. ˆ t), it As the Kerr nonlinearity involves the fourth power of the derivative ∂t∂ A(z, cannot be in the general case renormalized to all orders with a finite number of corrections. Inspired by nonlinear optics, the slowly varying amplitude approximation decouples counterpropagating waves and the renormalization to all orders becomes possible. All types of optical nonlinearity χ (n) give rise to divergences which require the renormalization. In the treatment of parametric down conversion (Abram and 3 Macroscopic Theories and Their Applications Cohen 1991), the problem of divergences and the need of normalization were not formulated. In Abram and Cohen (1994), the broad-band electromagnetic field operators are defined and the propagation of light in a nonlinear medium is treated. In the absence of the optical nonlinearity, χ = 0 and the linear-medium displacement field has the usual proportionality relationship to the electric field, ˆ 0 (z, t) = Eˆ 0 (z, t). D The magnetic field and the displacement field in the linear medium obey the equaltime commutation relation ˆ ˆ 0 (z 2 , t)] = −icδ (z 1 − z 2 )1. [ Bˆ 0 (z 1 , t), D The operators Vˆ 0± (cf. (3.80), (3.81) of Abram and Cohen (1991)) reappear as the operators 1 ψˆ + (z, t) = √ Vˆ 0+ (z, t), 2 1 ψˆ − (z, t) = √ Vˆ 0− (z, t). 2 (3.506) (3.507) For ψˆ ± (z, t), the equations of motion in the Heisenberg picture may be calculated by the use of commutator (3.505) as follows: i ˆ ˆ ∂ ∂ ˆ ψ± (z, t) = H0 , ψ± (z, t) = ∓v ψˆ ± (z, t), ∂t ∂z where v = √c is the speed of light inside the dielectric exhibiting the refractive index . Their solutions are ψˆ + (z, t) = ψˆ + (z − vt, 0), ψˆ − (z, t) = ψˆ − (z + vt, 0). (3.509) (3.510) The equal-time commutators of the copropagating field operators can be obtained from the definition and commutator (3.505) as follows: ˆ [ψˆ + (z 1 , t), ψˆ + (z 2 , t)] = −ivδ (z 1 − z 2 )1, ˆ [ψˆ − (z 1 , t), ψˆ − (z 2 , t)] = ivδ (z 1 − z 2 )1. (3.511) (3.512) For the counter propogating fields, the corresponding operators commute with each other, ˆ [ψˆ + (z 1 , t), ψˆ − (z 2 , t)] = 0. Optical Nonlinearity and Renormalization Operators (3.506) and (3.507) permit us to express the linear medium Hamiltonian (3.503) as 2 ˆ 0 (t) = 1 ψˆ + (z, t) + ψˆ −2 (z, t) dz, (3.514) H 2 thus separating it into a sum of two mutually commuting partial operators, one for each direction of propagation. In the homogeneous medium it is possible to separate the electromagnetic field operators ψˆ ± (z, t) into positive- and negative-frequency parts † ψˆ ± (z, t) = φˆ ± (z, t) + φˆ ± (z, t), defined as ∞ ˆ ψ± (z , t) ˆφ± (z, t) = 1 ψˆ ± (z, t) ± i V.p. dz , 2 π −∞ z − z † where V.p. denotes the Cauchy principal value. The operators φˆ ± (z, t) and φˆ ± (z, t) can be considered as creation and annihilation operators, respectively, for a right (or left)-moving electromagnetic excitation which at time t is at point z. The equal-time commutators of φˆ ± (z, t) are somewhat complicated, v ∂ 1 † ˆ ∓ iπ δ(z 1 − z 2 ) 1, P [φˆ ± (z 1 , t), φˆ ± (z 2 , t)] = 2 ∂z 1 z1 − z2 † ˆ (3.517) [φˆ + (z 1 , t), φˆ − (z 2 , t)] = 0, where P refers to the familiar generalized function P 1z . Nevertheless, an important simplification results when only unidirectional propagation is considered. On introducing the operators ˆ (+) ˆ (−) (z, t) = [ D ˆ (+) (z, t)]† , (3.518) ˆ [φ+ (z, t) + φˆ − (z, t)], D D0 (z, t) = 0 0 2 1 Bˆ 0(+) (z, t) = √ [φˆ + (z, t) − φˆ − (z, t)], Bˆ 0(−) (z, t) = [ Bˆ 0(+) (z, t)]† (3.519) 2 and considering the relations ˆ [ψ+ (z, t) + ψˆ − (z, t)], 2 1 Bˆ 0 (z, t) = √ [ψˆ + (z, t) − ψˆ − (z, t)] 2 ˆ 0 (z, t) = D and relation (3.515), we verify that ˆ (+) (z, t) + D ˆ (−) (z, t), ˆ 0 (z, t) = D D 0 0 Bˆ 0 (z, t) = Bˆ 0(+) (z, t) + Bˆ 0(−) (z, t). 3 Macroscopic Theories and Their Applications Using the new operators, the equal-time commutation relation (3.505) can suitably be modified as ˆ ˆ (−) (z 2 , t)] = − i cδ (z 1 − z 2 )1. [ Bˆ 0(+) (z 1 , t), D 0 2 For a right-moving electromagnetic excitation, we observe that √ (+) φˆ +(+) (z 1 , t) = 2 Bˆ 0(+) (z 1 , t), 2 ˆ (+) φˆ +(+) (z 2 , t) = D (z 2 , t), 0(+) where the subscript (+) refers to k > 0. Using (3.523), we obtain that † [φˆ + (z 1 , t), φˆ + (z 2 , t)](+) = −ivδ (z 1 − z 2 )1ˆ (+) . Similarly, for a left-moving electromagnetic excitation, we note that √ (+) (z 1 , t), φˆ −(−) (z 1 , t) = − 2 Bˆ 0(−) 2 ˆ (+) D (z 2 , t), φˆ −(−) (z 2 , t) = 0(−) where the subscript (−) refers to k < 0. From this † [φˆ − (z 1 , t), φˆ − (z 2 , t)](−) = ivδ (z 1 − z 2 )1ˆ (−) . The electromagnetic creation and annihilation operators allow us to speak of the normal order, for instance, when we write Hamiltonian (3.514) in the form † † ˆ 0 (t) = (3.527) H φˆ + (z, t)φˆ + (z, t) + φˆ − (z, t)φˆ − (z, t) dz. We can define annihilation and creation wave-packet photon operators ˆF+ (¯z , t) = F(z − z¯ )φˆ + (z, t) dz and † Fˆ + (¯z , t) = † F ∗ (z − z¯ )φˆ + (z, t) dz, respectively. Here F(z) is a complex function: 1 ˜ exp(iK z) F(z), F(z) = √ vK ˜ where v K is the central (carrier) frequency and F(z) is the wave-packet envelope function peaked at z = 0 and k = 0. Optical Nonlinearity and Renormalization On the usual assumption of narrow bandwidth and F (z)F ∗ (z) dz = 1, † where F denotes the spatial derivative, we obtain that the operators Fˆ + and Fˆ + follow the boson commutation relation † ˆ (3.532) Fˆ + (¯z , t), Fˆ + (¯z , t) = 1. Let us remark that the commutation relation (3.532) is relation (A5) in Milburn et al. (1984), where the formalism of the counterdirectional coupling was derived or rather this pitfall underestimated. Now a coherent pulse can be considered whose shape is described by ρ F(z) with a scaling factor ρ. A coherent state appropriate to ρ F is defined as † |ρ F = exp ρ Fˆ + − Fˆ + |0 . It satisfies the “single-mode” eigenvalue equation Fˆ + (¯z , t)|ρ F = ρ|ρ F and, at the same time, it obeys the approximate quantum field eigenvalue equation √ φˆ + (z, t)|ρ F = v K ρ F˜ ∗ (z − z¯ ) exp[−iK (z − z¯ )]|ρ F . The approximation made in the derivation of (3.535) has kindled the interest in the Glauber factorization conditions and the theory of coherence (see also Ledinegg (1966)). When we examine right-moving pulses, we can introduce moving-frame coordinate η = z − vt ˆ ˆ 0) ≡ φ(η), φˆ + (z, t) = φ(η, and simplify relation (3.509), dropping the subscript +, whenever we use the coordinate η explicitly. Similarly, the commutation relation (3.524) can be modified. The right-moving narrow-bandwidth wavepacket operators (3.528), (3.529) can now be written as ˆ η) F( ¯ = ˆ F(η − η) ¯ φ(η) dη, 3 Macroscopic Theories and Their Applications where η¯ = z¯ − vt and ¯ = Fˆ † (η) F ∗ (η − η) ¯ φˆ † (η) dη. ˆ η, ˆ η, In the moving-frame representation F( ¯ t) = F( ¯ 0). It is feasible to find a connection with the approaches leading to narrow-band ˆ contained in Shirasaki and Haus (1990), Drummond (1990), and field operators (a) Blow et al. (1990) and used in papers by Blow et al. (1991) and Shirasaki and Haus (1990). An important feature of these operators is that their commutator is a δ function † ˆ [aˆ k0 (z 1 ), aˆ k0 (z 2 )] = vk0 δ(z 1 − z 2 )1. Under the same narrow-bandwidth condition, the commutator of the Abram–Cohen operators, which is a delta function derivative, can be approximated by δ (z 1 − z 2 ) ≈ −ik0 δ(z 1 − z 2 ). Abram and Cohen (1994) have analysed the approximations that enter the quantum treatment of propagation in a Kerr medium and outline the corresponding renormalization procedure. The slowly varying amplitude approximation according to Abram and Cohen (1991) is used in Abram and Cohen (1994). For a Kerr medium, ˆ 1 (t) is expressed as the interaction Hamiltonian H ˆ 1 (t) = − χ H 4 4 ˆ 04 (z, t) dz. D According to (3.520), the interaction Hamiltonian can be written as ˆ 1 (t) = H χ 16 2 ψˆ + (z − vt) + ψˆ − (z + vt) The exact Hamiltonian (3.495) may be written up to the first order in χ as ˆ 1S+ (t) + H ˆ 1S− (t) + O(χ 2 ), ˆ (t) = H ˆ 0 (t) + H H where ˆ 1S± (t) = − χ H 16 2 ψˆ ±4 (z ∓ vt) dz ˆ 0 (t). are the parts of the Hamiltonian (3.543) that commute with H In terms of application of (3.537), no coordinate transformation has been explained. In this case, the transformation leaves the time coordinate unchanged. Optical Nonlinearity and Renormalization In view of approximation (3.544), the equation of motion of a right-moving field operator can be written in the interaction picture as follows: i ˆ ∂ ˆ ˆ t) . φ(η, t) = H1S+ (t), φ(η, ∂t This first-order approximation to the equation of motion can be solved formally using the corresponding time-evolution operator (cf. (3.499)) i t ˆ ˆ H1S+ (τ ) dτ . U S+ (t) = T exp − −∞ The classical slowly varying approximation has its quantum counterpart on a double assumption: (1) the initial state of the field is a narrow-bandwidth state and (2) the nonlinearity is weak enough so that the full nonlinear Hamiltonian (3.543) may be ˆ 1S , approximated by its first-order stationary component H ˆ 1S+ + H ˆ 1S− . ˆ 1S = H H ˆ 1S+ will be referred to as the slowly varying amplitude Hamiltonian. Therefore, H Now we turn to the renormalization. In the framework of the rotating-wave approximation, we obtain that ˆ 1S+ = − χ H 16 2 6 φˆ † φˆ † φˆ φˆ S dz, where 6 φˆ † φˆ † φˆ φˆ S = φˆ † φˆ † φˆ φˆ + φˆ † φˆ φˆ † φˆ + φˆ † φˆ φˆ φˆ † + φˆ φˆ † φˆ † φˆ + φˆ φˆ † φˆ φˆ † + φˆ φˆ φˆ † φˆ † . (3.550) Upon the normal ordering, the perturbative Hamiltonian (3.544) for the electromagnetic field in a Kerr medium can be written as ˆ t) dz − κ ˆ t)φ(z, ˆ t) dz φˆ † (z, t)φˆ † (z, t)φ(z, φˆ † (z, t)φ(z, 2 ˆ t) dz, (3.551) − κ Z φˆ † (z, t)φ(z, ˆ (t) = H where κ = 1994) 3χ . 4 2 Z is a function we give only asymptotically (cf. Abram and Cohen v 2 Λ, 2π 3 Macroscopic Theories and Their Applications where Λ is a high-frequency cutoff. Whereas the first two terms in equation (3.551) are familiar, the third term, which is divergent, arises in the normal ordering procedure. For Λ fixed, this last term vanishes if → 0. In the renormalization procedure, a formal series (in ) of “counterterms” is added to the Hamiltonian in order to remove the divergences that arise upon normally ordering the results of calculation (Itzykson and Zuber 1980). The Hamiltonian itself exemplifies that it is not sufficient for removing divergences, but at the same time renormalized parameters and renormalized field operators are introduced. ˆ R (t) may be defined by introducing In particular, a renormalized Kerr Hamiltonian H a counterterm of order as follows: ˆ (t) + 2κ Z ˆ R (t) = H H ˆ t) dz. φˆ † (z, t)φ(z, The third term in equation (3.551) exchanges the sign and for Λ → ∞ it is an infinite change in the inverse of the refractive index. The renormalized field operators √ ˆ t), φˆ R (z, t) = 1 + κ Z φ (z, √ † φˆ R (z, t) = 1 + κ Z φˆ † (z, t) (3.554) (3.555) are further quantities which, or at least whose Hermitian parts, etc. would relate to an experiment. Such a relationship is no more required from the bare quantities. At the same time, a renormalized refractive index is defined nR = n . 1 + κ Z The renormalized Kerr Hamiltonian can be written in terms of the renormalized field operators as ˆ 0R (t) + H ˆ 1S+,R (t), ˆ R (t) = H H ˆ 0R (t) = H ˆ 1S+,R (t) = H † φˆ R (z, t)φˆ R (z, t) dz, ( j) ˆ j κ j+1 H 1S+,R (t), (3.558) (3.559) where j+1 ˆ ( j) (t) = (−1) ( j + 1)Z j H 1S+,R 2 † † φˆ R (z, t)φˆ R (z, t)φˆ R (z, t)φˆ R (z, t) dz; Optical Nonlinearity and Renormalization ˆ ( j) (t) is the jth quantum ˆ (0) (t) is the “usual” Kerr term and j κ j+1 H here κ H 1S+,R 1S+,R correction. Abram and Cohen (1994) have calculated the quantum noise properties of a coherent pulse undergoing self-phase modulation in the course of its propagation by eliminating the vacuum divergences through the renormalization procedure. The one-point averages were first determined. The detection of a light pulse by a balanced homodyne detector can be expressed in a moving frame through the measured quantum operator + ˆ θ (η) = exp(iθ ) K LO FLO (η¯ − η)φ(η) ˆ + H.c., (3.561) M where FLO (η) is the coherent amplitude of the local oscillator (LO) pulse peaked at η = η¯ and θ is the phase difference between the local oscillator and signal pulses. In homodyne detection, the central frequency of the local oscillator is the same as that of the incident pulse, K LO = K . This set-up measures simultaneously the expectaˆ π (η). For simplicity, we have neglected ˆ 0 (η) and M tion values of the operators M 2 ˆ z) = φ(η, ˆ the Kerr medium here, but it is included when we write φ(η; t = vz ) ˆ instead of φ(η). The instantaneous intensity of the signal pulse peaked at η = 0 is introduced as ˜ I (η) = v Kρ 2 F˜ ∗ (η) F(η). ˆ z) has the expectation value in the coherent It can be obtained that the operator φ(η; state ˆ z)|ρ F F ∗ (η; z) = ρ F|φ(η; = v Kρ F ∗ (η) exp[−iΘ(η; z)], Θ(η; z) = κv K z I (η) is the nonlinear phase shift produced by self-phase modulation. The nonlinear phase Θ has exactly the same value as in classical nonlinear optics. The fact that equation (3.563) obtained by invoking renormalization corresponds directly to what is observed experimentally in a propagative configuration underlines the validity of this approach. Two-point correlation functions were examined, in order to obtain the quantum noise spectrum S0 (k) = ˆ θ (η1 )Δ M ˆ θ (η2 )|ρ F dη2 dη1 , exp[ik(η1 − η2 )] ρ F|Δ M where ˆ θ (η) − ρ F| M ˆ θ (η)|ρ F ˆ θ (η) = M ΔM 3 Macroscopic Theories and Their Applications and k is the spatial frequency at which the quantum noise is measured. In experiment, such a noise is detected at frequencies several orders of magnitude below the carrier optical frequency and its spectrum is considered to be flat throughout the typical bandwidth. The low-frequency noise spectrum Sθ (0) can be decomposed into four terms Sθ (k) ≈ Sθ (0) = S1 + S2 + exp(−2iθ)S3 + exp(2iθ )S4 , ILO (η¯ − η)Θ2 (η; z) dη, S1 = ILO (η¯ − η)[1 + Θ2 (η; z)] dη, S2 = ILO (η¯ − η)[iΘ(η; z)][1 + iΘ(η; z)] exp[2iΘ(η; z)] dη, S3 = ILO (η¯ − η)[−iΘ(η; z)][1 − iΘ(η; z)] exp[−2iΘ(η; z)] dη, S4 = (3.568) (3.569) (3.570) (3.571) where ILO (η¯ − η) is the local oscillator instantaneous intensity of the local oscillator pulse. This result is similar to that obtained by linearizing the self-phase modula† tion exponential operator exp(iγ aˆ k0 aˆ k0 ) around the mean field (Shirasaki and Haus 1990). It is appropriate to give a physical interpretation of the above results and to discuss the case of squeezing that can be observed in the propagation of a coherent pulse. For narrow-bandwidth signals, the phase properties of quantum noise in equation (3.567) can be visualized by examining the field fluctuations. First, the quantum characteristic function is defined as ˆ ˆ π C(u, v) = α| exp i [u M 2 (η) + v M0 (η)] |α , where |α is the continuous-wave coherent state. Using the linked cluster theorem, the characteristic function (3.572) can be written in terms of connected averages. Two lowest order connected averages are feasible and describe the moments of the Wigner distribution W (q, p) = 1 4π 2 C(u, v) exp[−i( pu + qv)] du dv. According to equation (3.563), the expectation value of the Wigner distribution is q 0 + i p0 = I eiθ , Optical Nonlinearity and Renormalization while the principal squeeze variances of the quantum noise are 1 2(BS ∓ |C|) = √ 1 + Θ2 ± Θ + = 1 + Θ2 ∓ Θ, (3.575) (3.576) where Θ ≡ Θ(η; z) and we have used the characteristics of quantum noise (Peˇrinov´a et al. 1991) BS = 1+ 1 1 + Θ2 , |C| = Θ. 2 2 3π . 2 Moreover, arg C = 2Θ + arctan(Θ) − In the Abram–Cohen theory, for the case of a coherent beam of central frequency 2×1015 Hz, bandwidth 100 GHz, and intensity 1 W propagating in a silica fibre, the Gaussian noise is a good approximation up to nonlinear phase shifts of the order 103 rad. When the phase of the local oscillator is constant along the pulse profile, particularly when the principal quadrature is measured at the peak of the pulse, the variance will not be the same everywhere, the quadrature will not always be the principal quadrature due to the chirp. To circumvent this problem, the use of a matched local oscillator has been proposed such that its phase Θ(η) varies in a way that matches the signal chirp. Besides the Kerr effect, the Sagnac interferometer can be used for this purpose (Shirasaki and Haus 1990, Blow et al. 1992, Bergman and Haus 1991). Alternatively, a local oscillator pulse that is much shorter than the signal pulse can sample only the central portion of the signal in order to measure the appropriate squeezing. Bespalov et al. (2002) have investigated the propagation of light in the (1 + 1)dimensional approximation. They have paid attention to the two series expansions of the index of refraction of an isotropic optical medium in Born and Wolf (1968). On these expansions they have based two wave equations, both with and without the second space derivative term. They have presented a method to derive the nonlinear wave equations suitable for describing dynamics of extremely short pulses. Although this analysis is completely classical, it does not exclude that a quantization of the field and of the equations will be necessary in the near future. Restriction to the transverse components and, finally, to the scalar wave equation is common. Lu et al. (2003) have studied the propagation of ultrashort pulsed beam beyond the paraxial approximation in free space. The nonparaxial corrections to an arbitrary paraxial solution are given in a series form. A comparison with rigorous nonparaxial results obtained by numerical method is carried out. Spatial and temporal distributions are considered. 3 Macroscopic Theories and Their Applications In this book the “fully” relativistic quantum electrodynamics is not treated. Nevertheless, its importance for quantum optics is going to be appreciated in the nearest future. Shukla et al. (2004) have considered the nonlinear propagation of randomly distributed intense short photon pulses in a photon gas. Fragmentation of incoherent photon pulses in astrophysical contexts and in forthcoming experiments using very intense short laser pulses has been predicted. The renormalization and the Bogoliubov renormalization group are different concepts (Shirkov and Kovalev 2001). Kovalev et al. (2000) and Tatarinova and Garcia (2007) have expounded the renormalization-group approach to the problem of lightbeam self-focusing. Tatarinova and Garcia (2007) set the problem in the framework of the classical nonlinear optics and so the renormalization is not needed. The propagation of a laser beam of intensity I in a nonlinear medium with a refractive index n 0 + n(I ), where n 0 is the linear refractive index, n(I ) is such that n(0) = 0, arbitrary in other respects, is studied. The case of nonlinear self-focusing accompanied by multiphoton ionization has been explicitly analysed. The procedure of analytical solution begins with an approximate transformation of the nonlinear Schr¨odinger equation onto eikonal equations. Irrespective of these and some other approximations, the appropriate, easy, calculation provides results which are in good agreement with numerical simulations. 3.5 Quasimode Theory Glauber and Lewenstein (1991) have developed quantum optics of inhomogeneous media with linear susceptibilities. The topics treated have included the normal-mode expansion and the plane-wave expansion. The authors have shown that plane-wave photons can be related to the normal-mode ones within the framework of scattering theory. They have used the quantization schemes discussed to determine the fluctuation properties of various field components. They have considered excited atoms and changes in the spontaneous emission rates for both electric and magnetic dipole transitions of the atoms within or near dielectric media. They have provided a quantum description of the transition radiation emitted by a charged particle in passing from one dielectric medium to another. Dalton et al. (1996) have carried out canonical quantization of the electromagnetic field and radiative atoms in passive, lossless, dispersionless, and linear dielectric media. The quantum Hamiltonian has been derived in a generalized multipole form. Dalton et al. (1999b) have presented a macroscopic canonical quantization of the electromagnetic field and radiating atom system in dielectric media based on expanding the vector potential in terms of quasimode functions. The quasimode functions approximate the true mode functions of a classical optics device when they are obtained on the assumption of an ideal electric permittivity function and the permittivity function describing the device does not deviate much from the ideal one. Plane waves in Glauber and Lewenstein (1991) are such quasimodes. In the Quasimode Theory coupled-mode theory (see, e.g. Section 6.2) the “ideal” waveguide modes are also such quasimodes. Here we present part of the theory (Dalton et al. 1999b), which will be completed below. It is assumed that a classical linear optics device is described with the spatially dependent electric permittivity (R) (and the magnetic permeability μ(R)). The generalized Coulomb gauge condition for the vector potential A(R, t) ∇ · [(R)A(R, t)] = 0 is used. In a generalization of Helmholtz’s theorem (Dalton and Babiker 1997), a vector field F(R, t) can be decomposed uniquely in the generalized transverse and () longitudinal components F() ⊥ (R, t), F (R, t) in the form () F(R, t) = F() ⊥ (R, t) + F (R, t), () ∇ · [(R)F() ⊥ (R, t)] = 0, ∇ × F (R, t) = 0. The macroscopic Lagrangian is given by the relation L (t) = Lc (R, t) d3 R, where the Lagrangian density Lc (R, t) is given by the relation Lc (R, t) 1 1 ∂A(R, t) 2 = (R) − [∇ × A(R, t)]2 . 2 ∂t 2μ(R) The conjugate momentum field Π(R, t) is obtained from the Lagrangian density Lc (R, t) as Π(R, t) = (R) ∂A(R, t) . ∂t The Hamiltonian is H (t) = [Π(R, t)]2 [∇ × A(R, t)]2 + 2(R) 2μ(R) d3 R. We have the electric displacement field D(R, t), D(R, t) = −Π(R, t). The true mode functions satisfy the generalized Helmholtz equation ∇× 1 [∇ × Ak (R)] = ωk2 (R)Ak (R), μ(R) 3 Macroscopic Theories and Their Applications where ωk are real and positive angular frequencies and satisfy the generalized Coulomb gauge condition ∇ · [(R)Ak (R)] = 0. The true mode functions satisfy the orthogonality and normalization conditions respecting (R) as a weight function, (R)A∗k (R) · Al (R) d3 R = δkl . Generalized coordinates qk (t) and generalized momenta pk (t) can be introduced by the relations (3.590) qk (t) = (R)A∗k (R) · A(R, t) d3 R, (3.591) pk (t) = A∗k (R) · Π(R, t) d3 R. These variables are complex. Expansions of the vector potential and conjugate momentum field in terms of the true modes Ak (R) are A(R, t) = qk (t)Ak (R), pk (t)(R)Ak (R). Π(R, t) = It is assumed that the exact electric permittivity and magnetic permeability functions do not deviate much from artificially chosen functions ˜ (R), μ(R). ˜ It is assumed that these functions produce quasimode functions Uα (R), idealized versions of the true mode functions Ak (R). Let λα denote the angular frequency of the quasimode. One has equations ∇× 1 [∇ × Uα (R)] = λ2α ˜ (R)Uα (R), μ(R) ˜ ∇ · [˜ (R)Uα (R)] = 0, ˜ (R)U∗α (R) · Uβ (R) d3 R = δαβ , (3.594) (3.595) (3.596) which are the generalized Helmholtz equation, the gauge conditions, and orthonormality conditions, respectively. As the vector potential A(R, t) satisfies the generalized Coulomb gauge condition A(R, t) fulfils the generalized Coulomb gauge condition (3.579), the field (R) ˜ (R) ∇ · [˜ (R)F(R, t)] = 0, Quasimode Theory where F(R, t), any field, can be chosen in the form based on the gauge transformation (R) A(R, t). ˜ (R) Another choice is ˜ A(R, t) = A(R, t) − ∇ψ(R, t), where ψ(R, t), an arbitrary function of R and t, must be specified (Glauber and Lewenstein 1991), and the scalar potential ˙ ˜ Φ(R, t) = ψ(R, t) need not be considered. Therefore, the expansion (R) Q α (t)K αβ Uβ (R) A(R, t) = ˜ (R) α,β C exists, where the complicated form of the coefficients α Q α (t)K αβ may and may not involve ˜ t) d3 R (3.601) Q α (t) = ˜ (R)U∗α (R) · A(R, and matrix elements K αβ of a suitable matrix K. The vector potential is given as A(R, t) = Q α (t)K αβ ˜ (R) Uβ (R). (R) Using expansion (3.602), one can write the Lagrangian (3.582), (3.583) as L (t) = 1 ˙∗ ˙ β (t) − 1 Q α (t)(W−1 )αβ Q Q ∗ (t)Vαβ Q β (t), 2 α,β 2 α,β α W = (KT )−1 M−1 (K∗ )−1 , V = K HK , T ˜ (R) ∗ ˜ (R) Uα (R) · Uβ (R) d3 R, (R) (R) 1 ˜ (R) ˜ (R) ∗ = ∇× U (R) · ∇ × Uβ (R) d3 R. μ(R) (R) α (R) Mαβ = Hαβ (3.606) (3.607) Making a specific choice for K (Dalton et al. 1999b) K = (M∗ )−1 , 3 Macroscopic Theories and Their Applications one has W = M, −1 (3.609) −1 V = M HM . The generalized momentum coordinates Pα (t) for the electromagnetic field are Pα (t) = ˙ β (t). (M−1 )αβ Q The Hamiltonian is given by the relation H (t) = 1 ∗ 1 ∗ Pα (t)Wαβ Pβ (t) + Q (t)Vαβ Q β (t). 2 α,β 2 α,β α As the same Lagrangian is used and definition (3.584) does not depend on the gauge condition (3.588), the conjugate momentum field Π(R, t) is still expressed by equation (3.584). As from (3.602) it follows that A(R, t) = Q α (t)(M−1 )βα ˜ (R) Uβ (R), (R) respecting (3.584) and exchanging α ↔ β, we obtain that Π(R, t) = Pα (t)˜ (R)Uα (R). The classical generalized coordinates Q α (t) and generalized momenta Pα (t) are replaced by quantum operators according to the prescriptions ˆ α (t), Q ∗α (t) → Q ˆ †α (t), Q α (t) → Q Pα (t) → Pˆ α (t), Pα∗ (t) → Pˆ α† (t). (3.615) (3.616) The nonzero equal-time commutators are ˆ †α (t), Pˆ β (t)]. ˆ α (t), Pˆ β† (t)] = iδαβ 1ˆ = [ Q [Q The vector potential A(R, t) and conjugate momentum field Π(R, t) now become ˆ ˆ field operators A(R, t), Π(R, t). Let us imagine the replacements (3.615) ((3.616)) ˆ ˆ in the expression (3.602) ((3.614)) when the quantum operator A(R, t) (Π(R, t)) is introduced. Similarly, the Hamiltonian given through equation (3.612) now becomes ˆ (t). a quantum Hamiltonian H Quasimode Theory ˆ (t) = H ˆ Q (t) + Vˆ Q−Q (t), where It has the form H ˆ α (t) , ˆ Q (t) = 1 ˆ †α (t)Vαα Q H Pˆ α† (t)Wαα Pˆ α (t) + Q 2 α 1 ˆ † ˆ †α (t)Vαβ Q ˆ β (t) . Pα (t)Wαβ Pˆ β (t) + Q Vˆ Q−Q (t) = 2 α,β (3.618) (3.619) Dalton et al. (1999b) have defined an effective quasimode angular frequency μα as follows: μα = Wαα Vαα . It may differ from λα in (3.594). As usual with quantum harmonic oscillators, annihilation and creation operators for each of the quasimodes are introduced as usual linear combinations η 1 ˆ α ˆ α (t) + i ˆ α (t) = Pα (t), (3.621) Q A 2 2ηα η 1 ˆ† α ˆ† ˆ †α (t) = (3.622) Q α (t) − i A P (t), 2 2ηα α where ηα = Vαα . Wαα ˆ Definiˆ †β (t)] = δαβ 1. ˆ α (t), A The nonzero equal-time commutators are standard, [ A ˆ †−α (t), ˆ −α (t), A tions (3.621) and (3.622) can be completed with the equations for A where −α denotes that the operators are associated with the quasimode function ˆ ∗α (R). U † The relationship between the annihilation, creation operators aˆ k , aˆ k for the true modes (see Dalton et al. (1996)) and the quantities just introduced can be obtained from the expansions of two sets of functions (R)Ak and ˜ (R)Uα (R) in terms of the other. It is shown to involve a Bogoliubov transformation (Dalton et al. (1999c). From Equations (3.621) and (3.622) one expresses the generalized coordinates ˆ α (t), Pˆ α (t) as follows: and momenta Q ˆ α (t) = Q ˆ ˆ †−α (t) , Aα (t) + A 2ηα 3 Macroscopic Theories and Their Applications 1 Pˆ α (t) = i ηα ˆ ˆ †−α (t) . Aα (t) − A 2 On substituting (3.624) and (3.625) into the Hamiltonians (3.618) and (3.619), the Hamiltonians become ˆ α (t) + 1 1ˆ μα , ˆ †α (t) A ˆ Q (t) = (3.626) A H 2 α non−RWA RWA (t) + Vˆ Q−Q (t), Vˆ Q−Q (t) = Vˆ Q−Q RWA (t) means the rotating-wave contribution, where Vˆ Q−Q 0 1 √ V αβ RWA ˆ †α (t) A ˆ β (t), Vˆ Q−Q A (t) = ηα ηβ Mαβ + √ 2 α,β ηα ηβ non−RWA (t) stands for the nonrotating wave correction term, and Vˆ Q−Q 0 1 Vα,−β √ non−RWA ˆ ˆ †α (t) A ˆ †β (t) + H.c. VQ−Q A (t) = − ηα ηβ Mα,−β + √ 4 α,β ηα ηβ ˆ ˆ Similarly, the field operators A(R, t) and Π(R, t) should be expressed in terms of annihilation and creation operators. One finds that ˜ (R) ∗ ˆ† ˆ ˆ α (t)Uβ (R) + K αβ Aα (t)U∗β (R) , (3.630) A (R, t) = K αβ A 2ηα (R) α,β 1 ηα ˆ †α (t)U∗α (R) . ˆ α (t)Uα (R) − A ˆ (3.631) ˜ (R) A Π(R, t) = i 2 α ˆ (t) = The quantum Hamiltonian in the rotating wave approximation is H RWA RWA ˆ ˆ HQ (t) + VQ−Q (t). Dalton et al. (1999b) have considered also an approximate form of this Hamiltonian. A simplification is related to the approximation μα ≈ λ α μα ≈ λα + vαα , where we have added a term. Further, RWA Vˆ Q−Q (t) ≈ α,β α=β ˆ †α (t) A ˆ β (t), vαβ A Quasimode Theory where vαβ (M1 )αβ (H1 )αβ 0 1 1 + (H1 )αβ λα λβ (M1 )αβ + + = , 2 λα λβ ˜ (R) = ˜ (R) − 1 U∗α (R) · Uβ (R) d3 R, (R) 1 ˜ (R) ∗ = − 1 Uα (R) ∇× μ0 (R) ˜ (R) · ∇× − 1 Uβ (R) d3 R. (R) (3.635) (3.636) 3.5.1 Relation to Quantum Scattering Theory The phenomenon of scattering occurs in various situations in optics. In the classical particle mechanics, the scattering theory is a natural continuation and generalization of the analysis of collisions. It must be and has been reconstructed for wave functions in quantum mechanics. So the formulation in the Schr¨odinger picture is appropriate. The methods developed apply also to optical and acoustic scattering in classical physics (Reed and Simon 1979). In quantum field theory, the scattering theory resembles its simplified form for quantum mechanics, but with wave functions replaced by field operators. Accordingly, it is formulated in the Heisenberg picture. In spite of simplifications this graduation is present in quantum optics. First we will review basics of the Schr¨odinger picture approach to the scattering theory and then outline the Heisenberg picture approach to this theory (Dalton et al. 1999a). The single-channel scattering theory may be adequate for many applications in ˆ (t, t ) is written as a sum of an quantum optics. In this theory the Hamiltonian H ˆ unperturbed Hamiltonian H0 (t, t ) and an interaction term Vˆ (t, t ), t = 0, t. We ˆ (t, t) in the Heisenberg picture and H ˆ (t, 0) in the should denote more exactly H Schr¨odinger picture. The first variable denotes the explicit time dependence of the ˆ (t, t ) are ˆ 0 (t, t ), Vˆ (t, t ), and H Hamiltonian. For simplicity it is assumed that H ˆ ˆ ˆ t-independent and the notation H0 (t ), V (t ), and H (t ), t = 0, t, is used. The state vector |ψ(t) evolves as |ψ(t) = Uˆ (t)|ψ(0) , where the evolution operator Uˆ (t) given by i ˆ ˆ U (t) = exp − H (0)t is unitary. 3 Macroscopic Theories and Their Applications When a scattering experiment is described, the state vector |ψ(t) should approach freely evolving state vectors as t → ±∞, which are based on the so-called input states |ψin and output states |ψout . So with the unitary free evolution operator Uˆ 0 (t) given by i ˆ ˆ (3.640) U0 (t) = exp − H0 (0)t , we have the so-called asymptotic conditions (Taylor 1972, Newton 1966) Uˆ 0 (t)|ψin −∞, − |ψ(t) → 0 as t → +∞. Uˆ 0 (t)|ψout The conditions (Taylor 1972, Newton 1966) that are sufficient for the asymptotic conditions to hold are that ( · · · are the norms of the state vectors) Vˆ Uˆ 0 (τ )|ψin dτ < ∞ for a dense set of |ψin , Vˆ Uˆ 0 (τ )|ψout dτ < ∞ for a dense set of |ψout . −∞ ∞ 0 ˆ ± exist which If the asymptotic conditions hold, then the Møller wave operators Ω map |ψin and |ψout onto the state vector at t = 0: ˆ + |ψin |ψ(0) = Ω ˆ − |ψout and are defined through the relation ˆ + = lim [Uˆ † (t)Uˆ 0 (t)] Ω t→−∞ i ˆ i ˆ (0)t H (0)t exp − H = lim exp 0 t→−∞ and ˆ − = lim [Uˆ † (t)Uˆ 0 (t)] Ω t→+∞ i ˆ i ˆ H (0)t exp − H0 (0)t . = lim exp t→+∞ ˆ − ) the relation ˆ =Ω ˆ + or Ω The Møller wave operators are isometric. They satisfy (Ω ˆ ˆ = 1, ˆ †Ω Ω ˆΩ ˆ † = 1ˆ (see below). So they may not be unitary. but may not satisfy Ω Quasimode Theory The scattering operator Sˆ maps the input vector |ψin onto the output vector |ψout , ˆ in , |ψout = S|ψ and from equations (3.644) and (3.647) it is obvious that it involves two Møller operators ˆ †− Ω ˆ +. Sˆ = Ω It could be easily verified that the scattering operator Sˆ is unitary, ˆ Sˆ † Sˆ = Sˆ Sˆ † = 1. |ψin = Sˆ † |ψout . Mapping (3.648) can be inverted, The Møller wave operators satisfy the important intertwining relation ˆ ±H ˆ± =Ω ˆ 0 (0). ˆ (0)Ω H ˆ 0 (0) commute, From this relation and its Hermitian conjugate it follows that Sˆ and H ˆ 0 (0) S. ˆ ˆ 0 (0) = H Sˆ H In other words, the unperturbed Hamiltonian is invariant under the unitary transformation Sˆ or the unperturbed energy is conserved in a scattering process. The Møller wave operators are not unitary if there exist bound energy eigenstates ˆ . It can be derived that for the Hamiltonian H ˆΩ ˆ † = 1ˆ − Λ, ˆ Ω ˆ is called unitary deficiency and is a sum of all the projectors onto the bound states Λ ˆ (0). of H Some of the previous results simplify in the interaction picture. In this picture state vector is defined through the equation † |ψI (t) = Uˆ 0 (t)|ψ(t) . In physical systems the limits |ψI (∓∞) may exist. On this assumption the simplification occurs. Namely equations (3.641) and (3.648) become |ψI (∓∞) = |ψin , |ψout 3 Macroscopic Theories and Their Applications and ˆ I (−∞) . |ψI (+∞) = S|ψ Schr¨odinger picture operators are transformed to interaction-picture ones via the equation ˆ Uˆ 0 (t), ˆ I (t) = Uˆ † (t) A A 0 ˆ is any Schr¨odinger operator. A possible time dependence of the operator A ˆ where A has not been designated. If this operator is time independent it is found that i ˆ I (t) dA ˆ 0I (t)], ˆ I (t), H = [A dt ˆ 0 (0)Uˆ 0 (t). Especially, the Møller wave operators are associˆ 0I (t) = Uˆ † (t) H where H 0 ˆ I± (t), ated with the interaction-picture operators Ω i ˆ i ˆ † I ˆ ˆ ˆ ±, ˆ ˆ Ω± (t) = U0 (t)Ω± U0 (t) = exp (3.660) H0 (0)t exp − H (0)t Ω where we have used the intertwining relation. Taking the limits as t → ±∞ one sees that (Taylor 1972, Newton 1966) ˆ ˆ Ω ˆ I+ (−∞) = 1, ˆ I+ (+∞) = S, Ω I I ˆ Ω ˆ − (+∞) = 1, ˆ − (−∞) = Sˆ † . Ω ˆ I+ (t) equation (3.659) becomes In the case of Ω i ˆ I+ (t) dΩ ˆ I+ (t), = Vˆ I (t)Ω dt ˆ I (t) − where we have used both the intertwining relation and the equation Vˆ I (t) = H ˆ 0I (t). The formal solution of the problem consisting in this equation with the H boundary conditions (3.661) provides one with a Dyson expression (Taylor 1972, Newton 1966) for the scattering operator, i ∞ ˆ (3.663) VI (t1 ) dt1 , Sˆ = T exp − −∞ where T means the time ordering. In the Schr¨odinger picture, scattering processes are often spoken of in terms of ˆ 0 (0). So an initial transitions between initial and final states that are eigenstates of H state |i and a final state | f have the properties ˆ 0 (0)| f = ω f | f , ˆ 0 (0)|i = ωi |i , H H Quasimode Theory where ωi and ω f are frequencies. As conservation of the unperturbed energy holds, ˆ is zero unless ω f = ωi . This fact is expressed using the the matrix element f | S|i ˆ transition operator T (z) (Taylor 1972, Newton 1966), where z is a complex energy variable. So ˆ = f |i − 2πi δ(ω f − ωi ) f |Tˆ (ωi + i0)|i . f | S|i The Tˆ (z) operator is defined through the relation ˆ Vˆ (0), Tˆ (z) = Vˆ (0) + Vˆ (0)G(z) ˆ ˆ (0)]−1 G(z) = [z 1ˆ − H is the resolvent operator. It obeys the Lippmann–Schwinger integral equation ˆ 0 (z)Tˆ (z), Tˆ (z) = Vˆ (0) + Vˆ (0)G ˆ 0 (0)]−1 ˆ 0 (z) = [z 1ˆ − H G ˆ 0 (0). is the resolvent operator associated with H The Møller wave operators can also be related to the Tˆ (z) operator. So 0 1 ∞ ˆ 0 (0) H ˆ + = 1ˆ + ˆ 0 (ω + i0)Tˆ (ω + i0)δ ω1ˆ − G dω, Ω −∞ 0 1 ∞ ˆ 0 (0) H ˆ − = 1ˆ + ˆ 0 (ω − i0)Tˆ (ω − i0)δ ω1ˆ − Ω dω. G −∞ Glauber’s theory of photodetection assumes multitime quantum correlation functions of the form (Glauber 1965) G(t1 , . . . , tn ) = ψ(0)|bˆ 1 (t1 )bˆ 2 (t2 ) . . . bˆ n (tn )|ψ(0) . We will define the input and output operators through the relations (Dalton et al. 1999a) ˆ †+ bˆ k (t)Ω ˆ +, bˆ kin (t) = Ω ˆ †− bˆ k (t)Ω ˆ −. bˆ kout (t) = Ω Therefore, we read relations (8) and (10) in Dalton et al. (1999a) as follows: G(t1 , . . . , tn ) = ψin |bˆ 1in (t1 )bˆ 2in (t2 ) . . . bˆ nin (tn )|ψin , G(t1 , . . . , tn ) = ψout |bˆ 1out (t1 )bˆ 2out (t2 ) . . . bˆ nout (tn )|ψout . 3 Macroscopic Theories and Their Applications Further from the relations † ˆ + bˆ kin (t)Ω ˆ +, ˆ =Ω ˆ bˆ k (t)(1ˆ − Λ) (1ˆ − Λ) ˆ †− bˆ k (t)Ω ˆ− =Ω ˆ †− (1ˆ − Λ) ˆ −, ˆ bˆ k (t)(1ˆ − Λ) ˆ Ω bˆ kout (t) = Ω it holds on substitution that ˆ + bˆ kin (t)Ω ˆ −. ˆ †− Ω ˆ †+ Ω bˆ kout (t) = Ω Using definition (3.649), we may write relation (3.675) in the form bˆ kout (t) = Sˆ bˆ kin (t) Sˆ † . We may summarize that † bˆ kin (t) − Uˆ 0 (t)bˆ k (0)Uˆ 0 (t) → 0ˆ as t → −∞, † bˆ kout (t) − Uˆ 0 (t)bˆ k (0)Uˆ 0 (t) → 0ˆ as t → +∞. Let us recall that operators in the interaction picture are introduced as † bˆ kI (t) = Uˆ 0 (t)bˆ k (0)Uˆ 0 (t). The input and output operators are related to the interaction-picture operators for long times, bˆ kin (t) − bˆ kI (t) → 0ˆ as t → −∞, bˆ kin (t) − Sˆ † bˆ kI (t) Sˆ → 0ˆ as t → +∞, bˆ kout (t) − Sˆ bˆ kI (t) Sˆ † → 0ˆ as t → −∞, bˆ kout (t) − bˆ kI (t) → 0ˆ as t → +∞. (3.679) Dalton et al. (1999b) have outlined quasimode theory of the lossless beam splitter and Dalton et al. (1999d) have continued and extended it in relation to scattering theory. They have found references to the true mode and quasimode theories of the beam splitter (see there). They assume √ that the device consists of two trihedral pieces of glass of refractive index n (n > 2) separated by a thin air gap of width d. The coordinate axes are chosen such that refractive index equals n for |z| > d2 and it equals unity for |z| ≤ d2 . ˜ = μ0 everywhere. In this To obtain quasimodes they choose ˜ (R) = n 2 0 , μ(R) case the quasimode functions are plane waves. For the sake of quantization, they assume that the field is contained in a box of volume V = L 3 and the quasimode functions Uα (R) = + 1 n2 eα exp(ikα · R), Quasimode Theory where eα are polarization vectors and kα are wave vectors (eα · kα = 0). The angular frequencies are λα = c kα . n For the treatment to be simple, the authors may consider only two directions of propagation: one along √12 (e y − ez ) and the other along √12 (e y + ez ). In general, the beam splitter is described by the Hamiltonian ˆ (t) = H ˆ Q (t) + Vˆ Q−Q (t), H ˆ Q (t) and the coupling Hamiltonian Vˆ Q−Q (t) where the unperturbed Hamiltonian H in the rotating-wave approximation are given by the relations ˆ Q (t) = H Vˆ Q−Q (t) = ˆ †α (t) A ˆ α (t), μα A ˆ β (t). ˆ †α (t) A vαβ A α,β α=β The wave vectors kα for the quasimodes are kα j = να j 2π , j = x, y, z, L where ναx , ναy , ναz are integers. The interesting directions are represented by ναx = 0, ναy > 0, ναz = ∓ναy . On calculating the matrix elements (M1 )αβ , (H1 )αβ one obtains that vαβ is zero unless ναx = νβx ≡ νx , ναy = νβy ≡ ν y and the polarization type (, ⊥) is the same. On the simplifying assumption it √ 2 = λ . So only the follows that ναz = ∓ν y , νβz = ∓ν y , and λα = nc ν y 2π β L quasimodes of the same angular frequency are coupled. The complete expression for the coupling constant (see Dalton et al. (1999d)) simplifies vαβ 1 (n 2 − 1) 2π = sin νy d 2 2L L 2 c √ (n − 1) (poln(α) = poln(β) =) × 2 1 (poln(α) = poln(β) =⊥) n for the appropriate values of α and β. 3 Macroscopic Theories and Their Applications Sums over quasimodes α with the same νx , ν y reduce to sums over ν y and the sign of νz . The sums over ν y can be converted to integral over k y using the prescription νy L → 2π dk y . Let us note that c √ (n 2 − 1) (poln =) L 1 (n 2 − 1) 2 vαβ → sin k y d . 1 (poln =⊥) 2π 2 4π n The application of quantum scattering theory to the beam splitter is justified in the usual situation where integrated one-photon and two-photon detection rates are finite for incident light field states of interest (Dalton et al. 1999d). With modification made above we have not yet rederived the results of these authors. Also we are afraid that the appropriate operators do not converge in the rotatingwave approximation when only two directions of the incident light are considered. 3.5.2 Mode Functions for Fabry–Perot Cavity Dalton and Knight (1999a,b) have given a justification of the standard model of cavity quantum electrodynamics in terms of a quasimode theory of macroscopic canonical quantization. The quasimodes are treated for the representative case of the three-dimensional Fabry–Perot cavity. The form of the travelling and trapped mode functions for this cavity is derived in Dalton and Knight (1999a) and the mode–mode coupling constants are calculated in Dalton and Knight (1999b). The weak dependence of the coupling constants on the mode frequency difference demonstrates that the conditions for Markovian damping of the cavity quasimode are satisfied. We will speak of the atom–field interaction in the following subsection. A standard model used in cavity quantum electrodynamics and laser physics may be pictured as follows. An optical cavity is produced by a perfect mirror and a semi-transparent mirror. Radiative atoms are located in the optical cavity. The atoms are coupled directly to a cavity quasimode, whose mode function is nonzero inside the cavity and zero outside, with an atom–cavity coupling constant g. The cavity quasimode decays via Markovian damping with a rate constant Γc to certain external quasimodes, the mode functions of which are nonzero outside the cavity and zero inside, and which have the same axial wave vector as the cavity quasimode. Also the atom can decay directly via the Markovian damping with a rate constant Γ0 to certain external quasimodes, with nonaxial wave vectors. The standard model may be specified as a typical cavity model, the threedimensional planar Fabry–Perot cavity. The cavity region I lies between a perfect Quasimode Theory mirror in the z = +l plane and a thin layer of dielectric material with dielectric constant κ = n 2 of thickness d, located between the z = 0 and z = −d planes (region II). The external region III lies between the dielectric layer plane at z = −d and a second perfect mirror in the z = −(L + d) plane. The external region length L is much greater than the cavity length l, and both are larger than the dielectric layer thickness d. It is assumed that the three regions constitute a rectangular cuboid with bound aries also at x = ± L2 , y = ± L2 . The mode functions and the necessary partial derivatives of these functions must have the period L in x and y, i.e. be invariant to transition from the plane x = − L2 to the plane x = + L2 and from the plane y = − L2 to the plane y = + L2 . The electric permittivity function (R) for the true cavity is given as ⎧ ⎨ 0 for −(L + d) ≤ z < −d, −d ≤ z < 0, (R) = κ0 for ⎩ 0 ≤ z ≤ l. 0 for An artificial cavity is described by a modified thickness d˜ and a modified refractive ˜ For the quasi-cavity the electric permittivity function ˜ (R) is given as index n. ⎧ ˜ ≤ z < −d, ˜ ⎨ 0 for −(L + d) ˜ (R) = κ −d˜ ≤ z < 0, ˜ 0 for ⎩ 0 ≤ z ≤ l, 0 for where the dielectric constant κ˜ is related to the refractive index n˜ through κ˜ = n˜ 2 . One makes the thin, strong dielectric approximation (Dalton and Knight 1999a). In both cases μ = μ0 everywhere, as there are no magnetic media involved. Obviously, the general form of the true mode functions and the quasimode functions is the same. We may interpret the notation of the true mode functions as general as far as it is useful. Each wave vector k is written in terms of its axial component k z ez and its transverse component kτ , kτ = k x ex + k y e y . It can be derived that the mode functions have the form Ak (R) = exp[ikτ · (xex + ye y )]Zk (z). Here it has been assumed that in (3.692) 2π , νx = 0, ±1, ±2, . . . , L 2π k y = ν y , ν y = 0, ±1, ±2, . . . . L k x = νx 3 Macroscopic Theories and Their Applications In (3.693) ⎧ ⎨ αi ei exp(ikiz z) + αr er exp(ikrz z) region III, Zk (z) = βt et exp(iktz z) + βs es exp(iksz z) region II, ⎩ γu eu exp(ikuz z) + γv ev exp(ikvz z) region I, where αi , αr , βt , βs , γu , and γv are coefficients. The quantities, which are contained in relations (3.693) and (3.695), are defined with respect to the and ⊥ polarizations. Dalton and Knight (1999a,b) investigated not only the travelling modes but also the trapped modes. We refer to the original work for the latter. In the polarization case we have the following parameters. In region III (n = ez ) the wave vector of the forward-propagating wave is ki = kω [τ sin(θ1 ) + n cos(θ1 )], where ωk , c τ = ex cos φ + e y sin φ kω = (3.697) (3.698) and that of the backward-propagating wave is kr = kω [τ sin(θ1 ) − n cos(θ1 )]. The polarization vectors are ei = τ cos(θ1 ) − n sin(θ1 ), er = −τ cos(θ1 ) − n sin(θ1 ). (3.700) (3.701) The coefficients are αi αr ai ar α0 , ai ar 1 = 2i exp[ikω (L + d) cos(θ1 )] exp[−ikω (L + d) cos(θ1 )] and α0 will be characterized below. In region II the wave vectors are kt = nkω [τ sin(θ2 ) + n cos(θ2 )], ks = nkω [τ sin(θ2 ) − n cos(θ2 )], (3.704) (3.705) Quasimode Theory where Snell’s law holds, n sin(θ2 ) = sin(θ1 ). et = τ cos(θ2 ) − n sin(θ2 ), es = −τ cos(θ2 ) − n sin(θ2 ). (3.707) (3.708) The polarization vectors are The coefficients are βt βs bt bs 1 = 2 bt bs β0 exp(iξ0 ), exp(iφ0 ) − exp(−iφ0 ) and β0 exp(iξ0 ) (β0 ≥ 0) will be characterized below. In region I the wave vectors are ku = ki , kv = kr . eu = ei , ev = er . The polarization vectors are The coefficients are gu gv γu γv 1 = 2i gu gv γ0 , exp[−ikω l cos(θ1 )] exp[ikω l cos(θ1 )] and γ0 will be characterized in what follows. We have concentrated on the quasimodes. We have calculated the dependence of α0 and β0 on γ0 and that of γ0 and β0 on α0 . We have assumed that the γ0 dependence is useful for γ0 α0 and the α0 dependence is useful for α0 γ0 . We obtain that α0 ≈ cos[kω (L + d + l) cos(θ1 )] γ0 (3.715) − Λ cos(θ1 ) cos[kω (L + d) cos(θ1 )] sin[kω l cos(θ1 )], 3 Macroscopic Theories and Their Applications where Λ = n 2 kω d 1, cos(θ1 ) β0 exp(iξ0 ) ≈− sin[kω l cos(θ1 )]. γ0 cos(θ2 ) We should also assume that sin[kω l cos(θ1 )] ≈ 0 or a resonant field. We obtain that γ0 ≈ cos[kω (l + L + d) cos(θ1 )] α0 − Λ cos(θ1 ) cos[kω l cos(θ1 )] sin[kω (L + d) cos(θ1 )], cos(θ1 ) β0 exp (iξ0 ) ≈ sin[kω (L + d) cos(θ1 )]. α0 cos(θ2 ) (3.717) (3.718) We should also assume that cos[kω l cos(θ1 )] ≈ 0 or an external field far from resonance. In the ⊥ polarization case we have the following parameters. In region III the wave vectors are given in relations (3.696) and (3.699). The polarization vectors are ei = er = σ , σ =τ ×n = ex sin φ − e y cos φ. (3.720) (3.721) The coefficients are given in relation (3.702), where 1 exp[ikω (L + d) cos(θ1 )] ai = ar 2i − exp[−ikω (L + d) cos(θ1 )] and α0 will be characterized below. In region II the wave vectors are given in relations (3.704) and (3.705). The polarization vectors are et = es = σ . The coefficients are given in relation (3.709), where 1 bt exp(iφ0 ) = bs 2 exp(−iφ0 ) and β0 exp(iξ0 ) (β0 ≥ 0) will be characterized below. In region I the wave vectors are given in relation (3.711). The polarization vectors are eu = ev = ei . Quasimode Theory The coefficients are introduced in relation (3.713), where 1 exp[−ikω l cos(θ1 )] gu = gv 2i − exp[ikω l cos(θ1 )] and γ0 will be characterized in what follows. We obtain that α0 ≈ cos[kω (L + d + l) cos(θ1 )] γ0 [cos(θ2 )]2 −Λ cos[kω (L + d) cos(θ1 )] sin[kω l cos(θ1 )], cos(θ1 ) β0 exp(iξ0 ) ≈ − sin[kω l cos(θ1 )]. γ0 (3.727) (3.728) We should also assume that sin[kω l cos(θ1 )] ≈ 0 or a resonant field. We obtain that γ0 ≈ cos[kω (l + L + d) cos(θ1 )] α0 [cos(θ2 )]2 −Λ cos[kω l cos(θ1 )] sin[kω (L + d) cos(θ1 )], cos(θ1 ) β0 exp (iξ0 ) ≈ sin[kω (L + d) cos(θ1 )]. α0 (3.729) (3.730) We should also assume that cos[kω l cos(θ1 )] ≈ 0 or an external field far from resonance. These values have been obtained using wave optics, in which, for instance, the coefficients in region III are connected to those in region I by the relation γ αi =T u , (3.731) αr γv with 1 1 − 12 iΛ cos(θ1 ) iΛ cos(θ1 ) 2 1 − 2 iΛ cos(θ1 ) 1 + 12 iΛ cos(θ1 ) for polarization and 0 T≈ 2 )] 1 − 12 iΛ [cos(θ cos(θ1 ) 2 1 2 )] iΛ [cos(θ 2 cos(θ1 ) 2 )] − 12 iΛ [cos(θ cos(θ1 ) 2 )] 1 + 12 iΛ [cos(θ cos(θ1 ) for ⊥ polarization. The field must fulfil the boundary conditions ar αi − ai αr = 0, gv γu − gu γv = 0 for both polarizations. 3 Macroscopic Theories and Their Applications The eigenfrequencies ωk are given as solutions of the transcendental equation −ar ai T gu gv = 0, where relation (3.697) and the relations cos(θ2 ) = |kτ |2 1 − 2 2 , cos(θ1 ) = n kω |kτ |2 kω2 may be mentioned, and |kτ |2 is a parameter. To achieve an approximate normalization of the near-resonant mode, we put 2 |γ0 | = (L )2l0 independent of the polarization. When the external field is off a resonance, we put |α0 | = 2 (L )2 L (3.738) 0 independent of the polarization. The coupling constants between different quasimodes are calculated according to relations (3.636) and (3.637). The notation Uα (R) should be used for the quasimode functions which Aα (R) still denotes. Dalton and Knight (1999b) have found that Mαβ = Hαβ = 0 if ναx = νβx or ναy = νβy . They have also found that Mαβ and Hαβ are zero for quasimodes of different polarizations. They have shown that the coupling problem for modes in a threedimensional Fabry–Perot cavity is equivalent to a similar problem in a one-dimensional Fabry–Perot cavity. The selection rules allow coupling between axial cavity quasimodes and axial external quasimodes. Coupling constants between a single axial cavity quasimode and axial external quasimodes depend on the external quasimode frequency slowly. One may conclude that the conditions for irreversible Markovian damping of the cavity quasimode are satisfied. Analyses of cavities have been mentioned in the beginning of Section 3.3.1. Barnett and Radmore (1988) have shown that even the mode-strength function which characterizes the true modes may be approximated using quasimodes and a phenomenological coupling. They have concluded that the approximation is good if the cavity is of sufficiently high quality and if the precise spatial dependence of the field does not weigh. Quasimode Theory In the work of Garraway (1997a,b), atom-true field mode couplings are used as a basis for the pseudomode model. In certain situations quasimodes can be associated with pseudomodes (Dalton et al. 3.5.3 Atom–Field Interaction Within Cavity First we complete what is needed for the description of a system of radiative atoms interacting with the electromagnetic field in the presence of a neutral dielectric medium on the assumptions made before. We add that the radiative atoms are stationary and electrically neutral. The radiative atoms possess charge density ρL (R, t), current density jL (R, t), polarization density PL (R, t), and magnetization density ML (R, t) given in terms of the positions rξ α (t) and velocities r˙ ξ α (t) of the charged particles forming the radiative atoms, qξ α δ R − rξ α (t) , ρL (R, t) = ξ,α jL (R, t) = qξ α δ R − rξ α (t) r˙ ξ α (t). Here ξ = 1, 2, . . . lists different radiative atoms and α = 1, 2, . . . lists different particles within atom ξ . qξ α , Mξ α are the charge and mass for the ξ α particle, respectively. One defines PL (R, t) = qξ α [rξ α (t) − Rξ (t)] × δ R − Rξ (t) − u[rξ α (t) − Rξ (t)] du, 1 ˙ ξ (t)] qξ α u[rξ α (t) − Rξ (t)] × [˙rξ (t) − R ML (R, t) = ξ,α × δ R − Rξ (t) − u[rξ α (t) − Rξ (t)] du 1 ˙ ξ (t) qξ α [rξ α (t) − Rξ (t)] × R + × δ R − Rξ (t) − u[rξ α (t) − Rξ (t)] du, where Rξ (t) is the position of the centre of mass of the atom ξ , whose mass is Mξ . In the generalized Coulomb gauge condition, the scalar potential φ(R, t) satisfies a generalized Poisson equation ∇ · [(R)∇φ(R, t)] = −ρL (R, t). 3 Macroscopic Theories and Their Applications The macroscopic Lagrangian is given by the relation 1 Mξ α r˙ 2ξ α (t) − Vcoul (t) + Lc (R, t) d3 R, L (t) = 2 ξ,α where the Lagrangian density Lc (R, t) is given by the relation 1 1 ∂A(R, t) 2 Lc (R, t) = (R) − [∇ × A(R, t)]2 2 ∂t 2μ(R) ∂A(R, t) − PL (R, t) · + ML (R, t) · ∇ × A(R, t). ∂t In the Lagrangian Vcoul (t) is the Coulomb energy given by the relation [(R)∇φ(R, t)]2 3 Vcoul (t) = d R 2(R) and PL (R, t) is the reduced polarization density: PL (R, t) = PL (R, t) − (R)∇φ(R, t). The conjugate momentum field Π(R, t) is obtained from the Lagrangian density Lc (R, t) as Π(R, t) = (R) ∂A(R, t) − PL (R, t) ∂t and the particle momenta are obtained from L (t) as pξ α (t) = Mξ α r˙ ξ α (t) 1 + qξ α uB Rξ (t) + u[rξ α (t) − Rξ (t)], t du × [rξ α (t) − Rξ (t)]. 0 (3.749) The multipolar Hamiltonian is H (t) = p2ξ α (t) PL 2 (R, t) 3 dR 2Mξ α 2(R) ξ,α 2 Π (R, t) [∇ × A(R, t)]2 + + d3 R 2(R) 2μ(R) Π(R,t) · PL (R, t) 3 + d R − [∇ × A(R, t)] · ML (R, t) d3 R (R) qξ2α 1 + u∇ × A Rξ (t) + u rξ α (t) − Rξ (t) , t du 2Mξ α 0 ξ,α 2 , (3.750) × rξ α (t) − Rξ (t) + Vcoul (t) + Quasimode Theory where the reduced magnetization density ML (R, t) is given as ML (R, t) = qξ α u[rξ α (t) − Rξ (t)] × pξ α (t) Mξ α × δ R − Rξ (t) − u[rξ α (t) − Rξ (t)] du. We have property (3.586). The reduced polarization density is given in terms of true mode functions as PL (R, t) = (R)Ak (R) PL (R , t) · A∗k (R ) d3 R . Using expansion (3.602), one can write Lagrangian (3.744), with the Lagrangian density (3.745) as follows: L (t) = 1 Mαξ r˙ 2αξ (t) − Vcoul (t) 2 α,ξ + + − qαξ r˙ αξ (t) · uB Rξ (t) + u rξ α (t) − Rξ (t) , t du × rξ α (t) − Rξ (t) 1 ˙∗ ˙ β (t) − 1 Q ∗ (t)Vαβ Q β (t) Q α (t)(W−1 )αβ Q 2 α,β 2 α,β α ˙ ∗α (t)Nα (t), Q where N(t) = K∗ L(t), ˜ (R) ∗ L α (t) = U (R) · PL (R, t) d3 R. (R) α Making the choice (3.608) for K (Dalton et al. 1999b), one has N(t) = M−1 L(t). The generalized momentum coordinates Pα (t) for the electromagnetic field are Pα (t) = ˙ β (t) − Nα (t). (M−1 )αβ Q β 3 Macroscopic Theories and Their Applications The Hamiltonian is given by the relation H (t) = p2αξ (t) α,ξ + + + Vcoul (t) + 1 ∗ N (t)Mαξ Nξ (t) 2 α,ξ α 1 ∗ 1 ∗ Pα (t)Mαβ Pβ (t) + Q (t)Vαβ Q β (t) 2 α,β 2 α,β α Pα∗ (t)Mαβ Nβ (t) + 1 ∗ Q (t)(M−1 )αβ Rβ (t) 2 α,β α 1 ∗ + Q (t)X αβ (t)Q β (t), 2 α,β α where X(t) = M−1 D(t)M−1 , ˜ (R) ∗ ML (R, t) · ∇ × (3.759) Uβ (R) d3 R, (R) 2 1 1 qμξ Dαβ (t) = u u δ R − Rξ (t) − u rμξ (t) − Rξ (t) Mμξ 0 0 μ,ξ × δ R − Rξ (t) − u rμξ (t) − Rξ (t) d u du 2 ˜ (R) ˜ (R) ∗ × rμξ (t) − Rξ (t) U (R) · ∇ × Uβ (R) ∇× (R) α (R) ˜ (R) ∗ − rμξ (t) − Rξ (t) · ∇ × Uα (R) rμξ (t) − Rξ (t) (R) ˜ (R) (3.760) Uβ (R) d3 R d3 R , · ∇× (R) Rβ (t) = − ˜ (R) where ∇ × (R) Uβ (R) means that the vector R and the derivatives with respect to its components are replaced by the vector R and the derivatives with respect to the components of R , respectively. The terms in the Hamiltonian are the particle kinetic energy, Coulomb energy, polarization energy, radiative energy (two terms), electric interaction energy, magnetic interaction energy, and diamagnetic energy. The reduced polarization density can be expanded in terms of the quasimode functions ˜ (R)Uα (R) as PL (R, t) = (M−1 L(t))α ˜ (R)Uα (R). α Quasimode Theory The quantization for the radiative atom charged particles is the familiar prescriptions involving Hermitian operators rαξ (t) → rˆ αξ (t), pαξ (t) → pˆ αξ (t), with the usual commutation rules applying. The full quantum multipolar Hamiltonian is ˆ (t) = H ˆ 2 PL (R, t) 3 dR 2M 2(R) ξα ξα 1ˆ † ˆ ˆ μα Aα (t) Aα (t) + 1 + 2 α 0 1 √ Vαβ ˆ †α (t) A ˆ β (t) A + ηα ηβ Mαβ + √ 2 α,β ηα ηβ pˆ 2ξ α (t) + Vˆ coul (t) + 0 1 Vα,−β √ ˆ †β (t) + H.c. ˆ †α (t) A + − ηα ηβ Mα,−β + √ A 4 α,β ηα ηβ α=β ˆ Π(R, t) · Pˆ L 2 (R, t) 3 ˆ ˆ L (R, t) d3 R ∇ × A(R, t) · M d R− (R) qξ2α 1 ˆ Rξ (t)1ˆ + u rˆ ξ α (t) − Rξ (t)1ˆ , t du + u∇ × A 2Mξ α 0 ξ,α 2 , (3.763) × rˆ ξ α (t) − Rξ (t)1ˆ ˆ (R, t) are given by equations (3.761) and (3.751) in the operator where Pˆ L (R, t), M L form. The polarization energy term and the Coulomb energy term can be combined to be equal to the sum of intra-atomic Coulomb and polarization energy terms plus intra-atomic contact energy terms. One has (Dalton and Babiker 1997) Vˆ coul (t) + ˆ 2 PL (R, t) 3 IA IA (t) + Vˆ pol (t) + Vˆ cont (t). d R = Vˆ coul 2(R) One defines ρLξ (R, t) by relation (3.740) with the sum over ξ omitted and PLξ (R, t) by (3.741) with the same modification. One modifies (3.743) and (3.747) as (3.765) ∇ · (R)∇φξ (R, t) = −ρLξ (R, t) and PLξ (R, t) = PLξ (R, t) − (R)∇φξ (R, t) , 3 Macroscopic Theories and Their Applications respectively. In (3.764) IA (t) Vˆ coul IA Vˆ pol (t) = (R)∇ φˆ ξ (R, t) · (R)∇ φˆ ξ (R, t) 3 d R, 2(R) Pˆ Lξ (R, t) · Pˆ Lξ (R, t) d3 R, Pˆ Lξ (R, t) · Pˆ Lη (R, t) d3 R. 2(R) ξ,η Vˆ cont (t) = ξ =η To obtain the electric dipole approximation one neglects the magnetic and diamagnetic interaction energy terms. The polarization density is given in its dipolar approximation ˆ ξ (t)δ R − Rξ (t) , (3.770) μ Pˆ L (R, t) = ξ ˆ ξ (t) is the dipolar moment for the ξ atom. The reduced polarization density where μ becomes ˜ Rξ (t) μ ˆ ξ (t) · U∗β Rξ (t) ˜ (R)Uα (R). Pˆ L (R, t) = (M−1 )αβ (3.771) Rξ (t) ξ,α,β The atom–electromagnetic field interaction energy then assumes the forms 1 ηα ˜ Rξ (t) E1 Vˆ A−F (t) = i 2 R (t) ξ,α ξ ˆ α (t)μ ˆ †α (t)μ ˆ ξ (t) · Uα Rξ (t) − A ˆ ξ (t) · U∗α Rξ (t) , (3.772) × A ˆ Rξ (t), t ˆ ξ (t) · Π μ . (3.773) = Rξ (t) ξ The quantum Hamiltonian in the electric dipole approximation and rotating wave approximation is E1 RWA ˆ E1,RWA ˆ A (t) + H ˆ Q (t) + Vˆ A−F H (t) = H (t) + Vˆ Q−Q (t), with ˆ A (t) = H pˆ 2ξ α (t) ξ,α 2Mξ α IA IA + Vˆ coul (t) + Vˆ pol (t). Quasimode Theory The coupling constants describing energy exchange processes between a radiative atom placed in the cavity and nonaxial external quasimodes vary slowly with the external quasimode frequency. It follows that Markovian spontaneous emission damping occurs for the radiative atoms. On the contrary, their coupling with the (axial) cavity quasimodes consists in reversible photon exchanges as characterized through one-photon Rabi frequencies. In the analysis, the standard model in cavity quantum electrodynamics has been considered. In the model the basic processes are described by a cavity damping rate, a radiative atom spontaneous decay rate, and an atom–cavity mode coupling constant. This model has been justified in terms of the quasimode theory of macroscopic canonical quantization (Dalton and Knight 1999a,b). 3.5.4 Several Sets of Quasimodes The quasimode theory of macroscopic quantization (Dalton et al. 1999b,c) has been generalized (Brown and Dalton 2001a,b). The generalization allows for the case where two or more quasipermittivities are introduced, along with their associated sets of quasimode functions. This suggests problems such as reflection and refraction at a dielectric boundary, the linear coupler, and the coupling of two optical cavities. The theory comprises the above relations (3.579), (3.744), (3.745), (3.746), (3.747), (3.748), (3.586), and (3.750). In some situations, such as a single laser cavity or a beam splitter, it suffices to consider just a single quasipermittivity function in order to obtain suitable quasimode functions (Dalton et al. 1999b,c). A full quasimode treatment of the beam splitter has been given in Dalton et al. (1999d). In other situations, the linear coupler (Lai et al. 1991) being an example, it is appropriate to construct quasimode functions via the introduction of two distinct quasipermittivities, each with its own set of associated mode functions. We assume N sets of quasimode functions Uα(l) (R) (l = 1, . . . , N ), which are defined as the solutions of N separate Helmholtz equations involving the quasiper˜ (l) (R), respectively. With λα(l) the angumittivities and quasipermeabilities ε˜ (l) (R), μ lar frequency of the (l, α) quasimode, relations (3.594), (3.595), and (3.596) are generalized, ∇× 1 μ ˜ (l) (R) [∇ × Uα(l) (R)] = (λα(l) )2 ˜ (l) (R)Uα(l) (R), ∇ · ˜ (l) (R)Uα(l) (R) = 0, ˜ (l) (R)Uα(l)∗ (R) · Uβ(l) (R) d3 R = δαβ . (3.776) (3.777) (3.778) Expansion of the vector potential A(R) directly in terms of the quasimode functions Uα(l) (R) is not possible. Instead we can write (R) (l,m) (m) Q α(l) (t)K αβ Uβ (R), A(R, t) = ˜ (R) l,m α,β 3 Macroscopic Theories and Their Applications which involves a double sum over all quasimodes. The square matrix K has become a block matrix K, composed of K(l,m) . The quasimode functions Uα(l) (R) for N > 1 are not all linearly independent. The set of quasimodes arising from N different quasipermittivities is overcomplete. It is solved by following an analogy with the theory of linear combinations of atomic orbitals (Coulson 1952). In that theory only the lower energy atomic orbitals are included. Here only quasimodes with the frequencies are retained that are important for the quantum optics system. When applying the theory to situations where the true or quasi permittivities and permeabilities contain discontinuities, space integrals are cured with excluding infinitesimal volumes containing these discontinuities from their domain. Relation (3.753) is generalized, L (t) = 1 Mαξ r˙ 2αξ (t) − Vcoul (t) 2 α,ξ 1 + qαξ r˙ αξ (t) · uB Rξ (t) + u[rξ α (t) − Rξ (t)], t du α,ξ × [rξ α (t) − Rξ (t)] 1 (l)∗ 1 ˙ (l)∗ (l,m) ˙ (m) (l,m) (m) Q (t)Vαβ Q β (t) + Q α (t)(W−1 )αβ Q β (t) − 2 l,m α,β 2 α,β α ˙ α(l)∗ (t)Nα(l) (t). (3.780) − Q α Some of the matrices become block matrices of the form (given for an arbitrary case Y) ⎞ ⎛ (1,1) (1,2) Y . . . Y(1,N ) Y ⎜ Y(2,1) Y(2,2) . . . Y(2,N ) ⎟ ⎟ ⎜ (3.781) Y=⎜ . .. . . .. ⎟ ⎝ .. . . . ⎠ Y(N ,1) Y(N ,2) . . . Y(N ,N ) and column block matrices of the form (given for an arbitrary case C) ⎛ (1,N ) ⎞ C ⎜ C(2,N ) ⎟ ⎜ ⎟ C = ⎜ . ⎟. ⎝ .. ⎠ C(N ,N ) A matrix Y becomes the block matrix Y and a matrix C becomes the column block matrix C, but the notation is not changed. Relations (3.606), (3.607), and (3.754) are generalized ˜ (l) (R) (l)∗ ˜ (m) (R) (m) (l,m) = (R) (3.783) Mαβ Uα (R) · U (R) d3 R, (R) (R) β Quasimode Theory 1 ˜ (l) (R) (l)∗ = ∇× U (R) μ(R) (R) α ˜ (m)(R) (m) U (R) d3 R, · ∇× (R) β (l) ˜ (R) (l)∗ U (R) · PL (R, t) d3 R. L α(l) (t) = (R) α (l,m) Hαβ Relation (3.756) is generalized, (l,m) ˙ (m) Pα(l) (t) = (M−1 )αβ Q β (t) − Nα(l) (t). j (3.784) (3.785) Relation (3.757) is generalized: H (t) = p2αξ (t) α,ξ + Vcoul (t) + 1 (l)∗ (l,m) (m) N (t)Mαξ Nβ (t) 2 l,m α,ξ α 1 l∗ 1 (l)∗ (l,m) (m) (l,m) (m) Pα (t)Mαβ Pβ (t) + Q (t)Vαβ Q β (t) 2 l,m α,β 2 l,m α,β α (l,m) (m) Pα(l)∗ (t)Mαβ Nβ (t) + l,m α,β 1 (l)∗ (l,m)∗ (m) + Q (t)(M−1 )αβ Rβ (t) 2 l,m α,β α + 1 (l)∗ (l,m) Q (t)X αβ (t)Q (m) β (t). 2 l,m α,β α The block matrix X(t) is given by (3.758), where the notation has the actual meaning. Relations (3.760) and (3.759) are generalized, 1 1 2 qμξ (l,m) u u δ R − Rξ (t) − u[rμξ (t) − Rξ (t)] Dαβ (t) = Mμ,ξ 0 0 μ,ξ × δ R − Rξ (t) − u [rμξ (t) − Rξ (t)] du du ˜ (m) (R) (m) ˜ (l) (R) (l)∗ 2 U (R) · ∇ × U (R) × [rμξ (t) − Rξ (t)] ∇ × (R) α (R) β ˜ (l) (R) (l)∗ − [rμξ (t) − Rξ (t)] · ∇ × Uα (R) [rμξ (t) − Rξ (t)] (R) (m) ˜ (R) (m) (3.788) U (R) d3 R d3 R , · ∇× (R) β ˜ (l) (R) (l)∗ Rα(l) (t) = − ML (R, t) · ∇ × (3.789) Uα (R) d3 R. (R) 3 Macroscopic Theories and Their Applications Relation (3.761) is generalized: (M−1 L(t))α(l) ˜ (l) (R)Uα(l) (R). PL (R, t) = Replacements (3.615), (3.616) are adapted, ˆ α(l) (t), Q α(l)∗ (t) → Q ˆ α(l)† (t), Q α(l) (t) → Q Pα(l) (t) → Pˆ α(l) (t), Pα(l)∗ (t) → Pˆ α(l)† (t). (3.791) (3.792) The nonzero equal-time commutators are (cf. (3.617)) (m)† ˆ α(l) (t), Pˆ β [Q ˆ α(l)† (t), Pˆ β(m) (t)]. (t)] = iδlm δαβ 1ˆ = [ Q The electromagnetic field is real, which somewhat complicates the quantization (Brown and Dalton 2001a). Relations (3.618) and (3.619) are generalized, (l,l) ˆ (l) (l,l) ˆ (l) ˆ α(l)† (t)Vαα ˆ Q (t) = 1 Pα (t) + Q Q α (t) , Pˆ α(l)† (t)Wαα H 2 l α 1 Vˆ Q−Q (t) = 2 × (l,m) ˆ (m) (l,m) ˆ (m) ˆ α(l)† (t)Vαβ Pˆ α(l)† (t)Wαβ Pβ (t) + Q Q β (t) . (3.795) l,m α,β (l,α)=(m,β) Relations (3.621), (3.622), and (3.623) are generalized, ηα(l) ˆ (l) 1 ˆ (l) Q α (t) + i Pα (t), 2 2ηα(l) (l) η 1 ˆ (l)† α (l)† (l)† ˆ (t) − i ˆ α (t) = Q A Pα (t), 2 α 2ηα(l) ˆ α(l) (t) = A where ηα(l) (l,l) Vαα (l,l) Wαα It follows simply from (3.793) that these annihilation and creation operators obey the following equal-time nonzero commutation relations: (m)† ˆ ˆ β (t)] = δlm δαβ 1. ˆ α(l) (t), A [A Quasimode Theory Relations (3.624) and (3.625) are generalized, ˆ (l) (l)† ˆ −α Aα (t) + A (t) , (l) 2ηα ηα(l) ˆ (l) (l)† ˆ −α (t) . Aα (t) − A Pˆ α(l) (t) = −i 2 ˆ α(l) (t) Q Relation (3.626) is generalized, 1 ˆ (l) (l)† (l) ˆ ˆ ˆ HQ (t) = Aα (t) Aα (t) + 1 μα , 2 α l Relation (3.620) is generalized, / μα(l) (l,l) (l,l) Wαα Vαα . Let us recall that non−RWA RWA (t) + Vˆ Q−Q (t), Vˆ Q−Q (t) = Vˆ Q−Q RWA (t) is generalized, where Vˆ Q−Q RWA (t) = Vˆ Q−Q 2 × l,m α,β (l,α)=(m,β) ⎞ (l,m) / V αβ (l,m) ˆ (m) ˆ α(l)† (t) A ⎝ ηα(l) ηβ(m) Mαβ ⎠A +/ β (t), (3.805) (l) (m) ηα ηβ non−RWA and Vˆ Q−Q (t) is also generalized, non−RWA (t) = Vˆ Q−Q 4 l,m α,β ⎤ ⎞ (l,m) / V α,−β ⎠ ˆ (l)† (l,m) ˆ (m)† ⎦ × ⎣⎝− ηα(l) ηβ(m) Mα,−β +/ Aα (t) A β (t) + H.c. . (l) (m) ηα ηβ (3.806) Relations (3.630) and (3.631) are generalized, ˜ (m) (R) ˆ A(R, t) = 2ηα(l) (R) l,m α,β (l,m) ˆ (l) (l,m)∗ ˆ (l)† Aα (t)U(m) Aα (t)U(m)∗ × K αβ β (R) + K αβ β (R) , 3 Macroscopic Theories and Their Applications ˆ Π(R, t) = −i ηα(l) (l) ˜ (R) 2 (l) ˆ α(l)† (t)Uα(l)∗ (R) . ˆ α (t)Uα(l) (R) − A × A The theory comprises the above relations (3.764), (3.767), (3.768), and (3.769). Relation (3.772), which comprises the annihilation and creation operators, is generalized, ˜ (l) (R) (R) Rξ (t) ξ l,α ˆ α(l) (t)μ ˆ α(l)† (t)μ ˆ ξ (t) · Uα(l) Rξ (t) − A ˆ ξ (t) · Uα(l)∗ Rξ (t) . (3.809) × A E1 (t) = −i Vˆ A−F ηα(l) 2 Let us recall that in the rotating wave and electric dipole approximations we can write the quantum Hamiltonian as E1 RWA ˆ A (t) + H ˆ Q (t) + Vˆ A−F ˆ E1,RWA (t) = H (t) + Vˆ Q−Q (t). H (l,m) (l,m) The coupling constants Mαβ , Vαβ can be calculated from the matrices M and H using relations (3.609) and (3.610) (Brown and Dalton 2001a). For the usual case where μ ˜ (l) (R)=μ(R) and the overlap between the set l and the set m of mode functions is small, to good accuracy we have (l,m) Mαβ (l,m) ≈ Vαβ 1, i = j, α = β, (l,m) (l,m) (M1 )αβ + (M2 )αβ , otherwise, λα(l)2 , i = j, α = β, (l,m) (l,m) (H1 )αβ + (H2 )αβ , otherwise. Relations (3.636), (3.637) are generalized and also modified, (l,m) = (M1 )αβ (l,m) (M2 )αβ ˜ (R) ˜ (l) (R) −1 (R) ˜ (m) (R) − 1 Uα(l)∗ (R) (R) 3 · U(m) β (R) d R, ˜ (R) ˜ (m) (R) = ˜ (R) + − 1 Uα(l)∗ (R) (R) (R) 3 · U(m) β (R) d R, Quasimode Theory (l) 1 ˜ (R) (l)∗ = ∇× − 1 Uα (R) μ(R) (R) (m) ˜ (R) (m) − 1 Uβ (R) d3 R, · ∇× (R) ˜ (l) (R) ˜ (m) (R) 1 (m)2 (m)2 ˜ (R) λα =− + λβ 2 (R) (R) (l,m) (H1 )αβ (l,m) (H2 )αβ 3 × Uα(l)∗ (R) · U(m) β (R) d R. Relation (3.632) is generalized, μα(l) ≈ λα(l) and relation (3.633) is also generalized, (l,l) . μα(l) ≈ λα(l) + vαα Relation (3.634) is generalized, RWA Vˆ Q−Q (t) ≈ (l,m) ˆ (l)† ˆ (m) Aα (t) A vαβ β (t). l,m α,β (l,α)=(m,β) Relation (3.635) is generalized, (l,m) = vαβ / (l,m) (l,m) λα(l) λ(m) ) + (M ) (M 1 αβ 2 αβ β (l,m) (l,m) − (H2 )αβ (H1 )αβ / . λα(l) λ(m) β The foregoing theory has been applied to reflection and refraction at a dielectric interface (Brown and Dalton 2001b). The true mode approach has continued the previous literature, e.g. Allen and Stenholm (1992) or Carniglia and Mandel (1971). The analysis has been very thorough including the quantum scattering theory in the Heisenberg picture. The behaviour of the intensity for a localized one-photon wave packet has been examined, which has exhibited agreement with the classical laws of reflection and refraction. Such an accord is described also by the quantum theory based on a microscopic model of the dielectric media (Hynne and Bullough 1990). Here we will expound the quasimode approach in part. We shall assume that space has been divided into two regions. Region 1, which is formed by the points with z ≥ 0, is assumed to be filled with linear, homogeneous dielectric material of refractive index n 1 . Region 2, which consists of the points with z < 0, is assumed to contain material obeying the same restrictions, but with refractive index n 2 . The permittivity function for the system, (z), is then 3 Macroscopic Theories and Their Applications (z) = n 21 0 , z ≥ 0, n 22 0 , z < 0. We could try to use two quasipermittivity functions, one being n 21 0 in all space and the other being n 22 0 in all space. With the two values, two sets of plane waves are associated. The union of these sets does not enjoy the mutual orthogonality of all functions. Instead we choose two sets of quasimodes, each set effectively being restricted to just one of the regions. At a closer look, the functions are not confined in one region, but are evanescent in the other region. An effective mutual orthogonality is present. A completeness of the union of these sets is also available. The spatially confined nature of these types of mode functions is also used when applying a quantum scattering theory approach to energy transfer from one region to another. For the reflection and refraction problems we choose the two quasipermittivity functions (Brown and Dalton 2001b), 2 n 1 0 , z ≥ 0, (3.822) ˜ (1) (z) = (n˜ 2 )2 0 , z < 0, (n˜ 1 )2 0 , z ≥ 0, (3.823) ˜ (2) (z) = n 22 0 , z < 0, where the quasirefractive indices n˜ 1 and n˜ 2 are positive constants which fulfil n˜ 1 , n˜ 2 1. The vanishingly small refractive index in one region means that all incident waves in the other region except some with angles of incidence smaller than the critical angle produce only an evanescent wave in the region with negligible refractive index. From the generalized Helmholtz equation (3.776), we can determine the form (2) of the quasimode functions U(1) α (R) and Uα (R), which are associated with the quasipermittivities ˜ (1) (z) and ˜ (2) (z), respectively. We will treat the case of outof-plane polarization. The quasimode functions are to an excellent approximation given by the formulae (1) U(1) α (R) ≈ Nα / √ (1) r˜1∗ exp(ik(1) × αi · R) + r˜1 exp(ikαr · R) Θ(z) / · R) [1 − Θ(z)] σ, + t˜1 r˜1∗ exp(ik(1) αt (2) U(2) α (R) ≈ Nα / √ (2) r˜2∗ exp(ik(2) × αi · R) + r˜2 exp(ikαr · R) [1 − Θ(z)] / · R)Θ(z) σ, (3.825) + t˜2 r˜2∗ exp(ik(2) αt Quasimode Theory where (l) (l) + 1 + i tan(θ˜αi ) tanh(θ˜αt ) ,, r˜l = , , , (l) (l) ,1 − i tan(θ˜αi ) tanh(θ˜αt ), t˜l = 2 , (l) (l) ˜ 1 − i tan(θαi ) tanh(θ˜αt ) with (l) tan(θ˜αi )= (l) |(kαi )τ | , (l) (kαi )z / (l) (l) 2 |(kαi )τ |2 − |kαt | (l) , + for l = 1, − for l = 2, )=± tanh(θ˜αt (l) |(kαi )τ | n˜ 2 (1) n˜ 1 (2) |k(1) |k |, |k(2) |k |. (3.827) αt | = αt | = n 1 αi n 2 αi Here Θ(z) is the step function and Nα(l) are normalization constants appropriate to the case where + the evanescent wave has been neglected. We note that r˜l are complex units and t˜l r˜l∗ = |t˜l |, which may justify an alternative phase factor. √ The usual formulae are obtained by multiplying the right-hand sides by r˜l , which also simplifies them. Approximate expressions are obtained also by considering r˜l = −1 and t˜l = 0. On the modification the two sets of the functions are continuous contrary to the notation. It is obvious that the subscript α should be replaced by a variable ki(l) . The corresponding z-component should be negative for l = 1 and it should be positive for l = 2. In the case of the continuous sets, the scalar product of the functions simplifies to a Dirac delta function when we choose Nα(l) = 1 2π 1 √ . n l 0 As this value does not depend on ki(l) , the subscript α has been preserved. We have concluded that the discrete sets are not always obtained so easily as desired. The continuous sets of the mode functions in the case where the evanescent waves have been neglected are discretized easily. The use of quantization box of volume L 3 is (l) (l) and kαr obey immediate. We assume that the propagation vectors kαi (l) (l) (l) kαi,X = kαr,X = Nα,X 2π , L (l) are integers. We should complete the case X = z. where X = x, y and Nα,X 3 Macroscopic Theories and Their Applications For X = z the quantization box should not suggest the periodic boundary condition. We assume that the mode function vanish for z = 0 and z = L. Then of course (l) (l) (l) = −kαr,z = Nα,z kαi,z π . L (l) The integer Nα,z should be negative for l = 1 and it should be positive for l = 2. The modes with in-plane polarization are not considered for simplicity (Brown and Dalton 2001b). Chapter 4 Microscopic Theories A divergence from the macroscopic theories emerges, when the polarization of the medium is described by separate equations. In the framework of this approach the electric permittivity of the medium can be derived. The description of the fields can be quantized. It seems that the separation of the equations for the medium polarization is not a sufficient ground for the theory to be considered microscopic, but we adopt this nomenclature. It is important that the motion of the medium polarization may be damped and losses may be included. A quantum noise is considered for the field commutators not to depend on the time. Many papers have been devoted to the Green-function approach to the quantization of the electromagnetic field in a medium. As this theory rather begins with a quantum noise, it differs formally from the method of continua of harmonic oscillators. The equivalence between the Green-function approach and the method of continua for media suitable for both approaches has been demonstrated, however. The Green-function approach has been elaborated on for various media, only the inclusion of a nonlinearity of the medium was under development in the course of writing this book. The magnetic properties are usually neglected, but they must be included in the phenomenological quantum description of negative-index materials. Even though the Casimir effect is not regularly connected with the propagation, an expression of the noise which is quantal in essence fits in the framework of the electromagneticfield quantization. 4.1 Method of Continua of Harmonic Oscillators Many scientists would call the following exposition a macroscopic theory for a lossy medium, whereas we refer to a microscopic theory. A standard microscopic approach is expected from the quantum theory of solids. Still, in the quantum theory of solids, continua of harmonic oscillators have been considered, on which we shall concentrate ourselves in what follows. In the framework of this model, one can see the presence and correlation of fluctuations of the electric-field strength in the vacuum state of the field and the ground state of the matter. A. Lukˇs, V. Peˇrinov´a, Quantum Aspects of Light Propagation, C Springer Science+Business Media, LLC 2009 DOI 10.1007/b101766 4, 4 Microscopic Theories 4.1.1 Dispersive Lossy Homogeneous Linear Dielectric Huttner and Barnett (1992a) have started from the observation that the macroscopic approach to the theory of the electromagnetic field in a medium is a quantization scheme that does accept dispersion, but not losses. So it does not deal with a fundamental property of the susceptibility, the Kramers–Kronig relations. Losses in quantum mechanics are treated by coupling to a reservoir, and thus a quantization scheme to describe the losses must introduce the medium explicitly. A rigorous treatment is contained in the book of Klyshko (1988). Huttner and Barnett (1992a) use the model of Hopfield (1958) and Fano (1956), having first treated the quantization of light in a purely dispersive dielectric (Huttner et al. 1991) using a simple version of this model (Kittel 1987). Their analysis is restricted to a one-dimensional model and to transverse electromagnetic fields. After introducing the Lagrangian densities, the effect of choice of the type of coupling between light and matter on the definition of the conjugate variables for the components of the vector potential is discussed. The matter is not quite identical with the reservoir, but couplings rather form a chain, the radiation is coupled to the matter (it is a field again) and the matter is coupled to the reservoir (it is a field of the dimension increased by unity). Diagonalization by the Fano technique is performed (Fano 1961, Barnett and Radmore 1988), cf., (Rosenau da Costa et al. 2000). Huttner and Barnett (1992a) work, as usual, with fields in the reciprocal space. Only at the very beginning the radiation and the matter in the direct space, and the reservoir in the Cartesian product of the direct and a reciprocal space are considered. The total transition to a direct space for the reservoir is not usual, but is conceivable. The description of the matter and reservoir is first diagonalized. This diagonalˆ ization gives rise to the (dressed) matter-field B(k, ω), whose operator exhibits the same dependence on the wave vector and the frequency as the operator of the reservoir field. It is proven that also the coupling constant dependent on at least the frequency of the reservoir “elementary” mode fulfils the assumptions for further ˆ diagonalization. This diagonalization gives origin to the field C(k, ω) for polaritons. The operator of this field shares the dependence on the wave vector and the frequency with the operator of the reservoir field. In contrast with the vacuum theory (the theory of the electromagnetic field in a vacuum) a macroscopic field emerges this way whose operator depends also on the frequency. The vector potential depends on the spatial coordinate and the time as usual and it has the form of the integral of the vector potential for a unit density of polaritons with the wave vector k and the frequency ω multiplied by the polariton operator ˆ C(k, ω). The appropriate relation contains the complex relative permittivity of the medium (ω) as a linear transform of the coupling constant g(ω) between the light ˆ and the dressed matter-field B(k, ω). The complex relative permittivity (ω) fulfils the Kramers–Kronig relations. ˆ Taking into account the frequency decompositions of the fields E(x, t) and ˆ B(x, t) (see, Huttner and Barnett (1992b)), one can introduce, in an “almost” Method of Continua of Harmonic Oscillators conventional manner, the positive and negative propagating components. These differ almost negligibly due to the imaginary part of the refractive index (see (3.506), (3.507), or (3.79)). That is to say that the fields cˆ + (x, ω) and cˆ − (x, ω) are introduced. ˆ Respecting the frequency decomposition of the field D(x, t), the fields cˆ ± (x, ω) ˆ are used and the spatial Langevin force f (x, ω) is introduced. Using these definitions, two Maxwell equations are transformed onto two spatial Langevin equations. The equal-space commutation relations between the operators at the frequencies ω and ω can also be derived. From this, simple equal-space commutation relations between the operators in the “application” times s and s follow, 1 cˆ ±dir (x, s) = √ 2π cˆ ± (x, ω) exp(−iωs) dω, and that may be why Huttner and Barnett (1992a) name the papers devoted to the phenomenological approach to quantization (Levenson et al. 1985, Potasek and Yurke 1987, Caves and Crouch 1987, Lai and Haus 1989, Huttner et al. 1990). Let us note that Huttner and Barnett (1992b) in the introduction mention also the popular approach (Huttner et al. 1990), in which spatial progression equations are derived and quantization of the field is performed imposing the equal-space commutation relations. In contrast with the macroscopic theories this technique is not derived from a Lagrangian and has not been justified in terms of a canonical scheme. In Huttner and Barnett (1992b) the derivation of such equal-space commutation relations is provided in the case of a linear dielectric. The canonical scheme and losses cannot be easily unified, but this has been solved in Huttner and Barnett (1992b). The one-dimensional model has been expanded to three dimensions. The Hamiltonian is first derived, then diagonalized, and the expansions of the field operators are transformed. The propagation of light in the dielectric is analysed, the field is expressed in terms of space-dependent amplitudes, and their spatial equations of evolution are obtained. Huttner and Barnett (1992b) have started the canonical quantization from a Lagrangian density L = Lem + Lmat + Lres + Lint , where −E B 0 ; ˙ <= > 2 1 ; <= > 2 (∇ × A) = (A + ∇U ) − 2 2μ0 is the electromagnetic part which is expressed in terms of the vector potential A and the scalar potential U , Lmat = ρ ˙2 (X − ω02 X2 ) 2 4 Microscopic Theories is the polarization part, modelled by a harmonic-oscillator field X of frequency ω0 (the polarization field), ρ ∞ ˙2 (Yω − ω2 Y2ω ) dω (4.5) Lres = 2 0 is the reservoir part, comprising a field Yω of the continua of harmonic oscillators of frequencies ω, used to model the losses (reservoirs), and ∞ ˙ + U ∇ · X) − X · ˙ ω dω v(ω)Y (4.6) Lint = −α(A · X 0 is the interaction part with coupling constants α and v(ω). The interaction between the light and the polarization field has the coupling constant α and the interaction between the polarization field and other oscillator fields used to model the losses has the coupling constant v(ω). In general, α could be a tensor. The displacement field is defined by D(r, t) = 0 E(r, t) − αX(r, t). As U˙ does not appear in the Lagrangian, U is not a proper dynamical variable, but it can be written in terms of the proper dynamical variable X. The former has an integral expression and that is why we go to the reciprocal space. For example the electric field is written as 1 E(k, t)eik·r d3 k. (4.8) E(r, t) = 3 (2π) 2 We shall underline the newly introduced quantities in order to differentiate between the quantities in real and reciprocal spaces. Let us recall that E∗ (k, t) = E(−k, t). It comprises both the annihilation and the creation operators, see below. The total Lagrangian can be written in the form (Lem + Lmat + Lres + Lint ) d3 k, (4.9) L= where the prime means that the integration is restricted to half the reciprocal space and the Lagrangian densities become Lem = 0 (|E|2 − c2 |B|2 ), ˙ 2 − ω02 |X|2 ), Lmat = ρ(|X| ∞ ˙ ω |2 − ω2 |Yω |2 ) dω, Lres = ρ (|Y 0 ˙ +A·X ˙ ∗ + ik · (U ∗ X − U X∗ )] Lint = −α[A∗ · X ∞ ˙ω +X·Y ˙ ∗ω ) dω. − v(ω)(X∗ · Y 0 Method of Continua of Harmonic Oscillators As usual in quantum optics, we choose the Coulomb gauge, k · A(k, t) = 0, so that the vector potential A is a purely transverse field. The scalar potential in the reciprocal space U (k, t) = i α 0 κ · X(k, t) , k where κ is a unit vector in the direction of k. The polarization field X and other oscillator fields Yω (the matter fields) are decomposed into transverse and longitudinal parts. For example X can be written as X(k, t) = X⊥ (k, t) + X (k, t)κ and Yω can be expressed similarly. The total Lagrangian can then be written as the sum of two independent parts. The transverse part contains only transverse fields and is ⊥ ⊥ ⊥ ⊥ 3 (L⊥ (4.13) L = em + Lmat + Lres + Lint ) d k, where 2 2 2 ˙ 2 L⊥ em = 0 (|A| − c k |A| ), ⊥ ⊥ 2 2 ˙ 2 L⊥ mat = ρ(|X | − ω0 |X | ), ∞ ⊥ 2 2 2 ˙⊥ (|Y L⊥ ω | − ω |Yω | ) dω, res = ρ 0 ∞ ⊥∗ ⊥∗ ˙ ⊥ ˙ L⊥ = − αA · X + v(ω)X · Y dω + c. c. int ω The longitudinal part, containing only longitudinal fields, is also given in Huttner and Barnett (1992b). It can be derived that D is a purely transverse field. For convenience, one can restrict oneself to transverse components of other fields and omit the superscript ⊥ . Unit polarization vectors eλ (k), λ = 1, 2, are introduced, which are orthogonal to k and to one another, and the transverse fields are decomposed along them to get A(k, t) = Aλ (k, t)eλ (k) and similar expressions for the other fields. L can now be used to obtain the components of the conjugate variables for the fields −0 E λ ≡ ∂L ˙ λ, = 0 A λ∗ ˙ ∂A 4 Microscopic Theories Pλ ≡ ∂L λ = ρ X˙ − α Aλ , λ∗ ˙ ∂X Q λω ≡ δL λ = ρ Y˙ ω − v(ω)X λ . λ∗ ˙ δY ω The famous ambiguity is worth mentioning: The conjugate of A can be −0 E (with the coupling αρ A · P), as well as −D (with the coupling E · X). Thus any gauge determines a type of coupling. The Hamiltonian for the transverse fields is (Hem + Hmat + Hint ) d3 k, (4.19) H= where Hem = 0 |E|2 + c2 k˜ 2 |A|2 is the / electromagnetic energy density, k˜ being defined by k˜ = ωc c (4.20) + k 2 + kc2 with kc ≡ α2 , ρc2 0 Hmat = |P|2 + ρ ω˜ 02 |X|2 ρ . ∞ - |Q |2 v(ω) ∗ ω 2 2 + + ρω |Yω | + (X · Qω + c. c.) dω ρ ρ 0 is the energy density of the matter fields, including the interaction between the polarD∞ 2 ization and the reservoirs and ω˜ 02 ≡ ω02 + 0 [v(ω)] dω is the renormalized frequency ρ2 of the polarization field, α (4.22) Hint = (A∗ · P + c. c.) ρ is the interaction energy between the electromagnetic field and the polarization. Part 2 of the interaction energy with the matter, namely αρ |A |2 , has already been classified into (4.20). Fields are quantized in a standard fashion (Cohen-Tannoudji et al. 1989) by postulating equal-time commutation relations between the variables and their conjugates i ˆ δλλ δ(k − k )1, 0 λ λ ∗ ˆ [ Xˆ (k, t), Pˆ (k , t)] = iδλλ δ(k − k )1, λ ˆ (k, t), Eˆ [A λ ∗ (k , t)] = − ˆ ˆ (k , t)] = iδλλ δ(k − k )δ(ω − ω )1, [Yˆ ω (k, t), Q ω (4.23) (4.24) (4.25) where all quantized operators are denoted by a caret. As usual, the annihilation operators are introduced. Method of Continua of Harmonic Oscillators 0 ˜ ˆ λ λ kc A (k, t) − i Eˆ (k, t) , ˜ 2kc ρ i ˆλ λ ˆ ˆb(λ, k, t) = ω˜ 0 X (k, t) + P (k, t) , 2ω˜ 0 ρ ρ 1 ˆλ λ ˆ ˆbω (λ, k, t) = −iωY ω (k, t) + Q ω (k, t) 2ω˜ ρ ˆ a(λ, k, t) = (4.26) (4.27) (4.28) From the equal-time commutation relations for the fields (4.23), (4.24), (4.25), the equal-time commutation relations for the creation and annihilation operators ˆ ˆ [a(λ, k, t), aˆ † (λ , k , t)] = δλλ δ(k − k )1, ˆ ˆ [b(λ, k, t), bˆ † (λ , k , t)] = δλλ δ(k − k )1, † [bˆ ω (λ, k, t), bˆ ω (λ , k , t)] = δλλ δ(ω − ω )δ(k − k )1ˆ are obtained. The normally ordered Hamiltonian for the transverse fields is ˆ mat + H ˆ int , ˆ =H ˆ em + H H where ˆ em = H ˜ aˆ † (λ, k, t)a(λ, ˆ kc k, t) d3 k, ˆ mat = H ˆ k, t) + ω˜ 0 bˆ † (λ, k, t)b(λ, ∞ 0 ωbˆ ω† (λ, k, t)bˆ ω (λ, k, t) dω ˆ V (ω) b(λ, k, t)bˆ ω† (λ, k, t) † ˆ + b (λ, −k, t)bω (λ, k, t) + H. c. dω d3 k, ˆ =i Λ(k) a(λ, k, t)bˆ † (λ, k, t) 2 λ=1,2 + aˆ † (λ, −k, t)bˆ † (λ, k, t) + H. c. d3 k, ˆ† ˆ int H where V (ω) = v(ω) ρ ω , ω˜ 0 Λ(k) ≡ ω˜ 0 ckc2 , k˜ and the k integration has been restored to the full reciprocal space. It is worth mentioning that the Maxwell–Lorentz equations can be derived from the Hamiltonian. It is important that the matter can be formally decoupled from the reservoir by the Fano technique and a dressed matter field obtained. Following Fano (1961), the polarization and reservoir parts of the Hamiltonian can be diagonalized. The dressed 4 Microscopic Theories ˆ matter field creation and annihilation operators Bˆ † (λ, k, ω, t) and B(λ, k, ω, t) are introduced, respectively, which satisfy the usual equal-time commutation relations, ˆ ˆ (4.34) [ B(λ, k, ω, t), Bˆ † (λ , k , ω , t)] = δλλ δ(k − k )δ(ω − ω )1, ˆ ˆ B(λ, k, ω, t) = α0 (ω)b(λ, k, t) + β0 (ω)bˆ † (λ, −k, t) ∞ + α1 (ω, ω )bˆ ω (λ, k, t) 0 † + β1 (ω, ω )bˆ ω (λ, −k, t) dω . (4.35) The coefficients α0 (ω), β0 (ω), α1 (ω, ω ), β1 (ω, ω ) are defined as follows. It is interesting that the diagonalization is performed once for the polarization and reservoir parts of the Hamiltonian and once for the total Hamiltonian. The Hamiltonian expressed in the modal annihilation operators is considered. From relation (4.35) it can be seen that the diagonalization is performed independently for every pair of the counterpropagating modes of the polarization field (“the modes” here are only formally similar to those of the electromagnetic field) and that it is performed using a Bogoliubov transformation. A useful definition of an “eigenoperator” is presented Barnett and Radmore (1988), ˆ ˆ ˆ mat ] = ω B (λ, k, ω, t). [ B(λ, k, ω, t), H The coefficients of the Bogoliubov transformation are calculated, that is to say the formulae ω + ω˜ 0 V (ω) , (4.37) α0 (ω) = 2 2 ω − ω˜ 02 z(ω) where z(ω) is defined by ∞ 1 V(ω ) dω lim , 2ω˜ 0 ε→+0 −∞ ω − ω + iε ω − ω˜ 0 V (ω) , β0 (ω) = 2 2 ω − ω˜ 02 z(ω) ω˜ 0 V ∗ (ω ) V (ω) , α1 (ω, ω ) = δ(ω − ω ) + 2 ω − ω − i0 ω2 − ω˜ 02 z(ω) z(ω) = 1 − β1 (ω, ω ) = ω˜ 0 2 V (ω ) ω + ω V (ω) − ω˜ 02 z(ω) (4.38) (4.39) (4.40) are derived. In the study no constant V(ω) ≡ |V (ω)|2 occurs. As usual with the substitutions, we are also interested in the inverse transformation. It is given by the relations ∞ ∗ ˆ ˆ α0 (ω) B(λ, k, ω, t) − β0 (ω) Bˆ † (λ, −k, t, ω) dω (4.42) b(λ, k, t) = 0 Method of Continua of Harmonic Oscillators bˆ ω (λ, k, t) = 0 ˆ α1∗ (ω , ω) B(λ, k, ω , t) − β1 (ω , ω) Bˆ † (λ, −k, ω , t) dω . (4.43) The conditions I ≡ I (ω, ω ) ≡ ∞ 0 |α0 (ω)|2 − |β0 (ω)|2 dω = 1, α1∗ (ν, ω)α1 (ν, ω ) − β1 (ν, ω)β1∗ (ν, ω ) dν = δ(ω − ω ) (4.45) for the coefficients of the Bogoliubov transformation seem to be familiar. It has been shown that the diagonalization cannot be performed on the common assumption of white noise (the Markov-type coupling). It is commented on free charges and a conducting medium being beyond the scope of Huttner and Barnett (1992a). We need not grieve for the assumption of the white noise. Without it, we are farther from the original Lorentzian formulation, nothing more. The diagonalization of the total Hamiltonian is formally very similar to √ the diagonalization of its matter part. A dimensionless coupling constant ζ (ω) = i ω˜ 0 [α0 (ω)+ β0 (ω)] is defined and the annihilation operators are introduced (by a Fano type of technique), ˆ ˆ k, t) + β˜ 0 (k, ω)aˆ † (λ, −k, t) C(λ, k, ω, t) = α˜ 0 (k, ω)a(λ, ∞ ˆ + k, ω , t) α˜ 1 (k, ω, ω ) B(λ, 0 + β˜ 1 (k, ω, ω ) Bˆ † (λ, −k, ω , t) dω , where the coefficients α˜ 0 (k, ω), β˜ 0 (k, ω), α˜ 1 (k, ω, ω ), and β˜ 1 (k, ω, ω ) are rather complicated and are derived in the form ˜ ωc2 ω + kc ζ (ω) , (4.47) α˜ 0 (k, ω) = 2 ˜ ˜kc 2 ω − k 2 c2 z˜ (k, ω) where ωc2 z˜ (k, ω) = 1 − ˜ 2 2(kc) or alternatively α˜ 0 (k, ω) = ωc2 ˜ kc ε→+0 −∞ |ζ (ω )|2 ω − ω + iε dω , ˜ ω + kc ζ (ω) , 2 ∗ (ω)ω2 − k 2 c2 where the complex relative permittivity (ω) is introduced (ω) = 1 + 1 2 2 k c − (k 2 c2 + ωc2 )˜z ∗ (k, ω) , independent of k, ω2 4 Microscopic Theories ˜ ζ (ω) ω − kc , 2 ∗ (ω)ω2 − k 2 c2 ω2 ζ ∗ (ω ) ζ (ω) , α˜ 1 (k, ω, ω ) = δ(ω − ω ) + c ∗ 2 ω − ω − i0 (ω)ω2 − k 2 c2 β˜ 0 (k, ω) = ωc2 ˜ kc (4.51) (4.52) and ω2 β˜ 1 (k, ω, ω ) = c 2 ζ ∗ (ω ) ω − ω − i0 ζ (ω) . ∗ (ω)ω2 − k 2 c2 ˆ The operators C(λ, k, ω, t) and Cˆ † (λ, k, ω, t) also satisfy the usual commutation relations, ˆ [C(λ, k, ω, t), Cˆ † (λ , k , ω , t)] = δλλ δ(k − k )δ(ω − ω )1ˆ and being operators for eigenmodes, ˆ ˆ ] = ωC(λ, ˆ [C(λ, k, ω, t), H k, ω, t), they have a harmonic time dependence ˆ ˆ C(λ, k, ω, t) = C(λ, k, ω, 0)e−iωt . The vector potential is now given by ˆ t) = A(r, ∞ × 0 ωc2 eλ (k) 20 λ=1,2 ζ ∗ (ω) −i(ωt−k·r) ˆ C(λ, k, ω, 0)e + H. c. dω d3 k. (4.57) ω2 (ω) − k 2 c2 ˆ t), Relation (4.8) being modified for the operators expresses the operators A(r, ˆ ˆ ˆ ˆ ˆ ˆ ˆ E(r, t),... in terms of the operators A(k, t), X(k, t), Yω (k, t), E(k, t), P(k, t), Qω (k, t). ˆ ˆ Let us note that on the substitution into (4.8) for A(k, t), E(k, t),... by the relations [ˆa(k, t) + aˆ † (−k, t)], ˜ 2kc0 ˜ kc ˆ E(k, t) = i [ˆa(k, t) + aˆ † (−k, t)], 20 ... ... ..., ˆ t) = A(k, Method of Continua of Harmonic Oscillators ˆ t), bˆ ω (k, t) and aˆ † (k, t), bˆ † (k, t), the annihilation and creation operators aˆ (k, t), b(k, ˆb†ω (k, t), respectively, are introduced. On the substitution into the intermediate result ˆ t) and bˆ ω (k, t) by relations (4.42) and (4.43), the operators for the operators b(k, ˆB(k, ω, t) are introduced. On the substitution into the intermediate result for the operators aˆ (k, t) by the relation (the slightly modified relation (4.2) from Huttner and Barnett (1992b)) aˆ (k, t) = ˆ ˆ † (−k, ω, t) dω α˜ 0∗ (k, ω)C(k, ω, t) − β˜ 0 (k, ω)C ˆ and for the operators B(k, ω, t) by the relation ˆ B(k, ω, t) = 0 ˆ ˆ † (−k, ω, t) dω , ω, t) − β˜ 1 (k, ω , ω)C α˜ 1∗ (k, ω, ω )C(k, (4.60) ˆ the operators C(k, ω, t) are introduced, which have the time dependence (4.56). ˆ On the substitution into the intermediate result for C(k, ω, 0) by relation (4.46), ˆ the operators aˆ (k, 0) and B(k, ω, 0) are introduced and on the substitution into the ˆ intermediate result for the operators B(k, ω, 0) by relation (4.35), the operators ˆb(k, 0) and bˆ ω (k, 0) are introduced. On the substitution into the intermediate result ˆ 0), for these operators by the formulae (4.26), (4.27), and (4.28), the operators A(k, ˆ ˆ ˆ ˆ ˆ X(k, 0), Yω (k, 0), E(k, 0), P(k, 0), Qω (k, 0) are introduced. On the substitution into the intermediate result for these operators by the relations ˆ t)e−ik·r d3 r, A(r, 3 (2π) 2 ˆ t)e−ik·r d3 r, ˆE(k, t) = 1 3 E(r, (2π) 2 ... ... ..., ˆ t) = A(k, ˆ 0), E(r, ˆ 0),... are introduced. These substitutions solve the Cauchy the operators A(r, or initial problem. Huttner and Barnett (1992b) restrict themselves first to a one-dimensional case when describing the propagation in the dielectric. The vector potential is considered in the simpler form 1 ˆ A(x, t) = √ 4π A(ω) cˆ + (x, ω, 0)e−iωt + cˆ − (x, ω, 0)e−iωt + H. c. dω, where A(ω) = η(ω) , 0 Scω|n(ω)|2 4 Microscopic Theories S is a cross-sectional area, n(ω) is the complex refractive index defined as the square root of the relative permittivity (ω) with a positive real part η(ω), and the operators cˆ ± (x, ω, t) are ˆ ω, t)eikx Im{K (ω)} iφ(ω) ∞ C(k, cˆ ± (x, ω, t) = e dk, (4.64) π K (ω) ∓ k −∞ where the complex wave number K (ω) and the phase factor eiφ(ω) are expressed as n(ω)ω , c ζ ∗ (ω) |n(ω)| = , |ζ (ω)| n K (ω) = and Im{K (ω)} > 0. Since the magnetic field can be expressed similarly as the vector potential, the spatial quantum Langevin equations of progression can be obtained as + ∂ cˆ ± (x, ω, t) = ±iK (ω) ˆc± (x, ω, t) ± 2Im{K (ω)} ˆf (x, ω, t), ∂x where the Langevin-noise operator is ˆf (x, ω, t) = − √ i eiφ(ω) 2π ˆ C(k, ω, t)eikx dk and it also enters a rather similar expression for the electric displacement operator. Equation (4.67) have been obtained from the Maxwell equations for the monochromatic fields. Huttner and Barnett (1992b) remind of the simple commutation relations ˆ [ ˆf (x, ω, t), ˆf † (x , ω , t)] = δ(x − x )δ(ω − ω )1, further of the equal-space commutation relations † ˆ [ˆc± (x, ω, t), cˆ ± (x, ω , t)] = δ(ω − ω )1, † [ˆc± (x, ω, t), cˆ ∓ (x, ω , t)] = 0ˆ and, finally, that cˆ + (x, ω, t) commutes with all the Langevin operators ˆf (x , ω , t) and ˆf † (x , ω , t) for all x > x, while cˆ − (x, ω, t) commutes with all the Langevin operators ˆf (x , ω , t) and ˆf † (x , ω , t) for all x < x. Jeffers and Barnett (1994) modelled the propagation of squeezed light through an absorbing dispersive dielectric medium. Hradil (1996) considered “lossless” dispersive dielectrics, i.e. dielectrics with a thin absorption line. He formulated a canonical quantization of the electromagnetic field in a closed Fabry–P´erot resonator with a dispersive slab. Method of Continua of Harmonic Oscillators Wubs and Suttorp (2001) have solved the initial-value problem for the dampedpolariton model formulated by Huttner and Barnett (1992a,b) and have found that for long times all field operators can be expressed in terms of the initial reservoir operators. They have investigated the transient dynamics of the spontaneousemission rate of a guest atom in an absorbing medium. Hillery and Drummond (2001) have studied the scattering of the quantized electromagnetic field from a linear dispersive dielectric in the limit of “thin” absorption lines. The field is represented by means of the dual vector potential. Input–output relations are unitary and no additional quantum-noise terms are required. Equations specialized to the case of a dielectric layer with a uniform density of oscillators are usual expressions. Janowicz et al. (2003) analyse radiative heat transfer between two dielectric bodies. Quantization of the electromagnetic field in inhomogeneous, dispersive, and lossy dielectrics is performed with the help of a procedure which is still attributed to Huttner and Barnett (1992b). Expectation value of the Poynting vector operator is computed. To this end, two techniques suitable for nonequilibrium processes are utilized: the Heisenberg equation of motion and the diagrammatic Keldysh procedure. It is remarked that in nonlinear models the Keldysh formalism provides a framework for the perturbation expansion. The calculations fit into the development of the theory of thermal scanning microscopy. 4.1.2 Correlation of Ground-State Fluctuations The quantization of the radiation imbedded in a dielectric with a space-dependent refractive index has been expounded in the book by Vogel and Welsch (1994). A canonical quantization scheme for radiation fields in linear dielectrics with a space-dependent refractive index has been developed by Kn¨oll et al. (1987) and later by Glauber and Lewenstein (1991). For application, see, for example, Kn¨oll et al. (1986, 1990, 1991), Kn¨oll and Welsch (1992) and a related work (Kn¨oll and Leonhardt 1992). Gruner and Welsch (1995) have contributed to the stream of papers aiming at a description of quantum properties of the dispersive and lossy dielectrics including the vacuum fluctuations, i.e. fluctuations of radiation field in the ground state of the coupled light–matter system. They study it in terms of a symmetrized correlation function. They try to expound and supplement the paper by Huttner and Barnett (1992b) from the point of view of the quantization of the phenomenological Maxwell theory. First, the quantization of radiation in a dispersive and lossy dielectric is performed. This begins from the classical Maxwell equations (3.176) with (3.177) and a constitutive relation comprising an integral term, D(r, t) = 0 E(r, t) + 0 χ (τ )E(r, t − τ ) dτ , 4 Microscopic Theories is transformed into the Fourier space to yield D(r, ω) = 0 (ω)E(r, ω) and the Helmholtz equation is presented. The Huttner–Barnett quantization scheme is introduced with a diagonalized Hamiltonian ∞ ˆ ˆ = ωCˆ † (λ, k, ω)C(λ, k, ω) dω d3 k, (4.73) H λ=1,2 which comprises a sum over λ which is absent from relation (3.14) of Huttner and Barnett (1992b). The effect of the medium is entirely determined by the complex permittivity (ω). It still has no tensorial character. Let us remember relations (4.3), ˆ (4.5), (4.6), and (4.7) of Huttner and Barnett (1992b). / In these relations, C(k, ω) √ 2 ω c ˆ ζ ∗ (ω) = ω Im{(ω)} utishould be replaced by C (λ, k, ω) and the identity 2 lized. The frequency-dependent field operators are introduced in the three-dimensional case. Not only the equal-time commutation relations, but even the most general ones are presented. The vector-field operators aˆ (r, ω) and ˆf(r, ω) have been introduced, the vector aˆ (r, ω) being a generalization of the component cˆ (x, ω) from Huttner and Barnett (1992b). The definition of the transverse δ function is presented as δi⊥j (r) 1 = (2π)3 ∞ −∞ ki k j δi j − 2 k eik·r d3 k, which can be interpreted also as a transverse projection of the columns of 3 × 3 identity matrix multiplied by the δ function. The transverse projection of the columns of an identity matrix multiplied by other functions has proved to be useful, for example, the commutators [aˆ i (r, ω), aˆ j (r , ω )] can be expressed in terms of such projections. Here, the operator Δi j , Δi j F(r) = ∞ −∞ δi⊥j (r − r )F(r ) d3 r , is applied (the putting of an identity matrix to be multiplied by a function and subsequent transverse projection of the columns), but very complicated expressions are obtained. Continuing the use of the operators aˆ (r, ω) and ˆf(r, ω), an analogue of relation (5.21) of Huttner and Barnett (1992b) (cf. (4.62) here) has been written. Then, analogues of their frequency decompositions of the vector-potential operators, electricfield strength operators, etc. have been presented. The operator constitutive equation (in the Fourier space) ˆ ω) − 0 F(ω)ˆf(r, ω), ˆ ω) = 0 (ω)E(r, D(r, Method of Continua of Harmonic Oscillators where 0 F(ω) = 0 Im{(ω)} π differs from the classical equation (4.72) by an additional term. On substituting into the phenomenological Maxwell equations, the partial differential equation for the operator aˆ (r, ω) is obtained which is a Helmholtz equation with a right-hand side. In the three-dimensional case, there exists no decomposition into first-order equations. The canonical commutation relations are ˆ ˆ i (r, t), Eˆ j (r , t)] = − i δi⊥j (Δr)1, [A 0 Δr = r − r . with the abbreviation A test of the consistency of the theory in the limit (ω) → 1 has been accomplished. ˆ Let us recall the usual annihilation operators a(λ, k), which satisfy the commutation relations ˆ [a(λ, k), aˆ † (λ , k )] = δλλ δ(k − k )1ˆ ˆ 0). and enter the expansion for A(r, ˆ The operators aˆ (r, ω) and ˆf(r, ω) derived from C(λ, k, ω) are not independent operators. Cf., Huttner and Barnett (1992b) who in the one-dimensional case introduce forward and backward-propagating fields and show that such a definition ensures the causal (one-sided) independence of the respective operators of the operator ˆf(r, ω). In the three-dimensional case, there exists no generalization of relation (4.64) and no equation for such quantities. The theory is applied to the determination of the correlation of the ground-state fluctuations of the electric-field strength. The symmetric correlation function of the electric-field strength K mn (Δr, τ ) = 1 ˆ 0| E m (r, t + τ ) Eˆ n (r + Δr, t) 2 + Eˆ n (r + Δr, t) Eˆ m (r, t + τ ) |0 is considered. We remark that K mn (Δr, τ ) = 4π 2 c2 × 0 n R (ω) ω exp − ωc n I (ω) cos(ωτ )Δi j sin n R (ω)Δr dω, n R (ω)Δr c 0 4 Microscopic Theories where Δr = |Δr| and + n I (ω) = Im{ (ω)} = Im{n(ω)}, + n R (ω) = Re{ (ω)} = Re{n(ω)}. (4.83) (4.84) Restricting attention to optical frequencies within an interval of the width 2Δω, ω0 − Δω < ω < ω0 + Δω, Δω 1, ω0 (4.85) (4.86) where ω0 is an appropriately chosen centre frequency and assuming that dispersion and absorption are small on lengths of order of β −1 , β= ω n R (ω), c and times of order of ω−1 , Gruner and Welsch (1995) let (βΔr )−1 1, (ωτ )−1 1. (4.88) (4.89) Further, they assume a transparent medium, such as a fibre, for which it may be justified to put approximately ω , ω0 n I (ω) ≈ n I (ω0 ) ≡ n I0 . n R (ω) ≈ n R0 + n R1 (4.90) (4.91) The influence of absorption, phase, and group velocities and group velocity dispersion on the dynamics of the field fluctuations within a frequency interval (4.85) has been studied. The absorption causes a spatial decay of the correlation of the field fluctuations. The light cone of strong correlation, which in empty space is determined by the speed of light in vacuum, is now given by the group velocity in the medium provided that the spatial distance is not too large. With increasing distance, also the dispersion of the group velocity needs a consideration. 4.2 Green-Function Approach On allowing for a frequency-dependent complex permittivity that is consistent with the Kramers–Kronig relations and introducing a random operator noise source associated with the absorption of radiation, the classical Maxwell equations can be considered as quantum operator equations. Their solution based on a Green-function Green-Function Approach expansion of the vector-potential operator seems to be a natural generalization of the mode expansion applicable to source-free radiation in nearly lossless dielectrics. 4.2.1 Dispersive Lossy Linear Inhomogeneous Dielectric Gruner and Welsch (1996a) have expounded a quantization scheme which starts with phenomenological Maxwell equations instead of Lagrangian densities and is consistent with the Kramers–Kronig relations and the familiar (equal-time) canonical commutation relations for the vector potential and electric field. This is realized for homogeneous and inhomogeneous, especially, multilayered dielectrics. In the phenomenological classical Maxwell theory, the equations comprise (ω), the frequency-dependent complex relative permittivity introduced phenomenologically. This function has the analytical continuation in the upper complex half-plane, (Ω), which satisfies the relation (−Ω∗ ) = ∗ (Ω). The real and imaginary parts of the relative permittivity satisfy the well-known Kramers–Kronig relations ∞ Im{(ω )} 1 V.p. dω , π −∞ ω − ω ∞ 1 Re{(ω )} − 1 Im{(ω)} = − V.p. dω , π ω − ω −∞ Re{(ω)} − 1 = (4.93) (4.94) where V.p. is the principal value of the integral. The quantization scheme is based on the Helmholtz equation with the source term ˆ˜ ω) = ˆ˜j (r, ω), ˆ˜ ω) + K2 (ω)A(r, ΔA(r, n ˆ˜ ω) is the “Fourier transform” of the (known) operator vector-potential where A(r, ˆ t) and ˆ˜jn (r, ω) is the “Fourier transform” of the operator-noise current. In fact, A(r, from the exposition it can be seen that the vector-potential operator is introduced by the relation ∞ ˆ˜ ω, 0) dω + H. c., ˆ 0) = A(r, (4.96) A(r, 0 where quantum mechanically also the frequency-dependent operators can be time dependent, t = 0. When Im{(ω)} > 0, a hypothetical addition of a nontrivial solution of the homogeneous Helmholtz equation would violate the boundary condition at infinity. 4 Microscopic Theories ˆ˜ ω)≡ A(r, ˆ˜ ω, t) is uniquely determined by a linear transforHence, the operator A(r, mation of the source operator ˆ˜jn (r, ω)≡ ˆ˜jn (r, ω, t). This operator can be chosen in the form (cf., Gruner and Welsch (1995)) ˆ˜j (r, ω) = F(ω) ω ˆf(r, ω), n c2 ˆ is diagonal in the operators ˆf(r, ω), with F(ω) given in (4.77). The Hamiltonian H ˆ = H ωˆf† (r, ω) · ˆf(r, ω) dω d3 r, and these operators have the usual properties † ˆ [ ˆf i (r, ω), ˆf j (r , ω )] = δi⊥j (r − r )δ(ω − ω )1, [ ˆf i (r, ω), ˆf j (r , ω )] = † [ ˆf i (r, ω), ˆf † (r , ω )] j ˆ = 0. (4.99) (4.100) From the foregoing considerations it follows that (when all appropriate conditions are fulfilled) the operator of the vector potential can be defined by the relation ˆ 0) = A(r, G(r, r , ω)ˆ˜jn (r , ω, 0) d3 r dω + H. c., where the Green function G(r, r , ω) satisfies the equation ΔG(r, r , ω) + K 2 (ω)G(r, r , ω) = δ(r − r ) and the boundary condition that it vanishes at infinity. Another required property is ˙ˆ 0) ˆ 0) = −A(r, E(r, and the canonical field commutation relations ˆ ˆ i (r, 0), Eˆ j (r , 0)] = − i δi⊥j (r − r )1. [A 0 Relation (4.104) must be verified by straightforward calculation. For the sake of clarity, Gruner and Welsch (1996a) illustrate this procedure in linearly polarized radiation propagating in the x direction. Relation (4.104) are replaced by the relation ˆ ˆ ˆ , 0)] = − i δ(x − x )1, [ A(x, 0), E(x A0 Green-Function Approach where A is the normalization area perpendicular to the x direction. It is shown that when losses in the dielectric may be disregarded, Im{(ω)} → 0, the concept of quantization through the mode expansion can be recognized. The operators ˆf (x, ω) are replaced by the operators aˆ ± (x, ω) (it would be possible to introduce the operaˆ k = ±Re{n(ω)} ωc )), which satisfy the commutation relations tors a(x, ω † ˆ (4.106) [aˆ ± (x, ω), aˆ ± (x , ω )] = exp −Im{n(ω)} |x − x | δ(ω − ω )1, c ω ω † [aˆ ± (x, ω), aˆ ∓ (x , ω )] = 2Im{n(ω)} exp ∓iβ(ω) (x + x ) c c sin Re{n(ω)} ω |x − x | ω c × exp −Im{n(ω)} |x − x | c Re{n(ω)} ωc ˆ × θ [±(x − x )]δ(ω − ω )1, (4.107) where θ (x) is the Heaviside function. These operators become independent of x in the limit Im{n(ω)} ωc |x − x | → 0. As the commutation relation (4.105) is in an obvious contradiction with a macroscopic approach, it is important that Gruner and Welsch (1996a) have derived the relation ˆ Δω (x, 0), Eˆ Δω (x , 0)] = − i ˆ δ(x − x )1, AR (ωc )0 ˆ Δω (x, 0), Eˆ Δω (x, 0). where ωc is the centre frequency for suitably defined operators, A The theory further reveals that the weak absorption gives rise to space-dependent mode operators that spatially progress according to quantum Langevin equations in the direct space. As could be expected, the operators aˆ ± (x, ω), as the forward- and backward-propagating fields, are governed by quantum Langevin equations, but it holds that the operator-valued Langevin noise is space dependent, 1 ω ω Fˆ ± (x, ω) = ± 2Im{n(ω)} exp ∓iRe{n(ω)} x ˆf (x, ω). i c c In other words, the operators aˆ ± (x, ω) progress in space. As an example of inhomogeneous structure, two bulk dielectrics with a common interface are considered. The problem of determining a classical Green function reappears. The verification of the commutation relation (4.105) is performed by straightforward calculation, which is more complicated. A general proof of this relation is not present, causality reasons are only pointed out. There exists a straightforward generalization of the quantization method based on a mode expansion (Khosravi and Loudon 1991, 1992, Agarwal 1975). The behaviour of short light pulses propagating in a dispersive absorbing linear dielectric with a special attention to squeezed pulses has been studied (Schmidt et al. 1996). 4 Microscopic Theories 4.2.2 Dispersive Lossy Nonlinear Inhomogeneous Dielectric Emphasizing the important differences from the linear model, the Lagrangian and Hamiltonian for the nonlinear dielectric are introduced by Schmidt et al. (1998). The Lagrangian density (4.2) has been denoted by Ll (r) and this relation with L replaced by Ll (r) has been utilized in the Lagrangian density in the relation L(r) = Ll (r) + Lnl (r), Lnl (r) = f [X(r)]. where moreover While in the linear case it is sufficient to quantize only the transverse fields, in the nonlinear case such a procedure would result in a loss of generality. The result of the substitution from relations (4.31), (4.32), (4.33) into relation (4.30) which we ˆ , we denote here as H ˆ ⊥ . The total Hamiltonian can be written as have denoted as H l ˆ nl , ˆ =H ˆl + H H ˆ nl is given by where the nonlinear interaction term H ˆ nl = − H f [X(r)] d3 r ˆ l that governs the linear dynamics can be written as and the Hamiltonian H ˆl = H ˆ+H ˆ l⊥ , H l where ˆ= H l ∞ ˆ k, t) + ω0 bˆ † (, k, t)b(, ωbˆ ω† (, k, t)bˆ ω (, k, t) dω d3 k 0 ∞ † ˆ k, t)][bˆ ω† (, k, t)bˆ ω (, −k, t)] dω d3 k V (ω)[bˆ (, −k, t) + b(, + 2 0 (4.115) ˆ k, t), bˆ ω (, −k, t) must be appropriately defined (see, and the components b(, ˆ nl couples the transverse Schmidt et al. (1998)) for bˆ (k), bˆ (k, ω)). In general, H and longitudinal fields, cf., relation (4.12). Schmidt et al. (1998) have derived evolution equations for the field operators and shown that additional noise sources appear in the nonlinear terms. Linear relationships between quantum (operator-valued) fields are introduced following Huttner Green-Function Approach and Barnett (1992b) as well as Gruner and Welsch (1995). The relations hold for all times and both in linear and in nonlinear cases. Schmidt et al. (1998) do not attempt at diagonalization of the nonquadratical ˆ , relation (4.112), the notation of which is still the same as of the Hamiltonian H Hamiltonian in (4.30). They avoid the difficulty with the generalization of the defˆ ˆ initions (4.46) and (4.35) to the nonlinear functions of the operators a(k), B(k, ω), ˆb(k), bˆ ω (k). We now approach the following representations of the matter fields. The longituˆ (r) can be expressed in terms of the field ˆf (r, ω) as dinal matter field X ∞ ˆ (r) = X [α0∗ (ω) − β0∗ (ω)]fˆ (r, ω) dω + H. c., (4.116) 2ρ ω˜ 0 0 ˆ ⊥ (r) can be expressed in terms of the field ˆf(r, ω) as the transverse matter field X ∞ ⊥ ˆ˜ ⊥ (r, ω) dω + H. c., ˆ X (r) = X (4.117) 0 . 0 ˆ ⊥ ˆ ˜ ω) + ˜ (r, ω) = X Im{(ω)} ˆf(r, ω) , −iω[(ω) − 1]A(r, α π 0 ˆ˜ ω) being connected with the field ˆf(r, ω) as the solution of equation (4.95) with A(r, and the explicit relation (4.97), and the vector-potential field ∞ ˆ˜ ω) dω + H. c. ˆ A(r) = A(r, (4.119) 0 If the validity of the expressions (4.119), (4.116), and (4.117) is related to the time evolution of the kind of (4.56), we may be afraid that this correctness will not endure the change to the nonlinear case. This change is reflected in the equations of motion for the basic fields and the vector-potential field in the Heisenberg picture, ∂ ˆ ˆ ] = ωˆf (r, ω) + [ˆf (r, ω), H ˆ nl ], f (r, ω) = [ˆf (r, ω), H ∂t ∂ˆ ˆ ] = ωˆf(r, ω) + [ˆf(r, ω), H ˆ nl ], ω) = [ˆf(r, ω), H i f(r, ∂t ∂ ˆ˜ ˆ˜ ω) + [A(r, ˆ˜ ω), H ˆ˜ ω), H] ˆ = ωA(r, ˆ nl ]. i A(r, ω) = [A(r, ∂t (4.120) (4.121) (4.122) To the number of the relations, which nevertheless hold in linear and nonlinear cases, the nonhomogeneous Helmholtz equation belongs ˆ˜ ω) = ω ˆ˜ ω) + K2 (ω)A(r, ΔA(r, c2 Im{(ω)} ˆf(r, ω). π 0 4 Microscopic Theories ˆ˜ ω) ≡ A(r, ˆ˜ ω, t). Respecting the notaHere it is noticed that ˆf(r, ω) ≡ ˆf(r, ω, t), A(r, tion K 2 (ω) = c−2 ω2 (ω) (cf., (4.65)), we can see that ˆˆ A(r, ˆ˜ ω, t) = K2 (ω1) ˆ˜ ω, t), K 2 (ω)A where 1ˆˆ is the identity superoperator and relation (4.122) implies that 1 ˆ× ∂ , ω1ˆˆ = 1ˆˆ + H ∂t nl where we, for the sake of clarity, write ∂t∂ to the right from the notation 1ˆˆ and the ˆ × on an operator O ˆ is defined by action of H nl ˆ ×O ˆ ≡ [H ˆ nl , O]. ˆ H nl Relation (4.125) can be written in the form 1 ∂ ω ˆ˜ ω, t) = ˆ˜ ω, t) + K2 1ˆˆ + H ˆ × A(r, ΔA(r, Im{(ω)} ˆf(r, ω, t), nl 2 ∂t c π 0 (4.127) ˆ where the elimination of the field X(r) using relations (4.116) and (4.118) indicates ˆ˜ ω) obey the nonlinear new noise sources. All of the fields ˆf (r, ω), ˆf(r, ω), A(r, dynamics. By integration of (4.127) over ω, an equation adequate to the linear and nonlinear cases is obtained. The wealth of operator-valued fields serves the expression of the dispersion and absorption in the nonlinear medium. The basic equations are applied to the one-dimensional case and propagation equations for the slowly varying field amplitudes of pulse-like radiation are derived. The scheme is related to the familiar model of classical susceptibilities and applied to the problem of propagation of quantized radiation in a dispersive and lossy Kerr medium. In the linear theory it is possible to separate the two transverse polarization directions from each other and from the longitudinal direction. As has already been stated, this is not possible for nonlinear media. In practice, in a single-mode optical fibre, only one transverse polarization direction will be excited. Then the total Hamiltonian (4.112) can be reduced to a one-dimensional single-polarization form. Let us consider the propagation in the x direction of plane waves polarized in the y direction. The one dimensionality of the problem permits one to decompose the ˆ˜ (x, ω), respectively, propagating in ˆ˜ ˆ˜ (x, ω) and A field A(x, ω) into components A + − the positive and negative x-directions, ˆ˜ (x, ω), ˆ˜ ˆ˜ (x, ω) + A A(x, ω) = A + − ˜ˆ ± (x, ω) are the solutions of spatial equations of progression where A ∂ ˜ˆ ˆ˜ (x, ω) ∓ iN A± (x, ω) = ±iK (ω) A ± ∂x Im{(ω)} ˆ f (x, ω), (ω) Green-Function Approach / ˆ˜ (x, ω) ≡ A ˆ˜ (x, ω, t). with the normalization factor N = 4π0 Ac2 . We remark that A ± ± Similarly as from relations (4.123) and (4.127), one can arrive from relation (4.129) at relation ∂ 1 ˆ × ˆ˜ ∂ ˆ˜ A± (x, ω, t) = ±iK 1ˆˆ + H A± (x, ω, t) ∂x ∂t nl Im{(ω)} ˆ f (x, ω, t). (4.130) ∓ iN (ω) ˆ (+) In analogy to (4.119), the operators A ± (x) can be introduced, ˆ (+) A ± (x) = ˆ˜ (x, ω) dω. A ± Integrating (4.130) over ω, an equation appropriate to the linear and nonlinear cases is obtained. Adequately to the derived equations which we consider to be mere approximations in the nonlinear case, Schmidt et al. (1998) study the narrow-bandwidth field components and narrow-bandwidth pulses. The theory has been applied to narrowbandwidth pulses propagating in a dielectric with a Kerr-like 4.2.3 Elaboration of Linear Theory Dung et al. (1998) have developed three-dimensional quantization presented in part in Gruner and Welsch (1996a) concerning dispersive and absorbing inhomogeneous dielectric medium. The approach directly starts with the Maxwell equations in the frequency domain for the macroscopic electromagnetic field. It is shown that the classical Maxwell equations together with the constitutive relations except relation (4.71) can be transferred to quantum theory. On considering the charge and current densities, one concentrates oneself on the noise-charge and noise-current densities. The operator-valued noise-charge density ρˆ˜ and the operator-valued noise-current density ˆ˜j are introduced, which are related to the operator-valued noise polarizaˆ˜ tion P, ˆ˜ ω), ˆ˜ ω) = −∇ · P(r, ρ(r, ˆj(r, ˆ˜ ω). ˜ ω) = −iωP(r, (4.132) (4.133) It follows from relations (4.132) and (4.133) that ρˆ˜ and ˆ˜j fulfil the equation of continuity: ˆ˜ ω). ∇ · ˆ˜j(r, ω) = iωρ(r, 4 Microscopic Theories The source term ˆ˜j is related to a bosonic vector field ˆf by the relation like (4.97). The commutation relation (4.100) remains valid and relation (4.99) must be modified to the form † ˆ [ ˆf i (r, ω), ˆf j (r , ω )] = δi j δ(r − r )δ(ω − ω )1. It is pointed out that the current density ˆj˜ is not transverse, because the whole electromagnetic field is considered. Hence, the vector field ˆf assumed here is not transverse as well and the spatial δ function in relation (4.135) is an ordinary δ function instead of a transverse δ function. Relation (4.96) is an integral representation of the vector-potential operator. Dung et al. (1998) start from the partial differential equation 2 ˆ˜ ω) = iωμ ˆ˜j(r, ω), ˆ˜ ω) − ω (r, ω)E(r, ∇ × ∇ × E(r, 0 c2 whose solution can be represented as (here and in part of what follows we use a different notation) ˆ ˜ E(r, ω) = iωμ0 G(r, s, ω) · ˆ˜j(s, ω) d3 s, (4.137) where G(r, s, ω) is the tensor-valued Green function of the classical problem. It satisfies the equation ω2 ∇r ∇r − 1 Δr + 2 (r, ω) · G(r, s, ω) = δ(r − s)1 c together with appropriate boundary conditions. Dung et al. (1998) have derived commutation relations ∞ ∂ ω εkm j G i j (r, r , ω) dω, (4.139) [ Eˆ i (r), Bˆ k (r )] = π 0 ∂ xm −∞ c2 where εkm j is the Levi-Civit`a tensor and G i j (r, r , ω) = ei · G(r, r , ω) · e j , [ Eˆ i (r), Eˆ k (r )] = 0ˆ = [ Bˆ i (r), Bˆ k (r )]. In the sense of the Helmholtz theorem there exists a unique decomposition of the ˆ˜ , i.e. the Coulomb ˆ˜ into a transverse part E ˆ˜ ⊥ and a longitudinal part, E electric field E ˆ ˆ ˆ ⊥ ˜ and E ˜ = iωA ˜ = −∇ ϕ. ˆ˜ In the Coulomb gauge, gauge can be introduced, where E Green-Function Approach ˆ˜ and ϕ, ˆ˜ respectively, are related to the electric the vector and scalar potentials A field as ˆ˜ (r, ω) = 1 δi⊥j (r − s) Eˆ˜ j (s, ω) d3 s, (4.142) A i iω ∂ ˆ˜ ω) = − δij (r − s) Eˆ˜ j (s, ω) d3 s, (4.143) ϕ(r, ∂ xi where δi⊥j and δi j are the components of the transverse and longitudinal tensorvalued δ functions δ ⊥ (r) = δ(r)1 + ∇∇(4π|r|)−1 , δ (r) = −∇∇(4π|r|)−1 . (4.144) (4.145) ˙ˆ ˆ are canonically conjugated field variables. On It is recalled that A(r) and 0 A(r) the contrary, the complexity of the commutation relation (4.139) suggests that the “canonical” commutators are not so simple as we would expect by the definition. The commutation relation between the vector potential and the scalar potential is as complicated, when one and only one of these quantities is differentiated with respect to the time or comprises such a derivative. The simple commutation relations are ˙ˆ (r), A ˙ˆ (r )], ˆ j (r )] = 0ˆ = [ A ˆ i (r), A [A i j ˙ ˆ ˆ [ϕ(r), ˆ ϕ(r ˆ )] = 0 = [ϕ(r), ˆ A (r )]. i (4.146) (4.147) Then, the theory is applied to the bulk dielectric such that the dielectric function can be assumed to be independent of space, (r, ω) = (ω) for all r. In this case, the solution of equation (4.138) that satisfies the boundary condition at infinity is (cf., Tomaˇs 1995) G(r, r , ω) = ∇r ∇r + K 2 (ω)1 K −2 (ω)g(|r − r |, ω), where g(r, ω) = exp[iK (ω)r ] . 4πr Relation (4.139) can be simplified as ∂ i ˆ δ(r − r )1, [ Eˆ i (r), Bˆ k (r )] = − εikm 0 ∂ xm and the “canonical” commutator corresponds to the definition ˙ˆ (r )] = i δ ⊥ (r − r )1. ˆ ˆ i (r), A [A j 0 i j 4 Microscopic Theories Moreover, ˆ ˆ j (r )] = 0. [ϕ(r), ˆ A The commutation relations presented are equal-time Heisenberg picture ones and therefore it is emphasized that they are conserved. To make contact with the earlier work, Dung et al. (1998) define the (4.153) f⊥ (r, ω) = δ ⊥ (r − s) · f(s, ω) d3 s, (4.154) f (r, ω) = δ (r − s) · f(s, ω) d3 s. The commutation relations (4.135) and (4.100) imply that ⊥() [ ˆf i ⊥() ⊥() ˆ (r, ω), ( ˆf j (r , ω ))† ] = δi j (r − r )δ(ω − ω )1, ⊥() ⊥() ˆ [ ˆf i (r, ω), ˆf j (r , ω )] = [ ˆf i⊥ (r, ω), ( ˆf j (r , ω ))† ] = 0. The representation of transverse vector potential simplifies to ˆ ˜ A(r, ω, 0) = μ0 g(|r − r |, ω)ˆ˜j⊥ (r , ω, 0) d3 r . It can be derived that the scalar potential operator ˆ ρ(s, ˜ ω, 0) 3 1 ˆ˜ ω, 0) = d s, ϕ(r, 4π 0 (ω) |r − s| (4.155) (4.156) ˆ˜ ω, 0) = (iω)−1 ∇ · ˆj˜ (r, ω, 0). where ρ(r, Another application is the quantization of the electromagnetic field in an inhomogeneous medium that consists of two bulk dielectrics with a common interface. The determination of the tensor-valued Green function for three-dimensional configuration of dielectric bodies is a very involved problem, in general. Dung et al. (1998) return to the simple configuration which was mentioned in Gruner and Welsch (1996a). It is referred to Tomaˇs (1995) for the classical treatment of multilayer structures. It is shown that for the configuration under study, the commutation relations (4.150), (4.151), and (4.152) hold. The necessity of a new calculation of the quantum electrodynamical commutation relations for a new three-dimensional configuration (cf., Dung et al. 1998) is not absolute. Scheel et al. (1998) have proven that the fundamental equal-time commutation relations of quantum electrodynamics are preserved for an arbitrarily space-dependent Kramers–Kronig dielectric function. Let us recall that the complex-valued dielectric function (r, ω) depends on frequency and space, (r, ω) → 1, if ω → ∞. Green-Function Approach It is assumed that the real part (responsible for dispersion) and the imaginary part (responsible for absorption) are related to each other according to the Kramers– Kronig relations, because of causality. This also implies that (r, ω) is a holomorfic function in the upper complex half-plane of frequency ∂ (r, ω) = 0, Im ω > 0. ∂ω∗ Scheel et al. (1998) study relation (4.139). By comparison of the right-hand sides of this relation and relation (4.150), they arrive at the identity to be proven ∞ ← ← ω G(r, r , ω) dω × ∇ r = −iπ 1δ(r − r ) × ∇ r . (4.161) − 2 c −∞ ← Here the left arrow means that the operators ∂ x∂ will first be written as ∂ ∂x in the m m expansion of the Hamilton operator, ∇, with this upper limit. Based on the partial differential equation (4.138) for the tensor-valued Green function, an integral equation will be presented in what follows. The partial differential equation and the boundary condition at infinity determine the Green function uniquely. By comparison of relation (4.137) with a constitutive relation, we could derive that iμ0 ωG i j (r, s, ω) are holomorphic functions of ω in the upper complex half-plane, i.e. ∂ ωG k j (r, s, ω) = 0, Im ω > 0, ∂ω∗ ωG k j (r, s, ω) → 0 if |ω| → ∞. Second derivation of the Cauchy–Riemann equation (4.162) consists in the application of ∂ω∂ ∗ to relation (4.138). The left-hand side of relation (4.162) is then the unique solution of the homogeneous problem. Kn¨oll and Leonhardt (1992) calculate the time dependent, let us say a directspace Green function. This could be useful in the combination with a time-dependent (direct-space) noise fdir (r, s). Scheel et al. (1998) have derived the relation ∞ ˜ eiωτ Di j (r, s, τ ) dτ, (4.164) iμ0 ωG i j (r, s, ω) = Di j (r, s, ω) = 0 where Di j (r, s, τ ) are components of the tensor-valued response function that causally relates the electric field E(r, t) to an external current jext (s, t − τ ), so that 1 Di j (r, s, τ ) = 2π = −μ0 ∞ −∞ ˜ i j (r, s, ω) dω e−iωτ D ∂ G i jdir (r, s, τ ), ∂τ 4 Microscopic Theories where G i jdir (r, s, τ ) ≡ 1 2π ∞ −∞ e−iωτ G i j (r, s, ω) dω is the direct-space Green function. From the theory of partial differential equations it is known (see, e.g. Garabedian (1964)) that there exists an equivalent formulation of the problem in terms D (r,ω) d3 r of an integral equation. On introducing 0 (ω) ≡ D d3 r , an appropriately spaceaveraged reference relative permittivity, the integral equation for the tensor-valued Green function can be written as (0) G(r, s, ω) = G (r, s, ω) + K(r, v, ω) · G(v, s, ω) d3 v, (4.167) where G(0) (r, s, ω) = [1 − ∇r ∇s K −2 (s, ω)]g(|r − s|, ω), K(r, v, ω) = [∇r g(|r − v|, ω)][∇v ln K 2 (v, ω) ] + [K 2 (v, ω) − K 02 (ω)]g(|r − v|, ω)]1. (4.168) (4.169) Here g(r, 0) ≡ g0 (r, 0) is given by (4.149), where K (ω) ≡ K 0 (ω), ω2 (r, ω), c2 ω2 K 02 (ω) = 2 0 (ω). c K 2 (r, ω) = (4.170) (4.171) It can be seen that the components of the kernel K ik (r, v, ω) are holomorphic functions of ω in the upper complex half-plane, with K ik (r, v, ω) → 0 if |ω| → ∞. To prove the fundamental commutation relation (4.150), we first decompose the tensor-valued Green function into two parts, G(r, s, ω) = G1 (r, s, ω) + G2 (r, s, ω), where G1 (r, s, ω) satisfies the integral equation G1 = G(0) + K · G1 d3 v, 1 with G(0) 1 (r, s, ω) = g(|r − s|, ω)1, G2 (r, s, ω) = ← Γ(r, s, ω)∇ s . (4.175) (4.176) Green-Function Approach In relation (4.176) Γ is the solution of the integral equation Γ = Γ(0) + K · Γ d3 v, with Γ(0) (r, s, ω) = −∇r [K −2 (s, ω)g(|r − s|, ω)]. Scheel et al. (1998) derive that iμ0 ωG1 and μ0 ω2 Γ are the Fourier transforms of the response functions to the noise-current density and the noise-charge density, respectively. ← Combining relations (4.173) and (4.176) and recalling that ∇ r × ∇ r = 0, we see that the left-hand side of relation (4.161) can be rewritten as ∞ −∞ ← ω G(r, r , ω) dω × ∇ r = − c2 ∞ −∞ ← ω G1 (r, r , ω) dω × ∇ r . c2 Thus, only the noise-current response function iμ0 ωG1 contributes to commutator (4.139). Multiplying the integral equation (4.174) by the function cω2 and integrating over ω, we obtain as the derivation of relation (4.105) from the holomorphic properties of the tensors K and ωG1 that ∞ −∞ ω G1 (r, r , ω) dω = iπ1δ(r − r ). c2 The outer product of this equation and the operator (−∇ r ) can be taken and together with relation (4.179) implies relation (4.161). In addition, it is shown that the scheme also applies to media with both absorption and amplification (in a bounded region of space). An extension of the quantization scheme to linear media with bounded regions of amplification is given and the problem of anisotropic media is briefly addressed, for which the permittivity is a symmetric complex tensor-valued function of ω, i j (r, ω) = ji (r, ω). Extensions of previous work on the propagation in absorbing dielectrics took linear amplification into account (Jeffers et al. 1996, Matloob et al. 1997, Artoni and Loudon 1998). Kn¨oll et al. (1999) investigated quantum-state transformation by dispersive and absorbing four-port devices. Under the usual assumptions on the dielectric permittivity, quantization of the Hamiltonian formalism of the electromagnetic field using a method close to the microscopic approach was performed by Tip (1998). A proper definition of band gaps in the periodic case and a new continuity equation for energy flow were obtained, and an S-matrix formalism for scattering from 4 Microscopic Theories ˇ absorbing objects was worked out. In this way the generation of Cerenkov and transition radiation have been investigated. A path-integral formulation of quantum electrodynamics in a dispersive and absorbing dielectric medium has been presented by Bechler (1999) and has been applied on the microscopic level to the quantum theory of electromagnetic fields in dielectric media. Results concerning quantum electrodynamics in dispersing and absorbing dielectric media have been reviewed by Kn¨oll et al. (2001). Tip et al. (2001) have proven the equivalence of two methods for quantization of the electromagnetic field in general dispersing and absorbing linear dielectrics: the Langevin-noise-current method and the auxiliary field method. Petersson and Smith (2003) have illustrated the role of evanescent waves in power calculations for counterpropagating beams. In classical optics the field of a beam can be represented in terms of its plane-wave spectrum (Smith 1997, Clemmow 1996). Counterpropagating and “counter-evanescent” plane waves are defined relative to a selected plane. When a line current is placed over a dielectric slab, it is appropriate to insert a plane between the line current and the slab. The time-average power passing through a plane is a sum of powers contributed by the propagating plane waves and by “counter-evanescent” pairs of plane waves with the same transverse components of their wave vectors. Suttorb and Wubs (2004) have provided a microscopic justification of the phenomenological quantization scheme for the electromagnetic field in inhomogeneous dielectric due to Gruner and Welsch (1995) (cf., references in subsection 4.2.3). Matloob (2004a) has paid attention to a damped harmonic oscillator. He has used a macroscopic Langevin equation for it. A canonical quantization scheme for the Langevin equation has been provided. A macroscopic electromagnetic field has been quantized in a homogeneous linear isotropic dielectric by the association of a damped quantum-mechanical harmonic oscillator with each mode of the radiation field. Matloob (2004b) has introduced a particular damped harmonic oscillator. He has used an appropriate form of the macroscopic Langevin equation. The canonical quantization scheme has been followed. A macroscopic electromagnetic field has been quantized in a linear isotropic permeable dielectric medium by associating a damped quantum-mechanical oscillator with each mode of the radiation field. In Matloob (2005) a homogeneous medium is assumed that is isotropic in its rest frame. One works with positive frequency parts of the fields. The Minkowski relations are presented which are generalized constitutive relations for uniformly moving media. The electric induction vector depends also on the magnetic strength vector and the magnetic-induction vector depends also on the electric strength vector. Using the Minkowski relations the Maxwell–Minkowski equations are derived. The field vectors E and B are expressed in terms of the vector potential in the Weyl gauge. A time-independent wave equation with the noise polarization and noise magnetization for the vector potential is derived. It is shown that the constitutive relations may be convenient, with the electric induction vector independent of the magnetic strength vector and the magnetic-induction vector independent of the electric strength vector, on considering the anisotropy of the material in the laboratory Green-Function Approach frame. The Green tensor is studied in reciprocal and spatial coordinate space. The fields are quantized by expressing the noise-current density in terms of two infinite sets of appropriately chosen bosonic field operators. The vacuum field fluctuation is expressed. 4.2.4 Optical Field at Dielectric Devices Matloob et al. (1995) provided expressions for the electromagnetic-field operators for three geometries: an infinite homogeneous dielectric, a semi-infinite dielectric, and a dielectric slab. A microscopic derivation has shown that a canonical quantum theory of light at the dielectric–vacuum interface is possible Barnett, Matloob, and Loudon (1995). A simple quantum theory of the beam splitter, which can be applied to a Fabry– P´erot resonator, was introduced by Barnett et al. (1996) and developed by Barnett et al. (1998). Artoni and Loudon (1997) applied the Huttner–Barnett scheme for quantization of the electromagnetic field in dispersive and absorbing dielectrics for the calculations of the effects of perpendicular propagation in a dielectric slab and to the properties of the incident light pulse. Their approach has provided a deeper understanding of antibunching (Artoni and Loudon 1999). Brun and Barnett (1998) considered an experimental set-up using a two-photon interferometer, where insertion of a dielectric into one or both arms of the interferometer is essential. Suggestive is a comparative study of fermion and boson beam splitters (Loudon 1998). Fermions can be studied in analogy with bosons (Cahill and Glauber 1999). Di Stefano et al. (1999) extended the field quantization to these material systems whose interaction with light is described, near a medium boundary, by a nonlocal susceptibility. Di Stefano et al. (2000) have developed a quantization scheme for the electromagnetic field in dispersive and lossy dielectrics with planar interface, including propagation in all the spatial directions, and considering both the transverse electric and the transverse magnetic polarized fields. Di Stefano et al. (2001a) have presented a one-dimensional scheme for the electromagnetic field in arbitrary planar dispersing and absorbing dielectrics, taking into account their finite extent. They have derived that the complete form of the electric-field operator includes a part that corresponds to the free fields incident from the vacuum towards the medium and a particular solution which can be expressed by using the classical Green-function integral representation of the electromagnetic field. By expressing the classical Green function in terms of the classical light modes, they have obtained a generalization of the method of modal expansion (e.g. Kn¨oll et al. (1987)) to absorbing media. Di Stefano et al. (2001b) have based an electromagnetic-field quantization scheme on a microscopic linear two-band model. They have derived for the first time a noise-current operator for general anisotropic and/or spatially nonlocal media, which can be described only in terms of an appropriate frequencydependent susceptibility. 4 Microscopic Theories The Green-tensor formalism is well suited to studying the behaviour of the quantized electromagnetic field in the presence of dispersing and absorbing bodies. Especially, it has been applied successfully to the study of input–output relations (Gruner and Welsch 1996b). As a continuation of this work, Khanbekyan et al. (2003) studied the quantized field in the presence of a dispersing and absorbing multilayered planar structure (shortly, multilayer plate). Three-dimensional input– output relations have been derived for frequency components of the electric-field operator in the transverse reciprocal space. Input–output relations for frequency components of this operator in the direct space have been given as well. The conditions have been stated, under which the input–output relations can be expressed in terms of bosonic operators. These relations have been discussed for the case of the plate being surrounded by vacuum. The theory applies to effectively free fields and those created by active atomic sources inside and/or outside the plate. Khanbekyan et al. (2003) consider n − 1 layers with thicknesses d j , j = 1, . . . , n − 1, the region on the left of the plate ( j = 0), and the region on the right of the plate ( j = n). The permittivity is (z, ρ, ω) = λ j (z) j (ω), independent of ρ, where ρ = (x, y) and λ j (z) = 1, if z ∈ jth region, 0, otherwise. ˆ ω), and ˆf(r, ω) and all the regions For simplicity, we shift the fields G(r, r , ω), E(r, along the z-axis such that the jth region has the left boundary plane going through the origin ( j > 0) or the right boundary plane going through the origin ( j = 0). The result of the shift of respective fields will be denoted by G( j j ) (r, r , ω), Eˆ ( j) (r, ω), and ˆf( j) (r, ω). If Θ(z) is the unit-step function, then Θ( j − j ) for j = j , [Θ(z − z )]( j j ) = (4.184) Θ(z − z ) for j = j , Θ( j − j) for j = j , (4.185) [Θ(z − z)]( j j ) = Θ(z − z) for j = j . Since the Green tensor depends only on the difference ρ−ρ , it can be represented as a two-dimensional Fourier integral 1 eik·(ρ −ρ ) G( j j ) (z, z , k, ω) d2 k, G( j j ) (r, r , ω) = (4.186) (2π)2 where k = (k x , k y ) is the wave vector parallel to the interfaces and G( j j ) (z, z , k, ω) = e−ik·σ G( j j ) (z, z , σ ≡ ρ − ρ , ω) d2 σ . Green-Function Approach The electric field operator Eˆ ( j) (r, ω) may be written as a twofold Fourier transform: 1 (2π)2 ˆ ( j) (r, ω) = E ˆ ( j) (z, k, ω) d2 k, eik·ρ E ˆ ( j) (z, ρ, ω) d2 ρ, e−ik·ρ E ˆ ( j) (z, k, ω) = E and the bosonic field operator may be written in the integral form as 1 (2π)2 ˆf( j) (r, ω) = eik·ρ ˆf( j) (z, k, ω) d2 k, ˆf( j) (z, k, ω) = e−ik·ρ ˆf( j) (z, ρ, ω) d2 ρ. The Green tensor G( j j ) (z, z , k, ω) by the paper (Tomaˇs 1995) may be written as G( j j ) (z, z , k, ω) = −ez δ j j ez δ(z − z ) + g( j j ) (z, z , k, ω), k 2j g( j j ) (z, z , k, ω) = i j> j< σq Eq (z, k, ω)Ξqj j Eq (z , −k, ω)[Θ(z − z )]( j j ) 2 q=p,s j< j> + Eq (z, k, ω)Ξqj j Eq (z , −k, ω)[Θ(z − z)]( j j ) , (4.193) (σp = 1, σs = −1). In equation (4.193) j> Eq (z, k, ω) = eq+ (k)eiβ j (z−d j ) + r j/n eq− (k)e−iβ j (z−d j ) , ( j) ( j) Eq (z, k, ω) = eq− (k)e−iβ j z + r j/0 eq+ (k)eiβ j z ( j) ( j) (4.194) (4.195) and q Ξqj j = iβ d iβ d 1 t0/j e j j t0/j e j j , q Dq j βn t0/n Dq j with q Dq j = 1 − r j/0r j/n e2iβ j d j , 4 Microscopic Theories (d0 = dn = 0) and βj = k 2j − k 2 = β j + iβ j (β j , β j ≥ 0) (k = |k|), where kj = q and t j/j = βj q t β j j /j ε j (ω) ω = k j + ik j (k j , k j ≥ 0), c and r j/j , respectively, the transmission and reflection coefficients between the regions j and j. The unit vectors eq± (k) in equations (4.194) and (4.195) are the polarization unit vectors for transverse electric (q = s) and transverse magnetic (q = p) waves ( j) k ( j) es± (k) = × ez , k k 1 ( j) ∓β j + kez . ep± (k) = kj k (4.200) (4.201) If both the incoming fields incident on the two boundary planes of the plate are known, as well as the fields generated inside the plate, one can calculate the fields outgoing from the two boundary planes by means of input–output relations. These relations are valid also for evanescent-field components. To obtain generally valid input–output relations, we restrict our attention to the electric-field operator. This operator in front of the structure (superscript 0) and behind the structure (superscript n) is decomposed in terms of input and output amplitude operators (0) (0) ˆ (0) (z, k, ω) = eq+ (k) Eˆ qin (z, k, ω) E q=p,s (0) (0) + eq− (k) Eˆ qout (z, k, ω) , (n) (n) ˆ (n) (z, k, ω) = eq− (k) Eˆ qin (z, k, ω) E (n) (n) + eq+ (k) Eˆ qout (z, k, ω) . Here, the operators μ0 ω iβ0 z (0) Eˆ qin (z, k, ω) = − e 2β0 and μ0 ω −iβn z (n) (z, k, ω) = − e Eˆ qin 2βn (0) e−iβ0 z jˆ(0) (z , k, ω) · eq+ (k) dz ∞ z (n) eiβn z ˆj(n) (z , k, ω) · eq− (k) dz Green-Function Approach are input amplitude operators, where the Green function is of a very simple form. The operators (0) (0) Eˆ qout (z, k, ω) = e−iβ0 z Eˆ qout (k, ω) μ0 ω −iβ0 z e 2β0 (0) eiβ0 z jˆ(0) (z , k, ω) · eq− (k) dz and (n) (n) Eˆ qout (z, k, ω) = eiβn z Eˆ qout (k, ω) μ0 ω iβn z z −iβn z ˆ(n) (n) j (z , k, ω) · eq+ e e (k) dz + 2βn 0 are output amplitude operators, where the Green function has the complicated form and is “hidden” in the input–output relations 0 (0) (k, ω) Eˆ qout (n) (k, ω) Eˆ qout r0/n (k, ω) tn/0 (k, ω) q q t0/n (k, ω) rn/0 (k, ω) 1 0 ˆ (0) E qin (k, ω) Eˆ (n) (k, ω) qin 0 1 10 n−1 ( j) ( j) ( j) φq0+ (k, ω) φq0− (k, ω) Eˆ q+ (k, ω) + . ( j) ( j) ( j) Eˆ q− (k, ω) φqn+ (k, ω) φqn− (k, ω) j=1 They relate the output amplitude operators at the boundary planes of the plate to the input amplitude operators at these planes, , , (0) (0) (k, ω) = Eˆ qin,out (z, k, ω), Eˆ qin,out , , (n) (n) (k, ω) = Eˆ qin,out (z, k, ω), Eˆ qin,out and to the amplitude operators associated with the layers μ0 ω ( j) Eˆ q± (k, ω) = − 2β j ( j) e∓iβ j z jˆ( j) (z , k, ω) · eq± (k) dz , ( j) j = 1, 2, . . . , n − 1, which may have been better denoted say by Fˆ q± (k, ω), since , , ( j) ( j) Eˆ q+ (k, ω) = Eˆ q+ (z, k, ω), − . z=d j , , ( j) ( j) Eˆ q− (k, ω) = Eˆ q− (z, k, ω), + . z=0 (4.212) (4.213) 4 Microscopic Theories In equation (4.208), the coefficients at the operators (4.211) read q ( j) φq0+ t j/0 e2iβ j d j Dq j q r j/n , ( j) φq0− t j/n eiβ j d j Dq j Dq j ( j) φqn+ t j/0 ( j) φqn− t j/0 eiβ j d j Dq j r j/n . It is worth noting that any two planes z = z (0) ≤ 0− and z = z (n) ≥ 0+ for j = 0 and j = n, respectively, also can be used in principle for a formulation of the input–output relations. The theory applies to effectively free fields and those created by active atomic sources inside and/or outside the plate. 4.2.5 Modification of Spontaneous Emission by Dielectric Media Scheel et al. (1999a) have found quantum local-field corrections appropriate to the spontaneous emission by an excited atom. Dung et al. (2000) have developed a formalism for studying spontaneous decay of an excited two-level atom in the presence of arbitrary dispersing and absorbing dielectric bodies. They have shown how the minimal-coupling Hamiltonian simplifies to a Hamiltonian in the dipole approximation. The formalism is based on a source-quantity representation of the electromagnetic field in terms of the tensor-valued Green function of the classical problem and appropriately chosen bosonic quantum fields. All relevant information about the bodies such as form and dispersion and absorption properties is contained in the tensor-valued Green function. This function has been available for various configurations such as planarly, spherically, and cylindrically multilayered media (Chew 1995). The theory has been applied to the spontaneous decay of a two-level atom placed at the centre of a three-layer spherical microcavity, the wall being modelled by a Lorentz dielectric. The tensor-valued Green function of the configuration has been known (Li et al. 1994). The calculations have been performed on the assumption of a dielectric with a single resonance. For simplicity, it has been assumed that the atom is positioned at the centre of the cavity. Weak and strong couplings are studied and in the study of the strong couplings both the normal-dispersion range and the anomalous-dispersion range associated with the band gap are considered. Whereas in the range of normal dispersion, the cavity input–output coupling dominates the strength of the atom–field interaction, the significant effect within the band gap is the photon absorption by the wall material. Dung et al. (2001) have studied nonclassical decay of an excited atom near a dispersing and absorbing microsphere of given complex permittivity that satisfies the Kramers–Kr¨onig relations laying emphasis on a Drude–Lorentz permittivity. Among others, they have found a condition on which the decay becomes purely nonradiative. Dung et al. (2002a,b) have given a rigorous quantum-mechanical derivation of the rate of intermolecular energy transfer in the presence of dispersing and Green-Function Approach absorbing media with spatially varying permittivity. They have applied the theory to bulk material, multislab planar structures, where they also have made comparison with experiments, and to microspheres. They have shown that the minimal-coupling scheme and the multipolar-coupling scheme yield exactly the same form of the rate formula. Tip (2004) has used his auxiliary field method to obtain various equivalent Hamiltonians for charged particles interacting with absorptive dielectrics. In two steps, the representations cease to manifest the generalized Coulomb gauge used, but it remains in one term, concentrated in a wave operator. It has also been shown for excited atoms in a photon crystal with transition frequency in a band gap that their states do not decay radiatively. For a transparent dielectric, theoretical studies can take a traditional approach. Inoue and Hori (2001) have developed a formalism of quantization of electromagnetic fields including evanescent waves based on the detector-mode functional defined in terms of those for the widely used triplet modes. They have evaluated atomic and molecular radiation near a dielectric boundary surface. Matloob and Pooseh (2000) have discussed a fully quantum-mechanical theory of the scattering of coherent light by a dissipative dispersive slab. Matloob and Falinejad (2001) have calculated the Casimir force between two dielectric slabs by using the notion of the radiation pressure associated with the quantum electromagnetic vacuum. Specifically, they have used the fact that only the field correlation functions are needed for the evaluation of vacuum radiation pressure on an interface. Matloob (2001) has postulated an electromagnetic-field Lagrangian density at each point of space–time to be of an unfamiliar form comprising the noise-current density. He has expressed the displacement D(r, t) merely in terms of the electric-field E(r, t ), t ≤ t, without adding a noise polarization term. In the framework of a semiclasical approach, Paspalakis and Kis (2002) have studied the propagation dynamics of N laser pulses interacting with an (N +1)-level quantum system (one upper state and N lower states). Assuming the system to be in a superposition state of all of the lower levels initially they have determined the conditions of complete opacity or transparency of the medium. The coupling of pulses is most interesting in the limit of parametric generation. A simplified approach to the quantization is sufficient for the theory of the radiation pressure on dielectric surfaces (Loudon 2002). Loudon (2003) continues (Loudon 2002) with two changes or extensions. Instead of a planar pulse he considers Laguerre–Gaussian light beams. He considers the transfer of angular momentum to a dielectric. He may issue from the book (Allen et al. 2003) and arrive at the paper (Padgett et al. 2003). New forces are produced by a pulse of Laguerre–Gaussian light in comparison with a plane-wave pulse, which produces only a longitudinal force. These are radial and azimuthal forces. A simplification is achieved by assuming that the modal function has zero radial index. The pulse is assumed to contain a single photon. In case an interface is considered the propagation direction of the pulse is assumed to be perpendicular to the surface. If the pulse is propagated into a dielectric, then it is assumed that the dielectric is – weakly – attenuating to ensure that the model need 4 Microscopic Theories not complicate by including any exit or reflection, or finiteness of the dielectric in the direction of propagation. Loudon (2003) gives Laguerre–Gaussian light in terms of the Lorentz-gauge vector potential. He does not speak of the associated Laguerre polynomials, but it is obvious that the degree of a polynomial is zero, when he restricts himself to the radial index p = 0. The theory is quantized. The single-photon pulse is represented by the state vector |1 . The spectrum of the photon wave packet is a narrow-band Gaussian ˆ t):, with function. The author introduces the normal-order Poynting operator :S(r, ˆ z-component denoted by : Sz (r, t):. If the dispersion is ignored, the author can write the expectation value ω0 c 1|: Sˆ z (r, t):|1 = L 2 2c2 ηz 2 exp − 2 t − |u|2 , π L c where L is a conventional length of the pulse, ω0 is a central frequency of the wave packet, η ≡ η(ω0 ) is a refractive index, and u ≡ u k0 ,l (r) is the modal function, k0 = η(ωc0 )ω0 is an angular wave number, and l is the orbital angular-momentum quantum number. The time integral of equation (4.216) is 1|: Sˆ z (r, t):|1 dt = ω0 |u|2 . Further, the author constructs the normally ordered angular-momentum density operator. He introduces the Lorentz force-density operator :ˆf(r, t): and lets : ˆf z (r, t): denote the longitudinal component of this operator. He determines that 2ω0 c(η2 − 1) 2 ηz ˆ 1|: f z (r, t):|1 = − t − L3 π c 2 2 2c ηz × exp − 2 t − |u|2 . L c Similarly, he+lets : ˆf ρ (r, t): denote the radial component of the operator :ˆf(r, t):. As usual, ρ = x 2 + y 2 . He determines that ηz 2 2c2 exp − 2 t − |u|2 . L c (4.219) The/radial force compresses the dielectric towards the cylinder of radius ρ0 = w0 |l|2 , where w0 is the beam waist. 1|: ˆf ρ (r, t):|1 = 2ω0 (η2 − 1) √ 2π ηL |l| 2ρ − 2 ρ w0 Green-Function Approach Similarly, he lets : ˆf φ (r, t): denote the azimuthal component of the operator ˆ :f(r, t):. As usual, φ = arg(x + iy). He determines that 2 2 (η − 1) 2 4c ηz 1|: ˆf φ (r, t):|1 = − t − ηL 3 π c 2 2 σ |l| 2σρ ηz l 2c − + 2 |u|2 , (4.220) × exp − 2 t − L c ρ ρ w0 where σ is the spin angular-momentum quantum number of the beam. The author extends the theory to the case, where space is divided into two regions with a dielectric of real refractive index η0 (ω) at z < 0 and a dielectric of a complex refractive index n(ω) = η(ω) + iκ(ω) at z > 0. The modification of the result (4.217) at z > 0 is ∞ 2ω0 κz 4η0 η 1|: Sˆ z (r, t):|1 dt = ω0 exp − |u|2 , c (η0 + η)2 + κ 2 −∞ where η0 , η, and κ are evaluated at frequency ω0 . The author gives particular attention to the transfer of longitudinal and angular momentum to the dielectric from light incident from free space. He lets the total force on dielectric in {z > 0} and at time t be represented by the force operator ˆf(r, t) dr ˆ = (4.223) F(t) z>0 and lets Fˆ z (t) denote the longitudinal component of this operator. The time-integrated force, or the total linear momentum transfer to the dielectric, is ⎧ ⎫ ⎪ ⎪ ⎪ ⎪ ∞ ⎨ η2 + κ 2 − 1 ⎬ 2ω 2 0 ˆ 1|: Fz (t):|1 dt = + c ⎪ (η + 1)2 + κ 2 (η + 1)2 + κ 2 ⎪ −∞ ⎪ ⎩= ⎭ >; < = >; <⎪ surface 2ω0 η + 1 + κ . c (η + 1)2 + κ 2 = >; < 2 In pursuit of the torque the author first introduces the operator that represents the density of the z-component of the torque on the dielectric :gˆ z (r, t):. Next he lets the total torque on the dielectric in {z > 0} and at time t be represented by the torque operator ˆ z (t) = gˆ z (r, t) dr. (4.225) G z>0 4 Microscopic Theories The time-integrated torque, or total angular-momentum transfer on dielectric, is ⎧ ⎫ ⎪ ⎪ ⎪ ⎪ ∞ 4(l + σ )η ⎨ η2 + κ 2 − 1 1 ⎬ ˆ 1|:G z (t):|1 dt = + 2 (η + 1)2 + κ 2 ⎪ η2 + κ 2 η + κ2 ⎪ −∞ ⎪ ⎩= ⎭ >; < = >; <⎪ 4(l + σ )η . = 2 η + 1 + κ2 = >; < It is shown that it is meaningful to divide the total transfer of linear momentum into surface reflected, surface transmitted, and bulk transmitted contributions provided a photon has passed through. From this the shift of a slab of mass M may be calculated, due to a normally incident single-photon wave packet. Similarly (but for κ = 0 only) the angular rotation of the dielectric slab, whose moment of inertia around the z-axis is denoted I may be calculated, due to such a wave packet. A scheme for transferring quantum states from the propagating light fields to a macroscopic, collective vibrational degree of freedom of a massive mirror has been proposed (Zhang et al. 2003). The proposal may realize an Einstein–Podolsky– Rosen state in position and momentum for a pair of massive mirrors at distinct locations by exploiting a nondegenerate optical parametric amplifier. Loudon et al. (2005) have paid attention to the photon drag effect. The momentum transfer from light to a dielectric material has been calculated by evaluation of the relevant Lorentz force. The photon drag effect is named after the generation of currents or electric fields in semiconductors. Leonhardt and Piwnicki (2001) have analysed the propagation of slow light in moving media in the case where the light is monochromatic in the laboratory frame. The extremely low group velocity is caused by the electromagnetically induced transparency of an atomic transition. Lombardi (2002) has re-examined the physical significance of different velocities which can be introduced for a wave train. Oughstun and Cartwright (2005) have compared the group velocity with the instantaneous centroid velocity of the pulse Poynting vector for an ultrashort Gaussian pulse. Very long pulses that are well tuned to a region of anomalous dispersion do not have superluminal peak velocity of a real physical significance. Yanik and Fan (2005) have formulated basic principles that underlie stopping and storing light coherently in many different physical systems. Following a brief discussion of one of known atomic stopping-light schemes, an all-optical scheme has been analysed in detail. Tip (2007) has studied the properties of atoms close to an absorptive dielectric using his quantized form of the phenomenological Maxwell equations. The author has treated the coupling of atoms with longitudinal modes in detail. The atomic interaction potential changes from the Coulomb one to a static potential, i.e. one that obeys a Poisson equation with zero-frequency limit of the permittivity. The longitudinal interactions of atoms with absorptive dielectrics are responsible for nonradiative decay of the atoms. It has been found that the Hamiltonian used by Green-Function Approach Dung et al. (2002b) is unitarily equivalent to a special case of the Hamiltonian used by Tip (2007). 4.2.6 Left-Handed Materials The interest in “left-handed” materials is reflected both in the theory and in the experiment. Veselago (1967, 1968) was the first to address the question of propagation of the electromagnetic waves in the medium with both the permittivity 0 and the permeability μ0 μ negative. Veselago has shown that, although such materials are not available in the nature, their existence is not excluded by the Maxwell equations Markoˇs 2005). First we summarize the basic electromagnetic properties of left-handed materials. The propagation of the electromagnetic wave, E = E0 ei(k·r−ωt) , is described by the wave equation ω2 (4.227) k2 − 2 μ E(r, t) = 0, c where k is the wave vector and ω is the frequency, k2 = ω2 μ. c2 Propagation is possible if k2 > 0. This is always true in dielectrics, where μ = 1 and is real, while no propagation is possible in metals, the permittivity of which is negative. Materials with both and μ negative allow the propagation of electromagnetic waves. The Maxwell equations simplify to the form k × E = ωμ0 μH, k × H = −ω0 E. Using them in the identities k · (E × H) = H · (k × E) = E · (H × k), k · (E × H) = ωμ0 μH2 = ω0 E2 , k · S < 0, we can derive that 4 Microscopic Theories is the Poynting vector. It means that the wave vector k and the Poynting vector S have opposite directions. The three vectors E, H, and k are a left-handed set of vectors, contrary to conventional materials, where they are a right-handed set. This inspired Veselago to name the materials with both and μ negative left-handed materials. Veselago also has pointed out that the left-handed material must be dispersive, or μ must depend on the frequency of a monochromatic field, since the timeaveraged energy density of the electromagnetic field, ∂ [ω0 (ω0 )] 2 ∂ [ω0 μ(ω0 )] 2 1 E + μ0 H , (4.234) 0 U = 2 ∂ω0 ∂ω0 would be negative. Here the electromagnetic field is assumed to be quasimonochromatic, with a centre or carrier frequency ω0 > 0. The time averaging is done over . The original Gaussian units of measurement can be the period of the carrier, 2π ω0 1 1 respected by the replacements 0 → 4π , μ0 → 4π . From the Kramers–Kronig relations it follows that and μ must be complex. Strictly speaking, we should speak of the medium with both Re and Re μ negative. Many results have been derived on the assumption of a transparent medium, i.e. that Im and Im μ may be neglected. The energy density (4.234) is positive, since (Landau et al. 1984) ∂(ωμ) ∂(ω) > 0, > 0. ∂ω ∂ω The permittivity and permeability determine the index of refraction, n ≡ n(ω), and the relative impedance, Z ≡ Z (ω), by the relations √ n = ± μ, Z = μ . It is assumed that the complex square root has a positive real part, or nonnegative imaginary part if the real part vanishes. The plus sign in (4.236) is used when the square root has a positive imaginary part and the minus sign in (4.236) is used when the square root has a negative imaginary part. Even though the assumption that Im = Im μ = 0 is frequently used and it invalidates the rule of the sign, we may assume Im > 0 or Im μ > 0 small and apply the rule. Due to continuity, the minus sign is used in (4.236) for < 0 and μ < 0. We introduce a phase velocity vector vp ≡ ω s, s·k where s is the direction of the Poynting vector S. As the wave vector k= ω ns, c Green-Function Approach we have let vp denote nc s. On writing the wave vector k in the forms k=± ω√ μ s, c we obtain that ∂(ωμ) 1 ∂(ω) ∂k = Z + Z −1 s, ∂ω 2c ∂ω ∂ω ∂k > 0, ∂ω where Z still means the relative impedance, not the coordinate. Now we introduce a group-velocity vector vg = 1 s, ∂k s · ∂ω s · vg > 0. It has the same direction as the Poynting vector not only in the right-handed media, but also for the left-handed materials. For n < 0 or, more generally, Re n < 0, the negative refraction occurs. Let us illustrate that the derivation of the Snell law is valid also for a negative refractive index. We consider a planar interface in the plane z = 0 between the half-spaces z < 0 and z > 0. The half-space z < 0 is free and the half-space z > 0 is filled with a left-handed material. We introduce n 1 = 1 and n 2 = n. We assume an + i(k+ − 1 ·r−ωt) , a reflected wave with E incident monochromatic wave with E+ 1 = E10 e 1 = + − i(k− + + + − ·r−ωt) i(k ·r−ωt) , and a transmitted one with E2 = E20 e 2 . Here k1 , k1 , and k+ E10 e 1 2 are the respective wave vectors. The respective Poynting vectors may be denoted − + + − + by S+ 1 , S1 , and S2 . Let s1 , s1 , and s2 mean their directions. Quite reasonably, we + − + suppose that s1z > 0, s1z < 0, and s2z > 0 both for the right-handed media and the left-handed materials. + − + = 0. Then k1y = 0 and k2y = 0 by the isotropy. Still it holds that We consider k1y − + + + + + + + + + ω = ωc s1x , k2x = ωc n 2 s2x = ωc ns2x . k1x = k1x , k2x = k1x . Let us note that k1x = c n 1 s1x + + From this, ns2x = s1x . Therefore, the negative refraction occurs when the refractive index n is negative. A planar slab of a material with = −1 and μ = −1 can be compared with a lens. Its imaging is described by the equation a + b = l, where a > 0 is the distance from the object to the front plane, b > 0 is the distance from the rear plane to the image, and l is the thickness of the slab. The image is real and direct, though not amplified. It is worth noting that the left-handed material can enhance incident evanescent waves (Pendry 2000). As we will show below, the slab of a material with = μ = −1 does not reflect light. These remarkable properties have led to the term a perfect lens for the planar slab. Artificial structures were first proposed, which have negative permittivity and permeability in the microwave region of frequencies. A periodic array of very thin metallic wires has the negative permittivity. We let a mean the spatial period of 4 Microscopic Theories the lattice and r mean the radius of wires. Pendry et al. (1996) have expressed the response of this medium to the external electric field parallel to the wires (of a two-dimensional lattice) using the effective permittivity (a three-dimensional lattice has been considered first) eff ≡ eff (ω) = 1 − where ωp2 ω(ω + iγe ) 2πc ωp = + a a ln( r ) is the plasma angular frequency and γe is the absorption parameter. Similarly, it was predicted in Pendry et al. (1999) that a periodic array of splitring resonators behaves as a medium with negative magnetic permeability. The response of the regular lattice to the external magnetic field perpendicular to the plane of the ring is given by the effective permeability μeff ≡ μeff (ω) = 1 − Fω2 . − ω02 + iΓω Here ω0 is the resonant frequency and Γ is the absorption parameter. The parameter F is the filling factor for the split ring. Combination of both structures gives rise to the material with both negative permittivity and permeability—the left-handed material. The reports on first experiments, e.g. (Shelby et al. 2001) were followed by some criticism. For instance, it was argued that the negative refraction is ruled out by the causality principle (Valanju et al. 2002). Absorption was suggested as an alternative explanation of the experiment with negative refraction (Sanz et al. 2003). Numerical simulations of the transmission of the electromagnetic waves through the left-handed medium offered an independent possibility to verify the theoretical predictions (Ziolkowski and Heyman 2001, Markoˇs and Soukoulis 2002a,b). Formulae for the transmission amplitude, t, and the reflection amplitude, r, of the electromagnetic wave through a homogeneous slab read 1 = cos(nkl) − t r i = − (Z t 2 i (Z + Z −1 ) sin(nkl), 2 − Z −1 ) sin(nkl), where k = ωc conventionally. Especially, t = exp(inkl), r = 0 for = μ = −1. On the assumption that the wavelength of the electromagnetic wave is much larger than the spatial period of the left-handed structure and on using relations (4.245) and (4.246), the index of refraction and the impedance have been derived from the numerical data (Smith et al. 2002). The origin of absorption has been traced up (Markoˇs et al. 2002). Green-Function Approach For applications it would be very interesting to pass from the microwave to the optical frequencies, or from the GHz to THz region. Pendry (2003) has provided an approval of the contemporary reports on experimental proofs of some properties of the materials with negative refractive index. In 2007, a progress has been reported on Dolling et al. (2007). Photonic crystals have been analysed in theory and numerical calculations (Notomi 2000, Foteinopoulou et al. 2003, Foteinopoulou and Soukoulis 2003) and used in experiments (Cubukcu et al. 2003). Optical frequencies have been assumed, but it has not been asked whether the effective permittivity and the effective permeability may be defined. Foteinopoulou et al. (2003) present in fact all the interesting results of Veselago (1968) in the lossless case. We cite only the time-averaged energy flux S = U vg and the time-averaged momentum density p = Uω k. They concentrated themselves on clarification of some of the controversial issues. Their numerical calculations have described a two-dimensional photonic crystal. For the photonic crystal system a frequency range exists for which the effective refractive index is negative. So, a wave hitting the photonic crystal interface for that frequency will undergo negative refraction similar to a wave hitting the interface of a homogeneous medium with negative index n. It is worth noting that the effective medium is two-dimensionally isotropic. Otherwise, an effect similar to the negative refraction may occur in a right-handed medium. In their simulations, a finite extent line source was placed outside a photonic crystal at an angle of 30◦ . The source starts emitting at t = 0 an almost monochromatic TE wave. The source is adjusted to generate a Gaussian beam. Foteinopoulou et al. (2003) have used finite-difference time-domain simulations to study the time evolution of an electromagnetic wave as it reaches the interface. The wave is trapped temporarily at the interface, reorganizes, and, after a long time, the wave exibits the negative refraction. Shen (2004) has defined a frequency-independent effective rest mass of a photon. Letting m eff denote this mass, we may find it from the relation m 2eff c4 = − res ωn 2 (ω). ω=0 2 If the left-handed medium can be modelled as two-time derivative Lorentz material (Ziolkowski 2001), then m 2eff ≥ 0 for the electromagnetic parameters characteristic of (Ruppin 2000). Ruppin (2002) has obtained a modification of formula (4.234) for the timeaveraged energy density, which respects the absorption in the medium, but is restricted to a relative permittivity (ω) and a relative permeability μ(ω) of the following forms: ≡ (ω) = 1 + ωp2 ωr2 − ω2 − iΓe ω , Γe > 0, 4 Microscopic Theories μ ≡ μ(ω) = 1 − Fω02 , Γh > 0. ω2 − ω02 + iΓh ω At the same time, it is a generalization of the result obtained by Loudon (1970) in the case of no magnetic dispersion. The time-averaged energy density is 2ω 2ωμ 1 2 2 0 + |E| + μ0 μ + |H| , U = 2 Γe Γh where = Re , = Im , μ = Re μ, μ = Im μ, conventionally. Veselago (2002) has presented a miniature review of the progress in negativeindex materials. The subject of that paper which comprises such a review is the formulation of Fermat’s principle. The formulation stating that the optical length is stationary is correct. For simplicity, we may consider differentials only for a Euclidean space. The differential of the optical length is the differential of the Euclidean length multiplied by the refractive index. If a variation of a path is restricted to the positive-index medium, the optical length of the path taken by a ray of light in travelling between two points is a local minimum. If a variation is restricted to the negative-index medium, the optical length of the sought path is a local maximum. Pendry (2003) has provided an approval of the contemporary reports on experimental proofs of some properties of the materials with negative refractive index. Naqvi and Abbas (2003) have noted that principle of duality has a somewhat different form in negative-index materials. Engheta (1998) has expressed a continuous transition from the original solution (α = 0) to the dual solution (α = 1) in the case of the positive-index medium π π + Z 0 Z H sin α , Efd = E cos α 2π 2 π Z 0 Z Hfd = −E sin α + Z 0 Z H cos α . (4.250) 2 2 For the negative refractive index, a form of Maxwell’s equations suggests a different transformation π π − Z 0 Z H sin α , Efd = E cos α π2 π2 Z 0 Z Hfd = E sin α + Z 0 Z H cos α . (4.251) 2 2 The two transformations are identified when, in the version for the negative-index material, the replacement α ↔ −α is made. Marqu´es et al. (2002) have measured transmission in a hollow metallic waveguide. They have found that it behaves similarly as the periodic array of thin metallic wires with its negative effective permittivity both when unloaded and when loaded. After a similarity to the left-handed medium had been achieved, the waveguide transmitted waves at about 6 GHz. A standard analysis of a metallic waveguide Green-Function Approach which still utilizes the effective magnetic permeability does not admit a radiation mode and contradicts the experiment. Another calculation has associated a mode with the split-ring-resonator medium anisotropy (Kondrat’ev and Smirnov 2003). The transmission may have been radiationless. Marqu´es et al. (2002) worked with the wavelength of 5 cm, while they had the longest waveguide with l = 36 mm. Quite remarkably, the hollow waveguide behavesas a one-dimensional plasma ω2 with effective permittivity eff ≡ eff (ω) = 0 1 − ωc2 . The assertion that the transmission of electromagnetic waves occurs due to the split-ring-resonator medium anisotropy has been dismissed by experiment (Marqu´es et al. 2003). Zharov et al. (2003) have noticed that so far properties of left-handed materials in the nonlinear regime of wave propagation have not been studied. They assume that a metallic structure is embedded into a nonlinear dielectric with a permittivity which depends on the strength of the electric field in a general way, D ≡ D (|E|2 ). As an application they consider a linear dependence which corresponds to the Kerr nonlinearity. The effective nonlinear dielectric permittivity eff (|E |2 ) is found to be a sum of the earlier result (4.242) and a third-order nonlinear term, D (|E|2 ). The effective magnetic permeability of the composite structure (for F 1) μeff (H) = 1 + Fω2 − ω2 + iΓω 2 ω0NL (H) differs from the earlier result (4.244) by the dependence of the eigenfrequencies of oscillations on the magnetic field. It holds that ω0NL (H) = ω0 X, where X is one of the stable roots of the equation |H |2 = α A2 E c2 (1 − X 2 )[(X 2 − Ω2 )2 + Ω2 γ 2 ] , X6 where H is an appropriate component of the field H, α = ±1 stands for a focusing or defocusing nonlinearity, respectively, E c is a characteristic electric field, A2 = 3 ω2 h 2 16 D0c20 , D0 = D (0), and γ = ωΓ0 . h is the width of the ring. Huang and Gao (2003) have been motivated by the paper (Chui and Hu 2002). They have investigated the effective refractive index spectra of a granular composite, in which metallic magnetic inclusions are embedded into the host medium. They calculate the effective permittivity and the effective permeability as well based on the Clausius–Mossotti relation (Grimes and Grimes 1991). Numerical results show that, by controlling the volume fraction of dispersive spherical particles in nondispersive host medium, a composite medium which is left handed in a certain frequency region can be prepared. They investigate a three-phase composite. Especially, by embedding dielectric and magnetic granules into the host medium and controlling the volume fractions of the two sorts of the granules, the left-handed composite medium can be realized. 4 Microscopic Theories A wave packet description allows an easy grasp of negative refraction (Huang and Schaich 2004). The interesting properties of the left-handed materials have been illustrated on the realistic assumption of losses and incidence of a Gaussian beam (Cui et al. 2004). It has been demonstrated that a negative-index material allows an ultrashort pulse to propagate with minimal dispersion (D’Aguanno et al. 2005). The negative-index material has been described with a lossy Drude model (ω) ˜ =1− 1 , μ(ω) ˜ =1− ω( ˜ ω˜ + iγ˜e ) ωpm ωpe 1 , ω( ˜ ω˜ + iγ˜m ) where ω˜ = ωωpe is the normalized frequency, ωpe and ωpm are the respective electric and magnetic plasma frequencies, and γ˜e = ωγpee and γ˜m = ωγmpe are the respective electric and magnetic loss terms normalized with respect to the electric plasma frequency. Attention has been paid to the group-velocity dispersion parameter d2 k β2 = dω 2 [Agrawal (1995)]. The frequencies for which β2 = 0 are plotted as zero group-velocity dispersion points in figures. It has been noted that these points are related to ω < ωpe and that no zero group-velocity dispersion point is present when ωpm = 1. ωpe For a macroscopic quantization, Milonni (1995) selects any frequency region, where absorption is negligible, i.e. away from absorption resonances. He assumes a uniform (or homogeneous) dielectric medium. The formulation simplifies and, although restricted in their range of validity, the results are applicable to a wide range of interesting and practical situations. He derives the electromagnetic energy density in a form which resembles the time-averaged linear dispersive energy (3.186). But the magnetic term of the energy density is expressed using the frequency-dependent magnetic permeability, i.e. at the same level of generality as the electric term. For some frequency ω a mode function Fω (r) can be considered which satisfies the transversality condition and the Helmholtz equation ∇ · Fω (r) = 0, ∇ 2 Fω (r) + ω (ω)μ(ω)Fω (r) = 0 c2 and appropriate boundary conditions. It is assumed that the mode function is normalized, (4.258) |Fω (r)|2 d3 r = 1. The monochromatic components at frequency ω of the fields are E(r, t) = Re{Cω α(t)Fω (r)}, c B(r, t) = Re −i Cω α(t)∇ × Fω (r) , ω (4.259) (4.260) Green-Function Approach D(r, t) = Re[(ω)Cω α(t)Fω (r)], i c Cω α(t)∇ × Fω (r) , H(r, t) = Re − μ(ω) ω (4.261) (4.262) where α(t) = α(0) exp(−iωt). The fields are expressed with these relations as in the classical optics, with an amplitude α(t), but also with a normalization constant Cω for the electric strength field. The energy is determined as the integrated energy density in the form H = d n(ω) |Cω |2 |α(t)|2 [n(ω)ω]. 8π μ(ω) dω The choice Cω = 4π μ(ω) n(ω) d[n(ω)ω] dω of the normalization constant gives the energy in the form H = 1 |α(t)|2 , 2 which corresponds to a harmonic √ oscillator. For the quantization of this oscillator ˆ where a(t) ˆ is an annihilation operator, in we do the replacement α(t) → 2ω a(t) (4.259)–(4.262). Milonni (1995) does mention (Drummond 1990), but he does not expound, or comment on, the first and the second time derivative of the electric displacement which we obtain in (3.206) on substituting the first of equations (3.207) for Eν . The approach of Drummond (1990) enables one to respect the inhomogeneity of the medium as noted in (Milonni 1995). Let us remark that this approach leads also to twice as many annihilation and creation operators as expected. Ignoring that also the number of such narrow-band fields may be greater than one, this ratio reminds us of the microscopic model assuming an oscillator medium with a single resonance. As different systems of units have been utilized we will compare some formulae of Milonni (1995) with those of Drummond (1990). Restricting himself to a single carrier frequency (ν = 1), Drummond (1990) has presented the following expansions: F G ∂ωk1 G H ∂k 1 1 ik·x−iωk1 t ˆ (x, t) = i k × e1kλ aˆ kλ e − H. c. , (4.266) D 1 2V k ζ˜1 (ωk ) k,λ F G G ∂ωk1 ∂k 1 1H 1 1 ik·x−iωk1 t ˆ ˆ a e μωk e − H. c. , (4.267) B (x, t) = −i 2V k ζ˜1 (ωk1 ) kλ kλ k,λ 4 Microscopic Theories where V is the quantization volume and (cf., (3.220)) ζ˜1 (ωk1 ) . ωk1 = k μ Regarding wide bandwidths or a simplification merely, Drummond (1990) has completed the following expansion: k ∂ω ∂k ˆ aˆ kλ ekλ eik·x−iωk t + H. c. , (4.269) Λ(x, t) = ˜ 2V k ζ (ωk ) k,λ where −1 ∂ωk ωk ωk ζ˜ (ωk ) . = 1− ∂k k 2ζ˜ (ωk ) Using relation (4.266), abandoning the Taylor expansion as in relation (4.269) and dividing by , we obtain a normalization constant in the form n(ω)ω 1 √ . (4.271) Cω 2ω = 2 2 d[n(ω)ω] dω Since μnr = nr and for different units in the relations under consideration, the replacement 2π ↔ 210 is performed, we have also 1 √ Cω 2ω = 2 μ(ω)2π ω n(ω) d[n(ω)ω] dω in conformity with Milonni (1995). The normalization constant is obvious from the expression √ ˆ t) = 1 Cω 2ω aˆ ω Fω (r) + aˆ ω† F∗ω (r) . E(r, 2 For a multimode field, the electric-field operator (4.273) is replaced by F G G 2πωβ μ(ωβ ) † ∗ ˆ t) = H ˆ ˆ a E(r, (t)F (r) + a (t)F (r) , β β β β d[n(ω )ω ] n(ωβ ) dωββ β β where Fβ (r) is a mode function for mode β obeying the transversality condition and the Helmholtz equation ∇ 2 Fβ (r) + ωβ2 c2 (ωβ )μ(ωβ )Fβ (r) = 0. Green-Function Approach For an effectively unbounded medium plane-wave modes can be used, i Fβ (r) → √ ekλ exp(ik · r), V with V a quantization volume and ekλ (λ = 1, 2) a unit polarization vector orthogonal to k. On introducing the notation γk = d[n(ωk )ωk ] c = dωk vg (ωk ) and assuming ekλ to be real for simplicity, we can write the operators for E, B, D, and H fields in the forms ˆ t) = i E(r, 2πωk μ(ωk ) n(ωk )γk V † × [aˆ kλ (t) exp(ik · r) − aˆ kλ (t) exp(−ik · r)]ekλ , 2πμ(ωk )c2 ˆ t) = i B(r, ωk n(ωk )γk V k,λ † × [aˆ kλ (t) exp(ik · r) − aˆ kλ (t) exp(−ik · r)]k × ekλ , 2πωk n(ωk )(ωk ) ˆ t) = i D(r, γk V k,λ † × [aˆ kλ (t) exp(ik · r) − aˆ kλ (t) exp(−ik · r)]ekλ , 2πc2 ˆ t) = i H(r, ωk n(ωk )μ(ωk )γk V k,λ † × [aˆ kλ (t) exp(ik · r) − aˆ kλ (t) exp(−ik · r)]k × ekλ . A comparison with the paper (Huttner et al. 1991) can be made. It is not so easy as the previous excercise, because sums are to be compared with integral expressions. In the sum (4.280) the factor 2πωk n(ωk )(ωk )vg (ωk ) = cV 2π ωk (ωk )vg (ωk ) vp (ωk )V is used. In the integral expression, n 2± is involved instead of (ωk ) (for μ(ωk ) = 1). Again the replacement 2π ↔ 20 must be done. The macroscopic quantization (Milonni 1995) leads only to optical polaritons, while the microscopic quantization, 4 Microscopic Theories although it is frequently named also a “macroscopic one”, introduces also acoustical polaritons. Milonni and Maclay (2003) have applied the theory of Milonni (1995) to the effects of a negative-index medium on an excited guest atom, the Doppler effect, radiative recoil, and spontaneous and stimulated radiation rates, and also to the spectral density of thermal radiation. A quantization scheme for the electromagnetic field interacting with atomic systems in the presence of dispersing and absorbing magnetodielectric media is contained in Dung et al. (2003). The magnetodielectric media include left-handed material. The spontaneous decay of an excited two-level atom is influenced by the environment. The atom embedded in a homogeneous, purely electric medium has the decay rate Γ = nΓ0 , √ where n = is the refractive index and Γ0 is the decay rate in free space. It follows also from Fermi’s golden rule. Now the magnetodielectric medium has the decay rate Γ = μnΓ0 , √ where the refractive index n = μ. This should be verified for the material with both μ and n negative. Moreover, the expression (4.284) should be generalized to include the magnetodielectric absorption. It is assumed that the magnetodielectric medium is causal and linear. It is characterized by a relative permittivity (r, ω) and a relative permeability μ(r, ω). For instance, (r, ω) = 1 + μ(r, ω) = 1 + 2 ωTe (r) 2 (r) ωPe , − ω2 − iωγe (r) 2 (r) ωPm , 2 2 ωTm (r) − ω − iωγm (r) (4.285) (4.286) where ωPe (r), ωPm (r) are the coupling strengths, ωTe (r), ωTm (r) are the transverse resonance frequencies, and γe (r), γm (r) are the absorption parameters. For notational convenience, the spatial argument has been omitted. The quantization has been performed by generalizing the theory expounded also ˆ˜ ω), . . . be the operators of the electric strength, in this book, Section 4.1.2. Let E(r, ˆ˜ ω) and M(r, ˆ˜ etc. in frequency space. Especially, let P(r, ω), respectively, be the operators of the polarization and the magnetization in frequency space. The operator-valued Maxwell equations are very similar to the classical ones. We present the electric constitutive relation ˆ˜ ω) + Pˆ˜ (r, ω), ˆ˜ ω) = [(r, ω)−1]E(r, P(r, 0 N Green-Function Approach with Pˆ˜ N (r, ω) being the noise polarization associated with the electric losses due to material absorption, and the magnetic constitutive relation ˆ˜ (r, ω), ˆ˜ ω) + M ˆ˜ M(r, ω) = κ0 [1 − κ(r, ω)]B(r, N ˆ˜ −1 where κ0 = μ−1 0 , κ(r, ω) = μ (r, ω), and MN (r, ω) is the noise magnetization associated with magnetic losses. From the viewpoint of the theory the noise terms guarantee that the field commutators do not depend on Re{(r, ω)} and Re{κ(r, ω)}. ˆ˜ ω), the right-hand side of which We present also the wave equation for E(r, ˆ˜ (r, ω), ˆ includes the noise polarization P˜ N (r, ω) and the noise magnetization M N 2 ˆ˜ ω) = iωμ ˆ˜j (r, ω), ˆ˜ ω) − ω (r, ω)E(r, ∇ × κ(r, ω)∇ × E(r, 0 N 2 c where ˆ˜j (r, ω) = −iωPˆ˜ (r, ω) + ∇ × M ˆ˜ (r, ω) N N N is the noise current. Dung et al. (2003) consider the solution ˆ˜ ω) = iωμ G(r, r , ω)ˆ˜jN (r , ω) d3 r , E(r, 0 where G(r, r , ω) is the classical Green tensor obeying the equation ω2 (r, ω)G(r, r , ω) = δ(r − r )1. c2 ∇ × κ(r, ω)∇ × G(r, r , ω) − Cf., (4.136), (4.137), and (4.138) above. In Section 4.2.7 will be referred to calculations, in which the following property of the Green tensor is used, G(r, r , −ω∗ ) = G∗ (r, r , ω). All of the commutation relations follow from the choice ˆP˜ (r, ω) = i 0 Im{(r, ω)} fˆ (r, ω) (4.293) N e π and ˆ˜ (r, ω) = M N κ0 Im{κ(r, ω)} ˆfm (r, ω) π and from the commutation relations for the fundamental bosonic vector fields (λ, λ = e, m) † ˆ [ ˆf λi (r, ω), ˆf λ j (r , ω )] = δλλ δi j δ(r − r )δ(ω − ω )1, ˆ [ ˆf λi (r, ω), ˆf λ j (r , ω )] = 0. (4.295) (4.296) 4 Microscopic Theories So far the time dependence has not been considered, but the complete notation should be as in (4.157) and (4.158) above. We may note that the vector potential ˆ˜ ω, 0) = 1 E ˜ˆ ⊥ (r, ω, 0), A(r, iω ˆ˜ ω, 0), cf. (4.142). We consider where Eˆ˜ ⊥ (r, ω, 0) means the transverse part of E(r, also the scalar potential with the property as quantized in (4.143) ˆ˜ (r, ω, 0), ˆ˜ ω, 0) = E − ∇ ϕ(r, ˆ˜ ω, 0), cf. (4.143). where Eˆ˜ (r, ω, 0) means the longitudinal part of E(r, ˆ˜ ω, 0), we may introduce also Having defined A(r, ˆ˜ ω, 0). ˆ˜ ω, 0) = ∇ × A(r, B(r, On dividing by iωμ0 , equation (4.289) yields ˆ˜ ω, 0) = 0, ˆ˜ ω, 0) + iωD(r, ˆ ∇ × H(r, where we have used also the two constitutive relations. The definition of the operator of the magnetic induction may be rewritten as an operator-valued Maxwell equation ˆ˜ ω, 0). ˆ˜ ω, 0) = iωB(r, ∇ × E(r, On the scalar multiplication of equation (4.289) with the ∇ operator from the left, we obtain that ω2 ˆ˜ ω) = iωμ (−iω)∇ · Pˆ˜ (r, ω), − 2 ∇ · (r, ω)E(r, (4.302) 0 N c or ˆ˜ ω) + Pˆ˜ (r, ω) = 0, ˆ ∇ · 0 (r, ω)E(r, (4.303) N or ˆ˜ ω) = 0, ˆ ∇ · D(r, ˆ˜ ω) + Pˆ˜ (r, ω). ˆ˜ ω) = (r, ω)E(r, D(r, 0 N ˆ˜ ω) = 0. ˆ ∇ · B(r, By definition, It can be shown that the equal-time commutation relations [ Eˆ i (r, t), Eˆ j (r , t)] = 0ˆ = [ Bˆ i (r, t), Bˆ j (r , t)], [0 Eˆ i (r, t), Bˆ j (r , t)] = −iεi jk ∂k δ(r − r )1ˆ (4.307) (4.308) Green-Function Approach are preserved, where ∂k means the partial derivative with respect to the kth component of the vector r. The interaction of charged particles with the medium-assisted electromagnetic field is studied using Hilbert space on which the components of the bosonic fields ˆf λi (r, ω) and operators rˆ α and pˆ α act. Here rˆ α and pˆ α are, respectively, the position and the canonical momentum operator of the αth particle of mass m α and charge qα . The charge density ρˆ A (r, t) = qα δ r1ˆ − rˆ α (t) ρˆ A (r , t) d3 r 4π 0 |r − r | and the scalar potential of the particles ϕˆ A (r, t) = are introduced. In the minimal-coupling scheme and for nonrelativistic particles, the total Hamiltonian reads ˆ (t) = H ωˆfλ (r, ω, t) · ˆfλ (r, ω, t) dω d3 r 1 ˆ rα (t), t) 2 pˆ α (t) − qα A(ˆ + 2m α α 1 ρˆ A (r, t)ϕˆ A (r, t) d3 r + ρˆ A (r, t)ϕ(r, + ˆ t) d3 r. 2 The third and last terms can be written in the forms 1 2 ρˆ A (r, t)ϕˆ A (r, t) d3 r = qα qα 1 , 2 α α 4π 0 |ˆrα (t) − rˆ α (t)| α=α ρˆ A (r, t)ϕ(r, ˆ t) d3 r = qα ϕˆ rˆ α (t), t . ˆ 0 (r, t), ˆ 0 (r, t), B In this new situation the old operators should be denoted as E ˆ ˆ ˆ D0 (r, t), H0 (r, t), except the bosonic vector fields fλ (r, t). Then we introduce the field operators in the presence of charge particles ˆ t) = Bˆ 0 (r, t), ˆ t) = Eˆ 0 (r, t) − ∇ ϕˆ A (r, t), B(r, E(r, ˆ t) = D ˆ 0 (r, t) − 0 ∇ ϕˆ A (r, t), H(r, ˆ t) = H ˆ 0 (r, t), D(r, (4.314) (4.315) 4 Microscopic Theories the new operators. The exposition in the literature may have been more explicit, ˆ (r, t), because the old operators have not been renamed and the new operators are E etc. These operators obey the time-independent and time-dependent Maxwell equations, where · · ˆjA (r, t) = 1 qα rˆ α (t) δ r1ˆ − rˆ α (t) + δ r1ˆ − rˆ α (t) rˆ α (t) 2 α is the operator of the current density of the particles. In the literature it has been shown that the Hamiltonian (4.311) generates the time-dependent Maxwell equations and the Newton equations of motion for the charged particles. The authors treat the spontaneous decay of an excited two-level atom. They solve the problem of the time development of the state |ψ(t) , |ψ(0) = |{0} |u , whose alternative are the quantum states |1λ (r, ω) |l . Here |l is the lower state whose energy is set equal to zero and |u is the upper state of energy ωA . ωA † is a transition frequency, |1λ (r, ω) ≡ ˆfλ (r, ω)|{0} . In a formal expression of the state |ψ(t) , Cu (t), Cel (r, ω, t) and Cml (r, ω, t) are the respective coefficients. These coefficients satisfy linear differential equations and the initial conditions Cu (0) = 1 and Cλl (r, ω, 0) = 0. In the differential equations, both the tensor-valued Green function and the vector dA , the transition dipole moment, occur. As expected, the solution reminds us of the Weisskopf–Wigner theory. In the integro-differential equation the vector dA and the tensor-valued Green function still occur in a relatively simple fashion. The decay rate Γ can be expressed in terms of them and the shifted transition frequency ω˜ A is utilized. The case of nonabsorbing bulk material is treated first. On this assumption Γ = Re[μ(ω˜ A )n(ω˜ A )]Γ0 , where Γ0 = ω˜ A3 dA2 3π 0 c3 is the free-space decay rate, but taken at the shifted transition frequency. When (ω˜ A ) and μ(ω˜ A ) have opposite signs, then the refractive index is purely imaginary and Γ = 0. In contrast, for nonabsorbing left-handed material, the rate Γ is given by relation (4.317) without the notation Re. The case of atom in a spherical cavity is treated second. The cavity may be conceived as a way of removing the singularity of the tensor-valued Green function. Forms of ΓΓ0 valid for an atom at an arbitrary position inside a spherical free-space cavity surrounded by an arbitrary spherical multilayer material environment may be applied (Dung et al. 2003, Li et al. 1994, Tai 1994). Green-Function Approach In fact, those formulae are specialized for the atom situated at the centre of the cavity and the otherwise homogeneous material environment. Results of Scheel et al. (1999a, 2000) have been amended. The ratio of the decay rates has been calculated in the literature as a function of the (shifted) atomic transition frequency ω˜ A . Large cavities are considered first. Then smaller ones are characterized. The number of clear-cut cavity resonances decreases as the radius of the cavity decreases. A cavity is considered whose radius is much smaller than the transition wavelength. Every time a comparison is made between (a) dielectric matter, (b) magnetic matter, and (c) magnetodielectric matter. The decay rate in the cases (a) and (b) inside a band gap is low and, in the case (a), it also increases much due to each cavity resonance. In the case (c) this rate is low, if the band gap may belong either to the permittivity or to the permeability. Also the role of the resonances is similar to the case (a) or (b). But in the overlap of the electric and magnetic band gaps, the decay rate increases. Using their results, the authors may address also the local-field corrections. The description of an atom should not be derived using the macroscopic field, even if the description, such as relation (4.317), does not involve the field. The theory of the authors directly applies to the real-cavity model. But simplifying assumptions are too weak to work for the magnetodielectric matter. The complexity relies on the dependence on z = R ω˜cA . The expansion in powers of z must begin with a term proportional to R −3 , continue with R −1 , and an already constant term and the O(R) term. If the material absorption may be disregarded, or when and μ may be taken for real numbers, the terms proportional to R −3 and R −1 may be left out. Hence 3(ω˜ A ) 1 + 2(ω˜ A ) 2 Re[μ(ω˜ A )n(ω˜ A )]Γ0 . For strong dielectric absorption and R small we have a different approximation Γ≈ 9 Im{(ω˜ A )} |1 + 2(ω˜ A )|2 c ω˜ A R 3 Γ0 . The decay may be regarded as being purely radiationless (Dung et al. 2003). Felbacq and Bouchitt´e (2005) have found a unified theoretical approach to the left-handed materials. They have used a renormalization group analysis, which takes into account the coupling between each resonator. They have checked the theoretical results numerically. They can explain the result by Pokrovsky and Efros (2002) that, by embedding wires in a medium with negative μ, one does not get a left-handed medium. Ozbay et al. (2007) have provided 17 references to reports on metamaterials appropriate to a wide range of operating frequencies such as radio, microwave, millimetre wave, far-infrared, mid-infrared, near-infrared frequencies, and even visible wavelengths. In that article results obtained from experiments at the microwave frequencies have been reported. 4 Microscopic Theories Roppo et al. (2007) have studied the pulsed second-harmonic generation in positive- and negative-index media. In the positive-index media and in the presence of phase mismatch two forward-propagating components of the second-harmonic are generated (Bloembergen and Pershan 1962). In the pulsed generation, the second harmonic signal comprises a pulse which walks off (and is recognized in much work) and a second pulse which is “captured” and propagates under the pump pulse. 4.2.7 Application to Casimir Effect In this subsection we will pay attention to papers which apply the quantization of the electromagnetic field in dispersive and absorbing media to the Casimir effect. Casimir’s work had its origin in a problem of colloidal chemistry, namely, the stability of hydrophobic suspensions of particles in dilute aqueous electrolytes (Sparnaay 1989, Milonni 1994). Such suspensions are said to be stable if the particles do not coagulate. The particles are charged. Each particle is surrounded by ions of opposite charge. We expect that a repulsive force between the particles separated by a distance d increases more rapidly than an attractive force as d → L D + 0, where L D is a Debye length. We realize also that the repulsive force between these particles decreases more rapidly than the attractive force as d → ∞. The attractive force should be obtained by integrating the pairwise forces between atoms, assuming an interatomic force given by the London–van der Waals interaction (London 1930). Now we mention the original idea of Casimir. In 1948 he gave expression for the attractive force per unit area FC (d) = cπ 2 , 240d 4 where d is a distance between two uncharged, perfectly conducting parallel plates. Milonni (1994) has reviewed a standard calculation of the Casimir force. It is the calculation of the difference between the zero-point field energies for finite and infinite plate separations. This difference is interpreted as the potential energy of the system. In calculating this energy, the Euler–Maclaurin summation formula is used. The initial difference between a divergent sum over modes of the confined field and a divergent integral is modified to a difference between a convergent sum and a convergent integral, featuring a formal dependence on a function f (k). Here k is the wavenumber of a mode. Finally it emerges that the result does not depend on the function f (k), if f (k) satisfies some conditions. The attractive force between the plates is then obtained as the derivative of the potential energy with respect to the distance d on changing the sign. The same result has been obtained by considering the radiation pressure (Milonni 1988). In calculating it, the Euler–Maclaurin summation formula has been applied. Let us note that only a d-independent factor is determined here. The same expression Green-Function Approach can be obtained also by a modification of the standard calculation, so the derivative with respect to the distance and with the changed sign can be taken independent of finding the constant. As the function f (k) depends on km ≈ a10 , where a0 is the Bohr radius, we see that a different function f (x) depends on km d ≈ ad0 , a number of the Bohr radii spanning the distance between the plates. These calculations are presented in Chapters 2 and 3 in Milonni (1994). Much later, in Chapter 7, he mentions forces between dielectric slabs. In the early 1950s, predictions of microscopic theories did not agree with experimental results. Milonni (1994) mentions the Lifshitz macroscopic theory (Lifshitz 1956). He does not expound this theory in fact. He indicates that some results follow the Casimir approach. The force between two semi-infinite dielectric slabs separated by a different dielectric medium or vacuum is derived. The case of vacuum between slabs has been treated by Lifshitz. The general case of a dielectric medium between slabs has been treated by Schwinger et al. (1978). The physical basis of Lifshitz’s calculations is not so difficult to understand. He was first to use a random field, K(r, t), corresponding to some real or complex noise polarization. At zero temperature, this field satisfies the fluctuation–dissipation relation (the Gaussian system) K i (r, t)K j (r , t) = 2 Im{(ω)}δi j δ(r − r ) . Let us recall that the (Kronecker and Dirac) delta functions in the commutator between noise polarizations are multiplied by π 0 Im{(ω)}. From relation (7.69), p. 233 in Milonni (1994), it can be seen that K(r, t) = 4π P(r, t) in the Gaussian system of units. Therefore, it would be appropriate to convert K(r, / t) into the 0 . Then the International System of units similarly as D(r, t), using the factor 4π fluctuation–dissipation relation becomes K i (r, t)K j (r , t) = 0 Im{(ω)}δi j δ(r − r ) 2π We recognize the right-hand side as a half of the appropriate commutator (K(r, t) = P(r, t) in the SI). Exactly, K(r, t) ≡ K(r, ω, t) and we miss the functional factor δ(ω − ω ) in the Lifshitz theory. Lifshitz (1956) then calculates the force in terms of the Maxwellian stress tensor, which we present here in the form 1 B(r, t)B(r, t) T(r, t) = 0 E(r, t)E(r, t) + μ0 1 2 1 B (r, t) 1. (SI). − 0 E2 (r, t) + 2 μ0 In Kupiszewska and Mostowski (1990) and Kupiszewska (1992), the Casimir effect is studied on a restriction to the one-dimensional version. This means that 4 Microscopic Theories only wave vectors normal to the surface are taken into account in the calculations. It is also assumed that the temperature is zero, T = 0. In Kupiszewska and Mostowski (1990), the Casimir effect for the case of two non absorbing dielectric slabs has been studied in detail. The electromagnetic field has been quantized in the presence of a dielectric medium. The use of the Maxwellian stress tensor has been considered. It has been noted that the value of the appropriate component of the stress tensor is equal to the energy density for the one-dimensional calculation. An infinite expression has been regularized by means of an exponential cutoff function. The Casimir force has been calculated in the limit of semi-infinite slabs. A result has been provided for any slab thickness, but for a small reflection coefficient. In Kupiszewska (1992), the absorbing dielectric slabs have been considered. The medium has been modelled as a continuous field of quantum harmonic oscillators interacting with a heat bath. The atoms and the electromagnetic field have been described with equations of motion and the long-time solution has been found. As a generalization of the previous result, two contributions to the Casimir effect have been distinguished. Weigert (1996) has considered several modes between the perfectly conducting metallic plates to be in a squeezed vacuum state. At a given time instant the smallest possible expectation value of the energy in a neighbourhood of one of the mirrors is obtained through a calculation. It has been proposed to generate squeezed modes inside such a cavity and to measure an increase of the Casimir force. Weigert (1996) admits that the state of the system is not stationary, but he does not consider just the Lorentz force, only the stress tensor. He calls it energy stress tensor, but he calculates only with a stress tensor. The theoretical work on the Casimir force outweighs experimental work. On regarding dielectric measurement, an accuracy of better than 10% has been reported (Lamoreaux 1997). An introductory guide to the literature on the Casimir force has been published (Lamoreaux 1999). Electromagnetic field quantization in an absorbing medium has been readdressed, and the Casimir effect both for two lossy dispersive dielectric slabs and between two conducting plates was analysed by Matloob (1999a,b) and by Matloob et al. (1999). Matloob and Falinejad (2001) have investigated the Casimir effect between two dispersive absorbing slabs in three dimensions. The dielectric function of the slabs has been assumed to be an arbitrary complex function of frequency satisfying the Kramers–Kronig relations. The Maxwellian stress tensor has been used to evaluate the vacuum radiation pressure of the electromagnetic field on each slab in terms of vacuum expectation values. These averages have been expressed using the fluctuation–dissipation theorem and Kubo’s formula (Landau and Lifshitz 1980). A simple relation to the imaginary part of a tensor-valued Green function has been recognized. So, the infinities of the stress tensor and the regular expression which diverges to them have been obvious. No explicit electromagnetic field quantization has been made. In a certain step of calculations the infinities cancel. Attention has been paid to various limits of the general expression and to the Lorentz model of the Green-Function Approach dielectric function. The effect of finite temperature on the Casimir force between two dielectric slabs has also been considered. Kurokawa and Wakayama (2002) have introduced a Casimir energy for a compact Riemann surface of genus at least 2 and have related it to the Selberg zeta function. The scope of the paper has been the application of the Selberg trace formula to such a Riemann surface similar to methods of mathematical physics and quantum chaos. da Silva et al. (2002) have generalized the so-called thermofield dynamics via an analytic continuation of the Bogoliubov transformations. It has been achieved that a field in arbitrary confined regions of space and time is described. In the case of an electromagnetic field, the energy-momentum tensor has been subjected to the generalized Bogoliubov transformation. The Casimir effect has been calculated for zero and nonzero temperature. The generalized Bogoliubov transformation has been applied also to the description of the field fulfilling the Dirichlet boundary conditions (at a conducting plate) and the Neumann boundary conditions at a permeable plate (the Casimir–Boyer model). Tomaˇs (2002) has considered the Casimir effect in a dispersive and absorbing multilayered system using the Minkowski stress tensor method. He has calculated the Casimir force in a lossless dispersive layer of an otherwise absorbing multilayer by employing the quantized field operators as emerge from the scheme expounded in this chapter. He has presented the expression obtained and has compared it with the result of Zhou and Spruch (1995) who had applied the surface mode summation method to purely dispersive media. As an illustration he has calculated the Casimir force on a dielectric slab in a planar cavity with realistic mirrors. The difference between Casimir energies in two distinct layers has been established and the difference between Casimir forces in two such layers has been presented provided that their refractive indices are equal. Boyer (2003) has presented a model, where physical ideas are transparent and the calculations allow easy numerical evaluation. The model has no direct connection with experiment. One-dimensional analogues of three-dimensional concepts and their properties are studied. A simplified thermodynamics is evoked. He assumes a one-dimensional box of length L at zero temperature T = 0. But the onedimensionality assumption reaches so far that all the virtual photons, if considered, should have the same, or just the opposite direction of the wave vector. He introduces the Casimir energy ΔUzp (x, L) for the case, where a partition is present in the box at a position x. He considers boundary conditions, let us say for intervals (0, x) and (x, L), namely, the Dirichlet or Neumann boundary conditions. The Dirichlet condition corresponds to a perfectly conducting boundary condition describing a perfectly conducting material in three spatial dimensions and the Neumann condition is simplified from an infinitely permeable boundary condition describing an infinitely permeable medium for electromagnetic waves. The boundary conditions at x = 0 and x = L are enforced by the walls. In fact, x cannot be used for the coordinate, which is denoted by x instead. The boundary condition at x = x is enforced by the partition. 4 Microscopic Theories Boyer (2003) lets α be 0 or 1, where α = 0 for like boundary conditions for partition and walls, and α = 1 for unlike boundary conditions. He finds that ΔUzp (x, L) = −πc 1 α − 24 16 1 1 4 + − x L−x L For α = 0, we may state that, off the centre of the box, forces act which repel the partition from the nearest wall. He speaks of an attractive force between the partition and the walls. For α = 1, we may say that, from the centre of the box, attractive forces act. He mentions a repelling force between the partitions and the walls. The force between a conducting plate and a permeable plate was given, e.g. in Boyer (1974). The zero-point-energy limit is contrasted by the high-temperature energyequipartition limit. This corresponds to the Rayleigh–Jeans spectrum of radiation. Then ΔURJ (x, L , T ) = 0. He discusses also the Casimir forces at finite temperature. Emig (2003) has developed a novel approach for calculating the Casimir forces between periodically deformed objects. Theories for realistic geometries have been developed in response to high-precision measurements (Mohideen and Roy 1998, Chan et al. 2001a,b, Bressi et al. 2002). The theories do not comprise rigorous, nonperturbative methods for calculating the force. The simplest and commonly used approximation is the proximity force theorem. For corrugated metal plates, it fails at a small corrugation length. A different approximation is the pairwise summation of renormalized retarded van der Walls forces. However, Lifshitz’s theory for dielectric bodies demonstrates that, in general, the interaction cannot be obtained from a pairwise summation. The results do not agree with those from the zeta-function method (Barton 2001) in a situation, where this method can be used. Emig (2003) has considered the force between a rectangular corrugated plate and a flat one. This geometry cannot be treated by perturbation theory due to the rectangular edges. The force has been found by the non-perturbative method. It was respected that in the most precise experiments the Casimir force between rough metallic plates was measured (Genet et al. 2003). It has been only one of the real conditions which differ from the ideal situation and assumptions of the theory. Others are imperfect reflection, nonzero temperature, and a geometry different from the parallel plates. The temperature effect has been neglected, because it is significant at large distances, while roughness corrections are more necessary at the smallest distances typical of the experiments. The proximity force approximation has been tested on the case where the force is measured between a plane and a sphere. The approximation leads to correct results when the radius of the sphere is much larger than the distance of closest approach. In the case of metallic plates, the proximity force approximation is only valid for the roughness spectrum containing small enough wave numbers. While mean number of waves spanning the interplate distance (multiplied by 2π) may be informative of the accuracy of the Green-Function Approach approximation, Genet et al. (2003) have proposed a specific roughness sensitivity and have considered its expectation value. Many problems are formulated when the perfectly conducting plates of Casimir are replaced by other perfectly conducting surfaces. It can be utilized that the Casimir problem is not modified or generalized to dielectric media at the same time. For example, the rectangular cavity has been considered by Lukosz (1971) and Maclay (2000). The generalization to a system of conducting shells has also been realized, cf., (Plunien et al. 1986). The rectangular cavity of sides (a1 , a2 , a3 ) depends on the three parameters, Λ ≡ (a1 , a2 , a3 ). The system of conducting shells depends on another system of parameters Λ. Mazzitelli et al. (2003) have computed the Casimir interaction energy between two concentric cylinders. To this end they have used approximate semiclassical methods and the exact mode-by-mode summation method. They characterize a method according to Schaden and Spruch (1998, 2000). In this method the zeropoint radiation is described with trajectories of a particle, and so as a real radiation. They mention the well-known decomposition of the spectral density into a smooth term and the oscillating contribution. Periodic orbit theory relates oscillations in the quantum level density of a given Hamiltonian to the periodic orbits in the corresponding classical system. They derive an energy approximation using the periodic orbit theory, E sem = − c 4πa 2 b 1 b f vw 4 N , v, w , a w≥0 v≥˜v(w) v a where is the “quantization” length, a, b, b, are radii of the cylinders, v˜ (w) is a< the least positive integer v such that cos π wv > ab , f vw is 1 for v = 2, w = 0 and is 2 otherwise, / α − cos π wv α cos π wv − 1 . (4.328) N (α, v, w) ≡ 2 1 + α 2 − 2α cos π wv They further write sem sem E sem = E w=0 + E w≥1 , sem sem where E w=0 (E w≥1 ) are obtained from relation (4.327), where condition w ≥ 0 is sem replaced by the condition w = 0 (w ≥ 1). They show that, for ab ∼ 1, E sem ∼ E w=0 , where √ ab cπ 3 sem = − . (4.330) E w=0 360 (b − a)3 Mazzitelli et al. (2003) mention the proximity theorem (Derjaguin and Abrikosova 1957, Derjaguin 1960). For its application they assume two parallel plates of area A, 4 Microscopic Theories not of different areas. This difference does not suggest decision whether the larger or the smaller area should be chosen. Still the relation EP = − cπ 2 A 720 (b − a)3 (4.331) ? is applied to the plates “wound” into a cylinder. Here A = 2π a,√2π b. Obviously, the theory of periodic trajectories suggests the choice A = 2π ab. But only the numerical calculation shows that the approximation is relatively good for 1 < ab < 4. Further they compute the exact Casimir energy for the coaxial cylinders using the mode-by-mode summation method (Nesterenko and Pirozhenko 1997). The final result has the form 1 1 + c, (4.332) E ex = E 12 − 0.01356 a2 b2 where E 12 = − × c 2 2 2π ∞ kz / d k z2 − y 2 ln [Fn12 (iy, 1, α)] dy dk z , dy In (y)K n (αy) I (y)K n (αy) 1 − n , Fn12 (iy, 1, α) = 1 − In (αy)K n (y) In (αy)K n (y) α = ab , In (z) and K n (z) are the modified Bessel functions and the MacDonald functions, respectively. Again on the condition α ∼ 1 the relations simplify E ex ∼ E 12 1 cπ 3 1 ∼ − +O . 360a 2 (α − 1)3 (α − 1)2 The semiclassical approximation is valid. In contrast, the semiclassical energy for an isolated cylinder vanishes and the exact energy for a cylinder of radius a is E C = −0.01356 c . a2 They present also numerical results. Ahmedov and Duru (2003) have calculated the Casimir energies with respect to the previous work such as Mazzitelli et al. (2003) and Høye et al. (2001). Let us consider the region between two close coaxial cylinders. We assume that the cylinders have the radii r0 < r1 . Then the Casimir energy per unit height is Green-Function Approach E cyl π3 R =− 720Δ3 15 Δ2 1+ 2π 2 R 2 √ where Δ ≡ r1 − r0 , R = r0r1 . Let us imagine a flat space which is periodic in the z coordinate unlike the Euclidean space. Let us consider the region between two cylinders or tori in this space provided that the axis of these cylinders is the z axis. Then the Casimir energy is π3 RL 15 Δ2 1 + , (4.338) E tor = − 720Δ3 2π 2 R 2 where L is the length of the flat space measured parallel to the z axis. Let us analyse the ring with a rectangular cross-section. The Casimir energy is π3RL Rζ (3) + , (4.339) 3 720Δ 16Δ2 where L is the height of the cylinders and ζ (z) is the Riemann ζ -function. Let us consider two close concentric spheres. We assume that the spheres have the radii r0 < r1 . Then the Casimir energy is π 3 R2 5Δ2 1+ . (4.340) E sph = − 360Δ3 4π 2 R 2 E box ≈ − Let us evaluate two close coaxial cones. We assume that the cones have the apex angles θ0 < θ1 ≤ π2 . Then the Casimir energy per unit volume is Θπ 3 , 720r 4 Δ3 π2 + O(Δ−3 ). 1440r 4 Δ4 E ≈− √ where Δ ≡ θ1 − θ0 , Θ ≡ sin θ0 sin θ1 , and r is the distance from the common vertex. Dividing the right-hand side by 2πΘΔ to “correct” the energy density (Ahmedov and Duru 2003), we obtain that E This is similar to the solution of the wedge problem (Deutsch and Candelas 1979), 2 1 Δ2 π E =− − , (4.343) 1440r 4 Δ2 Δ2 π2 where Δ is the angle between the half-planes of the boundary. Geyer et al. (2003) have begun with the state of the research of the Casimir effect. They have also mentioned some difficulty with calculations of the temperature effect on the Casimir force between real metals of finite conductivity. They distinguish five different approaches. According to the fifth approach, the description of the thermal Casimir force can be obtained by the Leontovich surface impedance boundary condition. 4 Microscopic Theories Three domains of frequencies are distinguished: The region of the normal skin effect for low frequencies, the region of the anomalous skin effect, or relaxation domain for higher frequencies, and the region of the infrared optics for yet higher frequencies. In the region of the anomalous skin effect and in the relaxation domain metal cannot be described by any dielectric permittivity depending only on the frequency. The space dispersion is also essential. Otherwise, the theoretical basis is as follows. Boundary conditions are introduced Et = Z (ω)Bt × n, where Z (ω) is the surface impedance of the conductor, Et and Bt are the tangential components of the (Fourier transformed) electric and magnetic fields, and n is the unit normal vector to the surface (pointing inside the metal). For an ideal metal we have Z ≡ 0 and for real nonmagnetic metals |Z | 1 holds (Landau et al. 1984). The surface impedance is determined over the whole frequency axis, even though it is different in each of the three domains. It is respected that the main contribution to the Casimir free energy and force is given by the frequency region centred around the so-called characteristic frequency ωc = 2ac , where a is the space separation between two metal plates. Relation (43) in Geyer et al. (2003) is an analogue of relation (8.62) in the book (Milonni 1994). An approximate expression (45), which is not reproduced here, has been derived for the case of a sphere above a plate made of a real metal. Geyer et al. (2003) remind of the fact that at the temperature T = 0 only the anomalous skin effect and infrared optics occur. For Au the transition frequency Ω = 6.36 × 1013 rad/s is obtained. They determine the characteristic frequency ωc at each separation distance, and then they fix the proper impedance function. In a figure, which is not reproduced here, graphs of both impedance functions are plotted. A “transition” impedance function does not exist evidently. The correction factors EE(a) (0) (a) to the Casimir energy agree quite well in the region of the infrared optics and in the transition region. In the region of the anomalous skin effect, the results due to the right and wrong choice differ significantly. They further present numerical results for T = 70 K and T = 300 K. At these temperatures, the normal skin effect occurs already, but only at larger separations. The relative thermal correction (Geyer et al. 2003) is calculated for 0 < a ≤ 5 μm and only by the use of the impedance of infrared optics and of anomalous skin effect. Raabe et al. (2003) have underscored that one-dimensional quantization schemes are not rigorous enough when the Casimir force between absorbing multilayer dielectrics is calculated. At the beginning they warn that the “mode summation” method, which was employed by H. Casimir himself, cannot be generalized to the case of absorbing bodies, because in such bodies there are no modes. Then they characterize three procedures: (1) The electromagnetic field and the material bodies are treated macroscopically. Explicit field quantization is not performed, but the field correlation functions are written down in conformity with statistical thermodynamics. Green-Function Approach (2) The electromagnetic field and the material bodies are quantized at a microscopic level. The bodies are described by appropriate model systems. Simplifying assumptions are made. (3) The electromagnetic field and the material bodies are described macroscopically as in the first procedure. But the medium-assisted electromagnetic field is quantized by using an infinite set of appropriately chosen bosonic basic fields. Raabe et al. (2003) have reserved the first method for Lifshitz (1955, 1956). In this context they have mentioned (Schwinger et al. 1978) and have characterized the paper (Matloob and Falinejad 2001). The mentions about the second method comprise the note that the calculations were carried out only for one-dimensional systems. Let us refer only to Kupiszewska and Mostowski (1990), Kupiszewska (1992). The third method is used in Raabe et al. (2003), but the authors also refer to Tomaˇs (2002). Then they add two further methods. One is the surface-mode approach in the nonretarded limit (van Kampen et al. 1968) and including retardation (see references in the cited paper). The other is the scattering approach (Jaekel and Reynaud 1991). Raabe et al. (2003, 2004) have reproduced the essential traits of their quantization scheme. Then they describe the multilayer structure. They consider n − 1 layers of thicknesses dl > 0, l = 1, . . . , n − 1. These layers have the boundaries zl , l = 1, . . . , n, which have the properties zl+1 = zl + dl , l = 1, . . . , n − 1. They introduce z 0 = −∞, z n+1 = +∞, and so, inclusive of the substrate and the superstrate, there are n + 1 layers. The permittivity is (r, ω) = l (ω) for zl < z < zl+1 , l = 0, . . . , n. For the tensor-valued Green function, we are referred to the paper (Tomaˇs 1995). From the expression of the Green tensor it follows that it conserves its form on the intervals (zl , zl+1 ) × (zl , zl +1 ), l = 0, . . . , n. If both spatial arguments are in the same layer, l = l , we let Gl (r, r , ω) denote the form G(r, r , ω). We introduce the scattering part Glscat (r, r , ω) = Gl (r, r , ω) − Glbulk (r, r , ω) for r = r, where Glbulk (r, r , ω), the bulk part, is the solution for the case that the medium of the lth layer fills up the whole space. The values of the scattering part of the Green tensor for r = r are obtained in the coincidence limit of the position vectors. It is assumed that a “cavity”, which separates walls, is the jth layer, 1 < j < n − 1. The walls are composed of j − 1 layers l = 1, . . . , j − 1 and of n − 1 − j layers l = j + 1, . . . , n − 1. If some of the walls are semi-infinite, the numbering may differ a little. Before the Casimir force is calculated from the stress tensor, a more general tensor T(r, r , t) = Te (r, r , t) + Tm (r, r , t) 1 − 1 Tr{Te (r, r , t) + Tm (r, r , t)} 2 4 Microscopic Theories is defined, where 1 is the second-rank unit tensor and ˆ t)E(r ˆ , t) , Te (r, r , t) = D(r, ˆ t)H(r ˆ , t) (r = r ). Tm (r, r , t) = B(r, (4.348) (4.349) The expectation values are calculated in thermal equilibrium, a stationary state of the field. (i) Basic equation. For finite temperatures T , they employ the statistical operator 0 1 ˆ H −1 , ρˆ = Z exp − kB T where ˆ H Z = Tr exp − kB T 1: , and kB is the Boltzmann constant. In relations (4.348) and (4.349), ρˆ may be written explicitly. On substituting relation (4.350) into the modified relation (4.348), we get 2 ∞ ω ω coth Im{(r, ω)G(r, r , ω)} dω (4.352) Te (r, r ) = π 0 2kB T c2 and on substituting relation (4.350) into the modified relation (4.349), we obtain ← − ∞ ω Tm (r, r ) = − coth (4.353) ∇ × Im{G(r, r , ω)} × ∇ dω. π 0 2kB T On the left-hand side of relations (4.352) and (4.353), the time argument t has been dropped, since the right-hand sides do not depend on t. Although the generalization to a “cavity” containing dielectric medium has been known, Raabe et al. (2003, 2004) have restricted themselves to the free space between two stacks. The Casimir force (per unit area) is given by the zz-component of the stress tensor (4.324). We may modify relations (4.347), (4.352), and (4.353) by the way of writing the superscripts scat. Then we introduce the stress tensor Tscat (r, r ). Tscat (r) = lim r →r With respect to a layer, it is suitable to introduce the tensor Tscat Tscat j (r) = lim j (r, r ). r →r Green-Function Approach Now a number of concepts and pieces of notation are introduced. Propagation constants ω2 l (ω) βl = βl (q, ω) = − q2 (4.356) c2 and reflection coefficients for σ -polarized waves at the top (+) and bottom (−) of the jth layer are defined that are σ rn+ = 0, σ = s, p and are calculated from the recurrence relations βl βl s − 1 + + 1 exp (2iβl+1 dl+1 ) r(l+1)+ βl+1 βl+1 s rl+ = , βl s + 1 + ββl+1l − 1 exp (2iβl+1 dl+1 ) r(l+1)+ βl+1 and p rl+ βl βl+1 βl βl+1 l l+1 + + βl βl+1 βl βl+1 l l+1 exp (2iβl+1 dl+1 ) r(l+1)+ p exp (2iβl+1 dl+1 ) r(l+1)+ σ are The coefficients rl− σ r0− =0 and the recurrencies for the others are analogous, which are formally obtained from relations (4.358) and (4.359) on replacements l → l, l + 1 → l − 1 and on the change of the subscript + of the reflection coefficient to −. Also denominators of the fractions for multiple reflections σ σ rl− exp (2iβl dl ) Dσ l = Dσ l (q, ω) = 1 − rl+ are introduced. Finally, ∞ ω scat Tzz, = − coth j 2π 2 0 2kB T 9 : ∞ −1 σ σ × Re qβ j exp 2iβ j d j Dσ j r j−r j+ dq dω . 0 scat As Tzz, j does not depend on the space point in the jth layer, the argument r has been dropped. 4 Microscopic Theories (ii) Imaginary frequencies. We introduce ξm = 2mπ kB T , m integer. Since the permittivity is positive on the positive imaginary frequency axis, we introduce κj = ξ 2 j (iξ ) + q 2. c2 Exploiting the analytical properties of the ω integrand in relation (4.363), we arrive at an expression of the integral with respect to ω by a residue series. Finally scat Tzz, j = ∞ kB T 1 1 − δm0 π m=0 2 . - ∞ −1 σ σ qκ j exp −2κ j d j Dσ j r j−r j+ dq × 0 which may be regarded as a generalization of the famous Lifshitz formula Lifshitz (1955), (1956). For m = 0, the term with ω = 0 is peculiar and it should be replaced scat may simby the limit ω → 0+. To obtain Tzz, j in the zero-temperature limit, we C ply repeat the derivation from relation (4.363). Of course, replacement ∞ m=0 → dξ can be realized in relation (4.366). 2π kB T (iii) One-dimensional systems. A comparison with the three-dimensional case is made only for T = 0. Contrary to the three-dimensional description, the sum with respect to σ is omitted, since in the one-dimensional system normal incidence occurs and the description can be restricted to a single polarization. Further one of the integrals is replaced by a multiplication with a constant, 1 4π 2 d2 q → 1 , A q where A is the normalization area. Also analytical expressions for specific distance laws in the zerotemperature limit are derived. For example, it is shown that the Casimir force between two single-slab walls behaves asymptotically as d −6 instead of d −4 in the Green-Function Approach large-distance asymptotic regime. Results for single-slab walls for periodic multilayer wall structure are illustrated in figures. Chen et al. (2003) study the difference of the thermal Casimir forces at different temperatures between real metals. If the temperatures are fixed, the difference of the Casimir forces increases with a decrease of the separation distance. The configurations of two parallel plates and a sphere above a plate are considered. In the case of two parallel plates, they utilize a perturbation result from the paper Bordag et al. (2000) to express the thermal Casimir force denoted by Fpp (a, T ), where a is the separation distance and T is a temperature. They concentrate on the difference ΔFpp ≡ ΔFpp (a, T1 , T2 ) = Fpp (a, T2 ) − Fpp (a, T1 ), where T1 and T2 are temperatures. For example, for Au, T1 = 300 K and T2 = 350 K, |ΔFpp | decreases with an increase of a. For an ideal metal |ΔFpp | does not depend on a. In the case of a sphere above a plate, they use a perturbation result after Klimchitskaya and Mostepanenko (2001). The thermal Casimir force is denoted by Fps (a, T ). They study the difference ΔFps ≡ ΔFps (a, T1 , T2 ) = Fps (a, T2 ) − Fps (a, T1 ). For example, for Au, T1 = 300 K and T2 = 350 K, |ΔFps | decrease with an increase of a. For an ideal metal |ΔFps | is a linear function of a. Then Chen et al. (2003) compare the chosen approach with that, e.g. after the paper (Brevik et al. 2002) (further references see Chen et al. 2003). They fix a = 0.5 μm and T1 = 300 K, while T1 ≤ T2 ≤ 350 K. The chosen approach exhibits an increase |ΔFps | with the temperature T2 . The difference is negative both for a real and for an ideal metal. The alternative approach provides a negative difference for an ideal metal, but a positive difference (more than six times larger at T2 = 350) for a real metal. Iannuzzi and Capasso (2003) have published a comment on the paper (Kenneth et al. 2002). They believed that, at distances relevant to Casimir force measurements and to nanomachinery, the Casimir force between two slabs in vacuum was always attractive. They have referred also to (Bruno 2002), a paper devoted to an attractive Casimir magnetic force. Kenneth et al. (2003) have replied to the comment (Iannuzzi and Cappasso 2003). They have declared the consensus that exploring the possible existence or design of materials with nontrivial magnetic properties for obtaining a repulsive Casimir force is important. Action of the Casimir force on magnetodielectric bodies embedded in media has been analysed in (Raabe and Welsch 2005). The consistency of expressions derived in the framework of the macroscopic theory with microscopic harmonic-oscillator models is shown. It could be startling that here Raabe and Welsch (2005) declare the macroscopic quantum electrodynamics themselves. We have chosen in this book that their theory is named microscopic just as the theory due to Hopfield (1958) and 4 Microscopic Theories Huttner and Barnett (1992a,b). It is consensual, even though with a reservation, which can be found below relation (68), which is not reproduced here, namely, that the level of representation is rather a mesoscopic one (Raabe and Welsch 2005). It may be controversial that they do not use micro- or mesoscopic for the model with the two auxiliary fields fe (r, ω), fm (r, ω). The exposition begins with the classical Maxwell equations with charges and currents, but without the constitutive relations. It is worthwhile to mention that the Lorentz force density f(r) = ρ(r)E(r) + j(r) × B(r) is written as f(r) = ∇ · T(r) − 0 ∂ [E(r) × B(r)], ∂t where T(r) is the stress tensor, 1 1 1 2 2 B(r)B(r) − B (r) 1. 0 E (r) + T(r) = 0 E(r)E(r) + μ0 2 μ0 The integral of the Lorentz force density f(r) over some space region (volume) V gives the total electromagnetic force F acting on the matter inside V f(r) d3 r. On integrating both sides of equation (4.371) over V we obtain that d T(r) · da(r) − 0 dt E(r) × B(r) d3 r, where da(r) is an infinitesimal surface element. If the volume integral on the righthand side of this equation does not depend on time, then the total force reduces to the surface integral where dF(r) = da(r) · T(r) = T(r) · da(r). The tensor T(r) may be decreased by a constant term, i.e. a constant, space independent tensor (it suffices to consider the position on the surface ∂ V ). Raabe and Welsch (2005) have commented on the role of Minkowski’s stress tensor, which Green-Function Approach is considered in much work devoted to the related topic, be it under this name or only as a “stress tensor”. Relation (4.371) can be generalized to characterize the density of a generalized Lorentz force f(r, r ) + f(r , r) = ∇ r+r · {T(r, r ) + T(r , r)} 2 − 0 ∂ E(r) × B(r ) + E(r ) × B(r) , ∂t where f(r, r ) means the density of a generalized Lorentz force f(r, r ) = ρ(r)E(r ) + j(r) × B(r ), ∇ r+r = ∇ + ∇ ≡ ∇r + ∇r , (4.378) (4.379) and T(r, r ) is a generalized stress tensor 1 T(r, r ) = 0 E(r)E(r ) − 1E(r) · E(r ) 2 1 1 + B(r)B(r ) − 1B(r) · B(r ) . μ0 2 Raabe and Welsch (2005) expound the quantum theory of the electromagnetic field as described also in this book in Section 4.2.6. They have presented also commutation relations between the charge density operator, the current density operator, and the electromagnetic-field operators. They have calculated correlation functions of some operators in thermal states of the field. Calculation of the expectation value of the Lorentz force, which is not reproduced here, follows. In our opinion, the operator of the Lorentz force has not been presented in such an explicit form as its expectation value. The latter is denoted by the same notation as the corresponding classical stress tensor. As the expectation value of the stress tensor operator is infinite before a quantum correction, the notation is generalized to the form T(r, r ) (just the same notation as in the classical theory), ˆ E(r ˆ ) + T(r, r ) = 0 E(r) 1 ˆ ˆ B(r)B(r ) 1 ˆ 1 ˆ ˆ ˆ B(r) · B(r ) . − 1 0 E(r) · E(r ) + 2 μ0 In other words, we can quantize relation (4.380) to introduce a generalized stress ˆ r ) and to write relation (4.381) in the form tensor T(r, ˆ r ) . T(r, r ) = T(r, 4 Microscopic Theories To make contact with microscopic approaches, Raabe and Welsch (2005) consider a harmonic-oscillator medium and derive that the (steady-state) Lorentz force acting on such a medium in some space region V is ˆ t) + ˆj(r, t) × B(r, ˆ t) d3 r, ρ(r, ˆ t)E(r, F = lim where again the charge density operator ρ(r, ˆ t) and the current density operator ˆj(r, t) are appropriately expressed. They apply the theory to a planar magnetodielectric structure. Its definition is specific in that homogeneity of the dielectric in an interspace (“cavity”) 0 < z < d j is required, where the subscript j has been used in conformity with Raabe et al. (2003, 2004). Raabe and Welsch (2005) give the relevant stress tensor element Tzz (r) in the interspace 0 < z < d j . Their relations (75) and (76) for this element, which are not reproduced here, are rather complicated. They include also a criticism of basing the calculations on Minkowski’s stress tensor (Tomaˇs (2002), which leads to a relatively simple relation (81), which is not repeated here too. It can be believed that the warning is helpful, since one prefers simpler formulae to more complicated ones, provided that the simpler ones are not wrong. To calculate the Casimir force on a plate in a nonempty cavity, Raabe and Welsch (2005) choose five regions, finite (1, 2, 3) or semi-infinite (0, 4). Let us assume that the two walls and the plate are almost perfectly reflecting. The generalization of Casimir’s well-known formula is 1 1 1 cπ 2 μ 2 − + , (4.384) F= 240 3 3μ d34 d14 where dk are thicknesses of regions k = 1, 3. Let us also assume that μ = 1. Then the counterpart of the previous formula based on the Minkowski stress tensor is F cπ 2 1 = √ 240 1 1 − 4 4 d3 d1 which is just one of the formulae underlying the critique. Pitaevskii (2006) defends the validity of the paper (Dzyaloshinskii et al. 1960), which has been disqualified or underestimated by the criticism in Raabe and Welsch (2005). It is important for the theory of the van der Waals–Casimir forces inside a dielectric fluid. The tensor of the van der Waals–Casimir forces was obtained by summation of an appropriate set of Feynman diagrams for the free energy and its variation with respect to the density (Dzyaloshinskii and Pitaevskii 1959). On the condition of mechanical equilibrium this tensor differs from a Minkowski-like one by a constant tensor. Dzyaloshinskii et al. (1960) obtained the same force between solid bodies, separated by a dielectric fluid, as Barash and Ginzburg (1975) and Schwinger et al. (1978). Pitaevskii (2006) discusses the reason why, in his opinion, the approach of Raabe and Welsch (2005) is incorrect. Raabe and Welsch (2006) Green-Function Approach maintain their position that the Casimir force should be calculated on the basis of the Lorentz force. The paper (Leonhardt and Philbin 2007a) is interesting for its use of the notion of a transformation medium. This notion seems to belong to classical optics essentially and to be formed after the general relativity theory. A concise quantum theory of light in spatial transformation media has been developed in (Leonhardt and Philbin 2007b). Leonhardt and Philbin (2007a) calculate the Casimir force for a dispersive medium in their set-up inspired by Casimir’s original idea. They consider two perfect conductors with a metamaterial sandwiched in between. The repulsive Casimir force of a left-handed material may balance the weight of one of the mirrors, letting it levitate on zero-point fluctuations. The simple formula for the Casimir force has been compared with the result of the more sofisticated Lifshitz theory. Chen et al. (2006) have measured the Casimir force between a gold-coated sphere and two Si samples of higher and lower resistivity. The lowering of resistivity corresponded to enhancement of carrier density by several orders of magnitude. Each measurement was compared with theoretical results using the Lifshitz theory with different dielectric permittivities (Bordag et al. 2001, Chen et al. 2005, 2006, Lamoreaux 2005) and found to be consistent with this theory. Lenac and Tomaˇs (2007) have considered the Casimir effect between metallic plates assuming them to be dispersive and lossless and separated by a medium with the (Gaussian) unit permittivity. They have taken two very different permittivities for media outside the plates, i.e. 2 = 1 or 2 = ∞ (perfect conductor). They have analysed the contributions of system eigenmodes with great attention to surface plasmon polariton modes. When the separation between the metallic plates is small, the surface plasmon polariton modes influence the Casimir effect dominantly except the case of thin layers that are supported by a highly reflective medium. Messina and Passante (2007a) have calculated the Casimir–Polder force density on an uncharged, perfectly conducting plate placed in front of a neutral atom. To this aim first-order perturbation theory and the quantum operator associated to the classical electromagnetic stress tensor have been used. The result of Casimir and Polder (1948) has been rederived by integration of the force density. This integration is not an argument against the well-known nonadditivity of the Casimir–Polder forces (Milonni 1994 and references therein), and it has been discussed appropriately. Munday and Capasso (2007) have performed precision measurements of the Casimir–Lifshitz force between two metal surfaces (gold) separated by a fluid (ethanol). For this situation, the measured force is attractive and is approximately 80% smaller than the force predicted for ideal metals in vacuum. The results were found to be consistent with Lifshitz’s theory. There exists a geometry well suited to the aim of an accurate theory–experiment comparison, namely, that with parallel and periodic corrugations of the metallic surfaces. The Casimir force is a superposition of the usual normal component and a lateral one in this situation. In general, vacuum-induced torque is present (Rodrigues et al. 2006a). Rodrigues et al. (2007a) have studied the lateral Casimir force arising between two corrugated metallic plates. They assume that corrugations are imprinted on both 4 Microscopic Theories plates with the same period and along the same direction, but with a spatial mismatch. They have used the scattering theory in a perturbative expansion in powers of the corrugation amplitudes. The result is valid provided that these amplitudes are smaller than L (mean separation distance), λC (corrugation wavelength), and λP (plasma wavelength). Limiting cases such as the proximity-force approximation limit and the perfect reflection limit are recovered when the length scales L, λC , and λP obey some specific orderings. In the development of ever smaller atomic magnetic traps carbon nanotubes have been considered to become the elementary building blocks. It is well known that an atom held in a magnetic trap near an absorbing dielectric surface will undergo thermally induced spin–flip transitions. Some of these transitions lead to trapping losses. Fermani et al. (2007) have calculated atomic spin-flip lifetimes and have estimated tunneling lifetime corresponding to the sum of the Casimir–Polder potential and the magnetic trapping potential. Their analysis indicates that the Casimir–Polder force is the dominant loss agent. Fulling (2007) have presented results on the Casimir force in one-dimensional piston models. These models are applications of quantum graphs (Roth 1985, Kuchment 2004). They have characterized the quantum star graphs mainly. A finite quantum graph consists of B one-dimensional undirected bonds or edges of length L j ( j = 1, . . . , B) and some vertices. Either end of each bond ends at one of these vertices, and the valence of a vertex is defined as the number of bonds meeting there. At the univalent vertices either a Dirichlet or a Neumann boundary condition is imposed. For instance, the space may consist of B one-dimensional rays of large length L attached to a central vertex. In each ray a piston is located a distance a from the vertex. At the central vertex the field has the Kirchhoff (generalized Neumann) behaviour. In fact, the pistons are treated as univalent vertices. If at each piston the field obeys the Neumann boundary condition, then the force is ( = 1 = c) F= (B − 3)π . 48a 2 When B = 1 or 2, the result is related to an ordinary Neumann interval of length a or 2a, respectively. When B > 3, the force is repulsive. The pistons will tend to move outward. Fulling et al. (2007) have discussed a periodic-orbit approach to calculations of the Casimir forces. They have also numerically examined the rate of convergence of the periodic-orbit expansion. Rodriguez et al. (2007a,b) have developed a numerical method to compute the Casimir forces in arbitrary geometries, for arbitrary dielectric and metallic materials. They have based their approach on the familiar result due to Lifshitz and Pitaevskii (1980), Dzyaloshinskii et al. (1961), and Pitaevskii (2006). The Casimir force is obtained in terms of the stress tensor integrated over space and imaginary frequency. The vacuum expectation value of the stress tensor is calculated in terms of the Green function, which is automatically regularized on application of the finite-difference method to solve for the Green function. The geometries that have been considered have the property that the bodies have not a contact and they are in the free space. Green-Function Approach Then it is innocent to evoke the Minkowski stress tensor, since a contour or surface around the body of interest lies in the free space. But also the Maxwellian stress tensor is named. Messina and Passante (2007b) have paid attention to fluctuations of the Casimir– Polder force between a neutral atom and a perfectly conducting wall. They have made use of the method of time-averaged operators introduced by Barton and widely used by him in his papers on fluctuations of the Casimir forces for macroscopic bodies (Barton 1991a,b). They have also calculated the Casimir–Polder force fluctuations for an atom between two conducting walls. This situation has been investigated already by Barton (1987). The force operator has been derived from an effective interaction Hamiltonian (Passante et al. 1998). To this end the effective interaction energy operator is differentiated with respect to the distance from the atom to the wall. Intravaia et al. (2007) remind that the Casimir effect, at short distances, is dominated by the coupling between the surface plasmons that are present on two metallic mirrors (Van Kampen et al. 1968). The Casimir energy is calculated in terms of quasielectrostatic (or nonretarded) field modes. When the mirror separation increases, retardation must be taken into account. Intravaia et al. (2007) use the method of Schram (1973). They choose the dielectric function (ω) = 1 − to describe the metal. They calculate dispersion relations for the relevant modes numerically. They distinguish bulk modes, propagating cavity modes, and evanescent modes. For the TE polarization all modes are propagating, but for the TM polarization two modes are evanescent in at least some range of wave vectors. These modes are referred to as “plasmonic”. A plasmonic contribution to the Casimir Aω energy is denoted by E p . The short-distance asymptotics is − L 2 p , where A is the √ A ω c area of the mirrors. The large-distance asymptotics is + L 5/2 p . This is balanced in the total Casimir energy by the contribution of photonic modes (cavity and bulk modes), which yields the negative, binding, energy again. Passante and Spagnolo (2007) have evaluated the Casimir–Polder potential between two atoms in the presence of an infinite perfectly conducting plate and at nonzero temperature. They assume the wall located at z = 0 and let r A and r B denote the positions of atoms A and B, respectively. First they outline the method used by reproducing the Casimir–Polder potential energy between two atoms in a thermal field c ∞ 3 ck k α A (k)α B (k) coth W AB (R) = π 0 2kB T × V(k, R) : τ (k, R) dk, (4.388) (cf. Wennerstr¨om et al. 1999), where R = |R| is the distance between the two atoms, R = r B − r A , k is a wavenumber, α A (k) (α B (k)) is the dynamical polarizability of 4 Microscopic Theories the atom A (B) (Power and Thirunamachandran 1993), RR 1 1 − 3 [cos(k R) + k R sin(k R)] R3 RR RR 2 2 − 1− k R cos(k R) , RR R R sin(k R) τ (k, R) = 1 − RR kR RR cos(k R) sin(k R) + 1−3 − 3 3 . RR k2 R2 k R V(k, R) = Then they derive and discuss their results for the retarded atom–atom Casimir– Polder interaction when both a thermal field and a boundary condition are present. The interaction energy is ¯ = W AB (R) + W AB ( R) ¯ W AB (R, R) ∞ ck c 3 k α A (k)α B (k) coth − π 0 2kB T ¯ + V(k, R) · τ (k, R) ¯ dk, × σ : τ (k, R) · V(k, R) ¯ is the distance between one atom and the image of the other atom where R¯ = |R| ¯ = r B −σ ·r A , and σ is the reflection tensor on the conducting reflected on the plate, R plate, supposed orthogonal to the z axis. The analysis of most Casimir force experiments using a sphere-plate geometry has relied on the proximity-force approximation (PFA), which expresses the Casimir force between a sphere and a flat plate in terms of the Casimir energy between two parallel plates. Krause et al. (2007) have conducted an experimental assessment of the range of applicability of the proximity-force approximation. They have measured the Casimir force and force gradient between a gold-coated plate and five gold-coated spheres with different radii using a microelectromechanical torsion oscillator. Specifically, according to the proximity-force approximation, the Casimir force between a sphere of radius R and a flat plate separated by a distance z R can be written as F(z) ≈ FPFA (z) ≡ 2π RE pp (z), where E pp (z) is the Casimir energy per unit area between two parallel plates separated by the distance z. If the bodies are smooth and perfectly conducting, the exact Casimir force may be expanded in powers of Rz (Scardicchio and Jaffe 2006), 2 z z , FCasimir (z, R) = 2π RE pp (z) 1 + β + O R R2 Green-Function Approach where β is a dimensionless parameter and the Landau notation O( f (x)) means that O( f (x)) is bounded for x → 0. An effective pressure P eff (z, R) may be expanded f (x) similarly, but a new dimensionless parameter is denoted by β . The roughness and conductivity effects can be included and the modified notation, β(z) and β (z), respects a general dependence on z. For separations z < 300 nm, Krause et al. (2007) have found that |β (z)| < 0.4 at the 95% confidence level. Rodrigues et al. (2006b) have presented a novel theoretical approach to the lateral Casimir force beyond the regime of validity of the proximity-force approximation. They have related the results of the new approach to the measured values (Chen et al. 2002a,b). Unfortunately, the novel approach also has its region of validity and the illustration chosen does not fit into it. Besides, the complete proximity-force approximation has led to a happy coincidence of the theoretical value of 0.33 pN with the average of measured values of 0.32 pN, while the expectation value belongs to the interval 0.32 ± 0.077 pN (at 95% confidence). This situation is reflected in the comment (Chen et al. 2007) and the reply (Rodrigues et al. 2007b). Emig (2007) has explored the lateral Casimir force between two parallel periodically patterned metal surfaces. It is assumed that the surfaces are set into relative oscillatory motion so that their normal distance is a periodic function of time. This scenario resembles the so-called ratchet systems (Reimann (2002). It is demonstrated that the system allows for directed lateral motion of the surfaces. These results show that Casimir interactions offer contactless translational actuation schemes for nanomechanical systems. Emig et al. (2007) have developed a systematic method for computing the Casimir energy between arbitrary compact dielectric objects. Casimir interactions are completely characterized by the scattering matrices of the individual bodies. As an example they compute the Casimir energy between two identical dielectric spheres at any separation. The thermal part of the Casimir force was subject to discussions (Milton 2004). On the assumption of real metals the Lifshitz formula is used. For L T large com, where L is the distance between the parallel plates and T is the pared with c kB temperature of the system, the assumption of ideal metals leads to a result which is twice the Lifshitz one. Svetovoy (2007) has analysed the repulsive thermal Casimir force between two metals and a metal and a high-permittivity dielectric. The repulsion discussed in such work has the meaning of a negative thermal correction to the force at zero temperature, but the total force is always attractive. The force is calculated using the Lifshitz formula written via real frequencies (Landau and Lifshitz 1963). Two contributions of the fluctuating fields, propagating waves and evanescent waves, are distinguished. For both material configurations the repulsive s-polarized evanescentwave contribution dominates for L T small. Here L is the distance between parallel plates and T is the temperature of the system. In this case, the force between ideal metals is attractive and small (Mehra 1967, Brown and Maclay 1969). The ideal metal is rather the limit case of a superconductor than of a normal metal (Antezza et al. 2006). 4 Microscopic Theories Rizzuto (2007) considers a neutral two-level atom uniformly accelerated in a direction parallel to an infinite mirror and calculates the atom-wall Casimir–Polder interaction between the accelerated atom and the mirror. The mirror is modelled as Dirichlet boundary conditions on a massless scalar field. The author evaluates the vacuum fluctuation (vf) and radiation reaction (rr) contributions to the atom-wall Casimir–Polder interaction energy. First she expresses only the contributions to the radiative shifts of the atomic levels. Let us assume the atom to be at rest. The Casimir–Polder interaction energy between the atom at rest and the wall is obtained by considering only the z 0 (vf) (rr) and E CP , in the vacuum fluctuation and in the radiation dependent terms, E CP reaction contributions, respectively, (vf) = E CP μ2 2z 0 2ω0 z 0 , − π cos , 2 f ω 0 64π 2 c2 z 0 c c μ2 2ω0 z 0 (rr) cos , E CP = 64πc2 z 0 c (4.394) (4.395) where ω0 corresponds to the energy difference of the levels of the atomic system, ω0 , z 0 is the distance of the atom from the mirror, and f (α, β) = 0 sin(αx) dx. x +β In the case of a uniformly accelerated atom, with the acceleration in a direction parallel to the reflecting plate, a generalization of relations (4.394) and (4.395) yields (vf) E CP (rr) E CP μ2 2c −1 z 0 a = 2 f ω0 , sinh 64π 2 c2 z 0 N a c2 2ω0 c z0a sinh−1 − π cos a c2 z a 2ω0 c a 0 1 − cos sinh−1 , − ω0 c a c2 μ2 2ω0 c −1 z 0 a = cos sinh , 64πc2 z 0 N a c2 / 2 where N = 1 + zc02a and a is the proper acceleration of the atom. (4.397) (4.398) Chapter 5 Microscopic Models as Related to Macroscopic Descriptions The models expounded in Chapter 4 are often labelled as macroscopic, since apparently they do not allow one to consider the Clausius–Mossotti or the Lorentz–Lorenz relation. Even though we shall not, even here, expound this interesting subject, which, however, is notorious in the classical theory, we shall mention the quantum models, whose microscopic character cannot be doubted. The role of macroscopic averages, which is analyzed in the classical theory as well, is discussed from the quantal viewpoint. 5.1 Quantum Optics in Oscillator Media A quantum-optical experimental setup may consist of active and passive devices, active devices to generate light of certain properties (e.g. nonclassical light) and passive ones to modify and apply it. It is an interesting and nontrivial problem to study how quantum statistical properties of light are influenced by passive optical devices like mirrors, resonators, beam splitters, or filters. Kn¨oll and Leonhardt (1992) have continued also the paper (Kn¨oll et al. 1987), where the medium is nondispersive and lossless, but they now intend to consider dispersion and losses. On introducing the Hamiltonian for the complete system, the Heisenberg equations of motion for field operators and medium (not source) quantities are derived. The complete system under consideration consists of the subsystems: optical field, medium atoms, and sources. The field is described by the electric-field-strength ˆ ˆ operator E(x, t) and the electromagnetic vector-potential operator A(x, t) in the Coulomb gauge. The medium is modelled by the damped harmonic oscillators {qˆ μ (t), pˆ μ (t)}, namely, the oscillators coupled to reservoirs composed of bath oscillators {qˆ μB (t), pˆ μB (t)}, the quanta of whose energy may be, for example, phonons. The medium oscillators are localized at xμ , they have the same mass m and the elasticity (force) constant k. The bath oscillators are characterized by masses m B and the elasticity constants k B and the coupling is expressed by the coupling constants σ B . The atomic sources are described by a current operator jˆ (x, t), but its dynamics A. Lukˇs, V. Peˇrinov´a, Quantum Aspects of Light Propagation, C Springer Science+Business Media, LLC 2009 DOI 10.1007/b101766 5, 5 Microscopic Models as Related to Macroscopic Descriptions need not be specified, e is the electron charge. For simplicity, a one-dimensional model is considered only. The Hamiltonian of the complete system is ˆ M (t) + H ˆ RS (t) + H ˆ S (t), ˆ (t) = H ˆ R (t) + H H ˆ M (t) is that of the medium ˆ R (t) is the Hamiltonian of the optical field, H where H ˆ RS (t) describes the interaction between the optical field and sources, atoms, and H ⎡ 12 ⎤ 0 ˆ ∂ A(x, t) 0 ⎣ˆ2 ˆ R (t) = ⎦ dx, H E (x, t) + c2 2 ∂x [ pˆ μ (t) − e A(x ˆ μ , t)]2 k ˆ + qˆ μ2 (t) HM (t) = 2m 2 μ . 2 pˆ μB (t) k B 2 2 σB + qˆ μB (t) + + qˆ μ (t) − qˆ μB (t) , 2m B 2 2 B ˆ RS (t) = − jˆ (x, t) A(x, ˆ H t) dx, (5.3) (5.4) ˆ S (t) is the Hamiltonian of the atomic sources which is left unspecified. The and H usual commutation rules are ˆ ˆ , t)] = iδ(x − x )1, ˆ [ A(x, t), −0 E(x ˆ [qˆ μ (t), pˆ μ (t)] = iδμμ 1, ˆ [qˆ μB (t), pˆ μ B (t)] = iδμμ δ B B 1. The Heisenberg equations of motion for field operators, medium operators, and bath operators have been obtained. As a result of a Wigner–Weisskopf approximation for the interaction of medium oscillators with the bath operators, quantum Langevin equations have been obtained. By eliminating the medium quantities from equations for field operators and further by a usual procedure, a generalized wave equation for the vector potential is obtained. Using a Green function, this wave equation is solved. The decomposition of the time-ordered quantum correlation functions into timeordered correlation functions of the source operators and the free-field operators has been derived. The notation of positive time-ordering T ≡ T+ (see, (2.110)) and the ordering symbol O ≡ O+ are used. The property (2.120) and the consequence for the time-ordered quantum correlations have been recalled. The time-dependent Green function for a dielectric layer as the simplest optical device is calculated. The field behind the layer is discussed and represented by the negative-frequency part of the field, its expectation value and the normally ordered quadrature variance determined for the sake of squeezing analysis. Problem of Macroscopic Averages 5.2 Problem of Macroscopic Averages Assuming that the medium is constituted by atoms, which do not form a continuum, we can aproximate a continuum using the macroscopic averaging. This idea is illustrated by a simple model. It is indicated that many material constants would require appropriately complicated derivations. Inclusion of losses enhances requirements on the microscopic model as well, even though we will restrict our discourse on a one-dimensional standing wave in a cavity. The macroscopic averaging can be generalized even to this case. 5.2.1 Conservative Oscillator Medium Dutra and Furuya (1997) have investigated a simple microscopic model for the interaction between an atom and radiation in a linear lossless medium. It is a guest two-level atom inside a single-mode cavity with a host medium composed of other two-level atoms that are approximated by harmonic oscillators. There is an intention to show, in general, that the ordinary quantum electrodynamics suffices, at least in principle, and there is no need to quantize the phenomenological classical Maxwell equations. If a macroscopic description is possible, it should appear as an approximation to the fundamental microscopic theory under certain conditions. Such a “macroscopic” approximation is obtained and conditions of its validity are derived. All of the medium harmonic oscillators are represented by a collective harmonic oscillator, then two modes of a polariton field are defined. The macroscopic average is regarded as filtering out higher spatial frequencies. The field that influences the guest atom is modified and the characteristics of the effect of such a microscopic field are calculated. A condition is pointed out under which the contribution of the atoms to the quantum noise appears only through a frequency-dependent dielectric constant. An effective description is obtained by leaving out the polariton mode which is approximately equal to the collective mode of the medium. Dutra and Furuya (1997) have introduced a microscopic model of a material medium that they have adopted, N two-level atoms having the same resonance frequency ω0 in a single-mode cavity of the resonance frequency ω. They consider a guest two-level atom of the resonance frequency ωa and strongly coupled to the field so that it will not be approximated by a harmonic oscillator. The operator of the displacement field in the cavity is given by the relation ω ω0 ˆ ˆ + aˆ † (t)], (5.6) sin x [a(t) D(x, t) = L c where L is the length of the cavity. It is assumed that ω and L for the single-mode cavity are in the relation ω π = . c L 5 Microscopic Models as Related to Macroscopic Descriptions The operator of the polarization of the medium is given by ˆ P(x, t) = † d j δ(x − x j )[bˆ j (t) + bˆ j (t)], where dj = qj 2ω0 m 0 † are electric dipole moments in eigenstates |1, s j of the operators [bˆ js (t) + bˆ js (t)], † large s, weakly converging to [bˆ j (t) + bˆ j (t)] for s → ∞ (s, the greatest number of quanta, has been introduced on the normalization ground). In (5.9), m 0 is an † effective mass, q j are effective charges, products d j [bˆ j (t) + bˆ j (t)] are the electricdipole-moment operators of the atoms of the medium that are located at x j . The † ˆ operators a(t), aˆ † (t), bˆ j (t), and bˆ j (t) satisfy the bosonic commutation relations, in particular, † ˆ [bˆ j (t), bˆ j (t)] = δ j j 1, ˆ [bˆ j (t), bˆ j (t)] = 0, (5.10) (5.11) † ˆ and a(t), aˆ † (t) commute with bˆ j (t), bˆ j (t). The Hamiltonian is given by the relation ˆ (t) = ωaˆ † (t)a(t) ˆ + ω0 H + 1 † bˆ j (t)bˆ j (t) − 0 j=1 ˆ ˆ D(x, t) P(x, t) dx ωa ˆ + aˆ † (t)][σˆ (t) + σˆ † (t)], σˆ z (t) + Ω[a(t) 2 where σˆ z (t) and σˆ (t) are the pseudo-spin operators and Ω = −d ω ωm a sin xa 0 L c is the Rabi frequency of the guest atom located at xa whose electric dipole moment (in the eigenstate |1 of the operator [σˆ (t) + σˆ † (t)]) is d and the effective mass m a . We notice that 1 − 0 ˆ ˆ D(x, t) P(x, t) dx = N j=1 ˆ + aˆ † (t)][bˆ j (t) + bˆ †j (t)], g j (ω)[a(t) Problem of Macroscopic Averages where g j (ω) = −d j ω ωm 0 sin xj . 0 L c In simplifying the Hamiltonian, Dutra and Furuya (1997) denote F G N G G(ω) = H [g j (ω)]2 , j =1 the coupling constant between the field and the collective harmonic oscillator described by the annihilation operator 1 g j (ω)bˆ j (t). G(ω) j=1 N ˆ B(t) = In the Hamiltonian (5.12), the self-energy terms (Cohen-Tannoudji et al. 1989) have been neglected. This results in the condition 4[G(ω)]2 ≤ ωω0 . Further, the original problem is reduced to the case of a single atom coupled to two polariton / modes. The 2 , frequencies of these modes are denoted by Ω1 and Ω2 so that Ω1 ω 1 − 4 [G ω(ω)] 2 0 ω and Ω2 → ω0 when ω → 0, ∞, respectively, with G (ω) = ω0 G(ω). ω In other words, Ω1 < Ω2 for ω < ω0 , Ω1 > Ω2 for ω > ω0 . The dressed operators are † ˆ ˆ and B(t) denoted by cˆ k (t) and cˆ k (t), k = 1, 2, and the operators a(t) are expressed in their terms (Chizhov et al. 1991). The problem of the extra quantum noise introduced by the atoms of the medium is discussed. In the case when the atoms of the medium are only weakly coupled to the field, i.e. G (ω), ω ω0 , it holds that (Glauber and Lewenstein 1991) ˆ + aˆ † (t)]}2 ≈ {Δ[a(t) √ r , ˆ and r is the relative perˆ + aˆ † (t)] = a(t) ˆ + aˆ † (t) − a(t) ˆ + aˆ † (t) 1, where Δ[a(t) mittivity r ≈ 1 + 4 [G (ω)]2 . ω02 From the relation (5.19), the variance of the operator of the electric displacement ˆ field D(x, t) (cf., (Glauber and Lewenstein 1991)) can be calculated. The adoption 5 Microscopic Models as Related to Macroscopic Descriptions of the continuous distribution of atoms in the medium instead of the realistic discrete one implies a greater variance (Rosewarne 1991). Let us address the problem of macroscopic averages. The macroscopic theories of quantum electrodynamics in nonlinear media, have often, by the “definition” avoided discussing the macroscopic averaging procedure. The quantum-mechanical averaging advocated by Schram (1960) removes the quantum fluctuations from the macroscopic theory. The problem of the macroscopic theory what should be the macroscopic averaging procedure resisted, for many years, the solution. It was Lorentz who in the beginning of the twentieth century first tried such a derivation, cf., chapters (de Groot 1969, van Kranendonk and Sipe 1977). Robinson (1971, 1973) has proposed a different kind of macroscopic average. He regards a macroscopic description as a description where the spatial Fourier components of the field variables above some limiting spatial frequency k0 are irrelevant. Dutra and Furuya (1997) take the Fourier components with the spatial frequencies above ωc for irrelevant in a macroscopic description. They arrive at the following expression for the macroscopic polarization ˆ¯ P(x, t) = −2G(ω) ω 0 ˆ + Bˆ † (t)]. sin x [ B(t) ωLm 0 c The macroscopic “average” does not change the operator of the electric displaceˆ ment field D(x, t) and the macroscopic electric-field-strength operator is given by the relation 1 ˆ ˆ¯ ˆ¯ t) − P(x, t)]. E(x, t) = [ D(x, 0 ˆ¯ The calculation of the variance of a quantity typical of the operator E(x, t) is larger −3 than r 2 , which agrees with Rosewarne’s result (Rosewarne 1991). Thus, it has been shown that the contribution from the atoms to the quantum noise of the field does not restrict itself to inclusion of the dielectric constant. We are going to report the suitable macroscopic theory of electrodynamics in a material medium which does not suffer from the problems which are discussed here. It is shown that under certain conditions a macroscopic description incorporating the frequency dependence of the relative permittivity provides a good approximation. In this domain, Milonni’s result has been recovered (Milonni 1995). The guest atom is not affected by the polariton mode if the frequency of the atom is far from Ω2 . An analysis of the probability of this mode inducing transitions shows that such a probability is negligible when Ω −ω | Ω2 2 Ω2 ω0 |Ω2 − ωa |. (Ω22 − ω2 )2 + 4ω0 ωG 2 Problem of Macroscopic Averages In the regime described by the relation (5.23), the leaving out of the polariton † mode described by cˆ 2 and cˆ 2 and the macroscopic averaging leads to the macroscopic Hamiltonian ˆ mac (xa , t)[σˆ (t) + σˆ † (t)], ˆ mac (t) = Ω1 cˆ † (t)ˆc1 (t) + ωa σˆ z (t) − d D H 1 2 0 where ˆ mac (x, t) = D √ Ω1 0 r r √ Ω1 † r sin x [ˆc1 (t) + cˆ 1 (t)], Lγ c with γ = √ d (Ω1 r ), dΩ1 is the macroscopic displacement field. By the relation (5.26), γ is the ratio between the speed of light in the vacuum and the group velocity in the medium. The macroscopic polarization Pˆ mac (x, t) is given by the relation (5.21) simplified by leaving † out its cˆ 2 (t), cˆ 2 (t) polariton component. Then, from the relation 1 ˆ Dmac (x, t) − Pˆ mac (x, t) , Eˆ mac (x, t) = 0 the macroscopic electric field is obtained. It is stated that the results of Dutra and Furuya (1997) for the macroscopic fields coincide with those derived by Milonni (1995) for the case of one and more modes. de Lange and Raab (2006) recall series decompositions of D and H comprising macroscopic densities of multipole moments 1 1 Di = 0 E i + Pi − ∇ j Q i j + ∇k ∇ j Q i jk + . . . , 2 6 1 −1 Hi = μ0 Bi − Mi + ∇ j Mi j + . . . , 2 (5.28) (5.29) where Pi (Q i j , Q i jk , . . .) is an electric dipole (quadrupole, octupole, . . . ) density and Mi (Mi j , . . .) is a magnetic dipole (quadrupole, . . . ) density, and, for monochromatic fields and on concentration on nonmagnetic media, 1 1 Pi = αi j E j + ai jk ∇k E j + . . . + G i j B˙ j + . . . , 2 ω Q i j = aki j E k + . . . , Q i jk ≈ 0, 1 Mi = − G ji E˙ j + . . . + χi j B j + . . . , Mi j ≈ 0, ω (5.30) (5.31) (5.32) 5 Microscopic Models as Related to Macroscopic Descriptions where αi j , ai jk , . . . , G i j , . . . are material constants. For harmonic fields, these decompositions can be written as 1 1 Di = 0 E i + αi j E j + ikk ai jk E j − ik j aki j E k + . . . 2 2 −iG i j B j + . . . , Hi = −iG ji E j + ... + μ−1 0 Bi − χi j B j + . . . . (5.33) (5.34) The authors show a lack of translational invariance. On using the Maxwell equations, these series can be recast into the form G i j 1 − ωε jkl akli 2 Bj + . . . , Di = 0 E i + αi j E j + . . . − i 1 Hi = −i G ji − ωεikl akl j E j + . . . + μ−1 0 Bi + . . . , 2 (5.35) (5.36) which contains the origin-independent material constants. For monochromatic fields and to describe magnetic media, it is necessary that relations (5.30), (5.31), and (5.32) are extended by other terms, Pi = 1 ˙ 1 αi j E j + a ∇k E˙ j + . . . + G i j B j + . . . , ω 2ω i jk 1 Q i j = − aki j E˙ k + . . . , ω 1 Mi = G ji E j + . . . + χij B˙ j + . . . . ω (5.37) (5.38) (5.39) The original terms are included in the ellipses. Here αi j , αi jk , . . ., G i j , χij , . . . are other material constants. For harmonic fields, the above decompositions can be extended as 1 1 Di = −iαi j E j + kk ai jk E j + k j aki j E k + . . . + G i j B j + . . . , 2 2 Hi = −G ji E j + . . . + iχij B j + . . . . (5.40) (5.41) The translational invariance is assessed also here. Using the Maxwell equations, these series can be recast into the form 1 Di = −iαi j E j + kk (ai jk + a jki + aki j )E i + . . . 3 1 1 + G i j − G ll δi j − ωε jkl akli B j + . . . , 3 6 Problem of Macroscopic Averages 1 1 Hi = −G ji + G ll δi j + ωεikl akl j E j + . . . 3 6 + (iχi j + ?)B j + . . . , where ? means nonmagnetic polarizability densities of electric multipole order 8 and magnetic multipole order 4. This form comprises the origin-independent material constants. 5.2.2 Kramers–Kronig Dielectric Dutra and Furuya (1998a) have pointed out that the Huttner–Barnett model at the stage after the diagonalization of the Hamiltonian for the polarization field and reservoirs can operate with a larger class of dielectric functions than that admitted by the original microscopic model. At this stage, the relative permittivity is expressed in dependence on the dimensionless coupling function ζ (ω), (ω) = 1 + ωc2 lim 2ω ε→+0 |ζ (ω )|2 dω , − ω − iε)ω where the notation from Section 4.1.1 has been used. For example, the permittivity obtained in the Lorentz oscillator model (Klingshirn 1995) can be recovered by adopting √ √ κ ±i2ω ω ζ (ω) = 2 √ , 2 ω − ω˜ 0 − i2κω π where ω˜ 0 means the frequency of the damped oscillations and κ is the frequencyindependent absorption rate. The relative permittivity for the original Huttner– Barnett microscopic model is of the form (ω) = 1 − ω2 − ωc2 + ω˜ 02 ω˜ 0 2 F(ω) ≡ lim ε→+0 −∞ |V (ω )|2 dω . ω − ω − iε It is indicated that, in the Lorentz oscillator model, equation (5.46) yields the solution F(ω) = i 4ωκ , ω˜ 0 5 Microscopic Models as Related to Macroscopic Descriptions but the integral equation (5.47) with this left-hand side (≡ replaced by =) is not solvable to yield a coupling function V (ω). This is the main difficulty, because from the relation + ζ (ω) = i ω˜ 0 ωV (ω) , ω2 − ω˜ 02 + ω˜20 F ∗ (ω) we obtain that 4ωκ π ω˜ 0 √ ω˜ 0 κ V (ω) = ±ρ √ . ω π |V (ω)|2 = or we could determine v(ω) = ρ This presents a restriction of the Huttner–Barnett microscopic model, which is nevertheless mentioned by Huttner and Barnett (1992a,b). 5.2.3 Dissipative Oscillator Medium Let us recall that in the microscopic model, the electromagnetic-field operators are given by integrals both over k and ω. Huttner and Barnett (1992a) say themselves that they lose the relationship between the frequency and the wave vector k. This observation is relative to the macroscopic theories, where the Dirac delta function suitable for the expression of such a relationship is never replaced by another (generalized) function. The quantities such as (4.49), (4.51), (4.52), and (4.53) are formulated in dependence on the relative permittivity (ω). Dutra and Furuya (1998b) suggest a simplification of the expression (ω) = 1 + ωc2 lim 2ω ε→+0 ∞ −∞ ξ (ω ) dω , − ω − iε where (Dutra and Furuya 1998b) ξ (ω) = ω˜ 0 ω |V (ω)|2 , |ω2 − ω˜ 02 + ω˜20 F(ω)|2 with F(ω) defined by the relation (5.47). They try to calculate the relative permittivity for the Huttner–Barnett microscopic model by means of classical electrodynamics. The Huttner–Barnett approach is applied to the particular case, where the coupling strength is a slowly varying function of frequency. In continuation of Dutra and Furuya (1997), a modified version of a simple model takes account of absorption. The inclusion of losses necessarily introduces Problem of Macroscopic Averages a continuum of modes in the model. Consequences are minimized by the adoption of the standard elimination of the reservoir. The interaction between the radiation field and the medium is described by a dipole-coupling Hamiltonian, where the canonically conjugated field is the displacement field instead of a minimal-coupling Hamiltonian, where the canonically conjugated field is the electric field. For simplicity, a Lorentzian shape for |V (ω)|2 is assumed, given by the relation V (ω) = iΔ ω − ω0 + iΔ κ , π where ω0 Δ κ. The Hamiltonian incorporating absorption is assumed to be ˆ (t) = ωc aˆ † (t)a(t) ˆ + ω0 H N ∞ + 0 1 † bˆ j (t)bˆ j (t) − j=1 ˆ ˆ D(x, t) P(x, t) dx ˆ (Ω, t)W ˆ j (Ω, t) dΩ ΩW j j=1 N ∞ † ˆ j (Ω, t) + V ∗ (Ω)W ˆ † (Ω, t)bˆ j (t) dΩ, V (Ω)bˆ j (t)W j ˆ † (Ω, t), W ˆ j (Ω, t) where ωc means the same as ω in the relations (5.6), (5.7), etc., W j are reservoir creation and annihilation operators that commute with every other operator except for the commutation relation ˆ ˆ † (Ω , t)] = δ j j δ(Ω − Ω )1. ˆ j (Ω, t), W [W j Substituting the relations (5.6) and (5.8) into the Hamiltonian (5.55) and introducing appropriate collective operators, the total Hamiltonian becomes a sum of two uncoupled Hamiltonians ˆ 2 (t). ˆ (t) = H ˆ 1 (t) + H H ˆ 2 (t) describes (N − 1) damped collective excitations The second Hamiltonian H of the medium to which the field does not couple. The field and the single damped collective excitations of the medium, to which the field couples, are described by ˆ 1 (t) alone. This Hamiltonian is given by the relation the Hamiltonian H ˆ 1 (t) = H ˆ em (t) + H ˆ mat (t) + H ˆ int (t), H ˆ em (t) = ωc aˆ † (t)a(t) ˆ H 5 Microscopic Models as Related to Macroscopic Descriptions is the Hamiltonian of the field, ∞ ˆ + ˆ mat (t) = ω0 Bˆ † (t) B(t) ΩYˆ † (Ω, t)Yˆ (Ω, t) dΩ H 0 ∞ ˆ V (Ω) Bˆ † (t)Yˆ (Ω, t) + V ∗ (Ω)Yˆ † (Ω, t) B(t) dΩ + is the Hamiltonian of medium, and ˆ int (t) = G(ωc ) aˆ † (t) + a(t) ˆ + Bˆ † (t) ˆ H B(t) ˆ is their interaction Hamiltonian. The collective annihilation operators B(t) and Yˆ (Ω, t) are given by the relations ˆ B(t) = φ j bˆ j (t) and Yˆ (Ω, t) = ˆ k (Ω, t), φk W where φj = g j (ωc ) G(ωc ) and g j (ωc ) and G(ωc ) are given by equations (5.15) and (5.16), respectively, with ω replaced by ωc . Dutra and Furuya (1998b) have a (conventional) strictly microscopic model, where the medium is not continuous, but discrete. They admit the practicality of the macroscopic average of the physical quantities. Following Robinson (1971, 1973), they understand the macroscopic averaging as filtering out of higher spatial frequencies. A classical Hamiltonian is considered which is identical to the relation (5.58) (Ω,t) ˆ √ , B(t) √ , and Y √ ˆ . The except that a(t), B(t), and Yˆ (Ω, t) will be replaced by a(t) standard elimination of the reservoir variables (which corresponds to the standard treatment in the quantum theory, but with the bonus that the reservoir not being initially excited leads to a great simplification) is performed and the real variables D(t) = a(t) + a ∗ (t), B(t) = −i[a(t) − a ∗ (t)], ∗ P(t) = −i[B(t) − B (t)], X (t) = B(t) + B ∗ (t) (5.65) (5.66) (5.67) (5.68) Problem of Macroscopic Averages are introduced. The variables D(t), B(t), and X (t) are related to the electric displacement field D(x, t), the magnetic field M(x, t), and the polarization field P(x, t) by ω 0 ωc c sin x D(t), L c ω 0 ωc c M(x, t) = −c cos x B(t), L c ω 0 c P(x, t) = −2G (ωc ) sin x X (t). ω0 L c D(x, t) = (5.69) (5.70) (5.71) From the classical Hamiltonian, the classical equations of motion for X (t) and P(t) are obtained, d X (t) = −κX (t) + ω0 P(t), dt d P(t) = −ω0 X (t) − κP(t) + 2GD(t). dt (5.72) (5.73) These equations along with equations of motion for D(t) and B(t), which are not given here, admit a solution oscillating at the frequency ω, with the property that dX (t) = −iωX (t), dP(t) = −iωP(t), dD(t) = −iωD(t). All the solutions have the dt dt dt property X (t) = − 2ω0 G(ωc ) D(t). + κ 2 − ω2 − i2κω From the relation D(x, t) = (ω)[D(x, t) − P(x, t)], we obtain that (ω) = 1 + 4[G (ωc )]2 , ω0 − ω2 − i2κω 2 ω02 = ω02 + κ 2 − 4[G (ωc )]2 is the modified resonance frequency of the medium. The further topic in (Dutra and Furuya 1998b) is essentially the relation (4.76) due to Gruner and Welsch (1995). In particular, it is shown that also in the case of the simple microscopic model of the medium used by Dutra and Furuya (1998b), ˆ mat (t) and then the total Hamiltonian (5.58). The it is possible to diagonalize first H ˆ mat (t) is achieved in terms of the continuous diagonal form of the Hamiltonian H 5 Microscopic Models as Related to Macroscopic Descriptions ˆ operators B(ν, t), ˆ ˆ + B(ν, t) = α(ν) B(t) β(ν, Ω)Yˆ (Ω, t) dΩ, where α(ν) and β(ν, Ω) are some coefficients. The diagonal form of the total Hamilˆ tonian (5.58) is achieved in terms of the continuous operators A(ω, t), ˆ ˆ + α2 (ω)aˆ † (t) A(ω, t) = α1 (ω)a(t) ∞ ˆ + t) + β2 (ω, ν) Bˆ † (ν, t) dν, β1 (ω, ν) B(ν, where α1 (ω), α2 (ω), β1 (ω, ν), β2 (ω, ν) are some coefficients. Suitable operators, namely, those of the electric displacement field and of the macroscopic electric strength field are defined, such that ω 1 c ˆ ¯ˆ D(x, t) + 2 sin x E(x, t) = 0 (ω) ω0 0 L c ∞ ˆ α ∗ (ω) A(ω, t) dω + H. c. . (5.80) × G (ωc ) 0 The difference from the relation (4.76) arises, because Dutra and Furuya (1998b) have only a single mode of the field, use a dipole-coupling Hamiltonian instead of a minimal-coupling Hamiltonian, and have defined their field operators in terms of different quadratures of the annihilation and creation operators. Using the microscopic approach, Hillery and Mlodinow (1997) devoted themselves to the standard optical interactions and derived an effective Hamiltonian describing counterpropagating modes in a nonlinear medium. On considering multipolar coupled atoms interacting with an electromagnetic field, a quantum theory of dispersion has been obtained whose dispersion relations are equivalent to the standard Sellmeir equations for the description of a dispersive transparent medium (Drummond and Hillery 1999). Independently, the theory of light propagation in a Bose–Einstein condensate and a zero-temperature noninteracting Fermi–Dirac gas has been developed (Javanainen et al. 1999). Ruostekoski (2000) has theoretically studied the optical properties of a Fermi–Dirac gas in the presence of a superfluid state. He also considered diffraction of atoms by means of light-stimulated transitions of photons between two intersecting laser beams. Optical properties could possibly signal the presence of the superfluid state and determine the value of the Bardeen–Cooper–Schrieffer parameter in dilute atomic Fermi–Dirac gases. Crenshaw and Bowden (2000a,b) have derived effects of the Lorentz local fields on spontaneous emission in dielectric media. Bloch–Langevin operator equations have been obtained for two-level atoms embedded in a host dielectric medium using the macroscopic and microscopic quantizations and the macroscopic formulation has been criticized (Crenshaw and Bowden 2002). Single-Photon Models Crenshaw (2003) has presented a real-space derivation of the macroscopic quantum Hamiltonian from the microscopic quantum electrodynamic model of a dielectric. Crenshaw (2004) has transformed the macroscopic real-space Hamiltonian to momentum space. The microscopic model has been reduced to the macroscopic Hamiltonian (Ginzburg 1940) by way of the reciprocal-space model of the field in a dielectric (Hopfield 1958). Cerboneschi et al. (2002) have shown that the electromagnetically induced transparency is related to very small group velocities for the probe pulse also in an open system. The modifications of the atomic momentum produced by laser interactions have been taken into account. Tanaka et al. (2003) have observed a negative delay (positive advance) of the peak of an optical pulse in case the pulse is tuned to the anomalous dispersion region. The negative velocity of the peak is not the velocity of energy flow. 5.3 Single-Photon Models Guo (2007) has discussed propagation of one incident photon through a medium as the multiple-scattering process from the medium. The medium is assumed to be an ensemble of identical two-level atoms. Interaction with a two-level test atom outside the medium is considered as well. It is assumed that all the atoms are in the ground state when t = 0. Initially, there is one photon in a mode. The system is treated in the Schr¨odinger picture. An integral form of the evolution is presented. As in Guo (2005), the kernel of the integral transformation is expanded. It is assumed that counter-rotating terms are superfluous and will be ignored. Photon propagation through the atomic ensemble results in states S1 , S1→2 , S1→2→3 ,..., which come from the first-order scattering of the incident photon, from the secondand third-order scatterings of the incident photon, respectively. The photon is scattered into the states (modes) |1α , |1β , |1γ ,... in the time order. The author refers to classical works (Mandel and Wolf 1995) and (Loudon 2000) for an ambiguity of the photon phase. But a possibility for the photon to become entangled with the atom has been declared. The author expresses the time-dependent probability amplitude A for the test atom to transit from its ground state to an excited state. On the assumption of isotropy and uniformity of the medium, the conservation of polarization and wavenumber of the photon is derived. On these conditions A may be written in a form reminding of the expression for the electric field E(r, t) in Guo (2002). The factor 4π αatom k 2 n 0 reappears as 4πnk02 Patom , where k0 = ωc0 , ω0 is the energy difference between the energy states of a two-level medium atom, n is the density of the atoms, and Patom is the resonant component of the polarizability of the atoms in the ensemble. Berman (2007) has considered the problem of a source atom radiating into a medium of dielectric atoms using a microscopic model. The model system consists of a source atom embedded in a dielectric. The source atom is modelled as a 5 Microscopic Models as Related to Macroscopic Descriptions two-level atom with a ground, lower, state |1 and an excited, upper, state |2 . These states are separated in frequency by ω0 . The dielectric is contained within a sphere of radius R0 , ω0 Rc0 1. This medium consists of a uniform distribution of atoms. A bath density may be introduced and N is let to denote this density. Bath atom j is modelled as a four-level atom in the inverted tripod configuration with a ground state |g ( j) and three excited states |m ( j) , m = −1, 0, 1. The frequency separation between the ground state and any of the three excited states is ω. It is assumed that |Δ| 1, where Δ = ω0 − ω, Δ = 0. The positive frequency component of the ω electric-field operator at the space point R, |R| = R, is ˆ + (R) = i E ωk (λ) e aˆ kλ eik·R , 20 V k where aˆ kλ is an annihilation operator for a photon having the propagation vector k and the polarization e(λ) k , ωk = kc, V is the quantization volume, and (1) e(1) k ≡ e (k) = cos(θk ) cos(φk )ex + cos(θk ) sin(φk )e y − sin(θk )ez , (2) e(2) k ≡ e (k) = − sin(φk )ex + cos(φk )e y are unit polarization vectors, with k , θk ≡ θ (k) = cos−1 ez · |k| φk ≡ φ(k) = arg (ex + ie y ) · k . In relation (5.81) the time is not written, since the Schr¨odinger approach has been adopted. The source atom is excited by a pulse of a classical driving field, and the average field amplitude and the intensity radiated by the source atom are determined to the first order in the dielectric density N . The expectation value of the field is ˆ + (R, t) , where the time is given along with the position again. This denoted as E expectation value is ˆ + (R, t) = i E ωk (λ) i(k·R−ωk t) ∗ bkλ (t)b1,0 (t), e e 20 V k where b1,0 (t) is the amplitude to find all atoms in their ground states and to find no photons in the field and bkλ (t) is the amplitude to find all atoms in their ground states and a photon having the wave vector k and the polarization λ in the field. It is assumed that the exciting field is weak enough to approximate b1,0 (t) ≈ 1. Single-Photon Models Berman (2007) has found the average field μω02 (sin θ)e(1) ei(k0 R−ω0 t) 4π 0 c2 R 1 ∂ R0 ∂ R (0) × 1 + 2i + iδnk0 R0 − δn b2,0 , t− ω0 ∂t c ∂t c ˆ + (R, t) ≈ − E where θ ≡ θ (R), e(1) ≡ e(1) (R), k0 = ω0 , c 1 δn = − 4π 0 2π N μ2 Δ , μ (μ ) is a charac- (0) teristic of the source atom (a bath atom), and b2,0 (t) is the excited state amplitude in the absence of the medium (Berman 2004). Macroscopic theories are expected to give ˆ + (R, t) d = E μ(sin θ )e(1) eik0 (R+δn R0 ) 4π 0 c2 R 2 ∂ δn R0 −iω0 t R (0) × 2 b2,0 t − − e , ∂t c c where the subscript d stands for dielectric, δn = n − and n is the index of 1, (0) t − Rc | 1, this refraction in the medium. If |k0 δn R0 | 1 and | δncR0 ∂t∂ b2,0 expression can be expanded to the first order in δn as μω02 (sin θ)e(1) ei(k0 R−ω0 t) 4π 0 c2 R 1 ∂ R0 ∂ R (0) × 1 + 2i + iδnk0 R0 − 3δn b2,0 , t− ω0 ∂t c ∂t c ˆ + (R, t) d ≈ − E which does not differ seriously from relation (5.86). Using relation (5.85), Berman (2007) has derived that the average field intensity is equal to ˆ + (R, t) = | E ˆ + (R, t) |2 , Eˆ − (R, t) · E ˆ − (R, t) = E ˆ †+ (R, t) and Eˆ + (R, t) is given by relation (5.86). He has given where E a microscopic derivation of the fact that the field intensity propagates with a reduced velocity in the medium. He has tried to elucidate the nature of the retardation mechanism and has shown that the modifications of the emitted field are closely correlated with nearly forward scattering in the Chapter 6 Periodic and Disordered Media The periodic and disordered media have first been studied in the condensed-matter physics. In this physics field, electrons are paid attention to. The concepts of periodic and disordered media depend on whether electrons or photons are investigated. Nevertheless there is an analogy between the wave function of a single electron and the classical electromagnetic field. Therefore it was possible to achieve many results for periodic and disordered media in photonics using this similarity. The periodic media are conceptually simpler. Here we shall mainly speak of the macroscopic approach to quantization of the electromagnetic field in a periodic medium, but we shall also mention the papers, which have utilized specific approaches. Finite periodic media can be coupling media of free-space modes. In applications the corrugated waveguides are important. Photonic crystals are infinite or finite periodic media. The literature on one-dimensional periodic media abounds, since also fabrication of such media is simpler. We shall mention the quantization of the electromagnetic field in a disordered medium mainly in connection with various physical studies. The macroscopic approach to the quantization of the electromagnetic field suffices usually, but a description of the disordered or random medium is not easy. We shall cite the papers, whose authors have restricted themselves to the quantum input–output relations and the detection theory. This is also related to application of results of further fields of the quantum physics. Many papers have reported the random lasers. Even though we intend to review rather the theory, we see that we can only provide a partial review. In the essence of the matter, it is that much theoretical work is not published as the optical physics. 6.1 Quantization in Periodic Media Media whose dielectric constant is periodic are fabricated as microstructured materials with promising photonic band-gap structures. A number of methods for studying these media originate from the solid-state theory, where they are used in investigation of ordinary, electronic crystals. This is a possible explanation why periodic media are called photonic crystals. A. Lukˇs, V. Peˇrinov´a, Quantum Aspects of Light Propagation, C Springer Science+Business Media, LLC 2009 DOI 10.1007/b101766 6, 6 Periodic and Disordered Media The idea of photonic crystals was tested by the early experiments of Yablonovitch and Gmitter (1989). Since then the periodic media have been treated not only in the books devoted to optics (Born and Wolf 1999, Yeh 1988, Pedrotti and Pedrotti 1993) but also in monographs on the photonic crystals, e.g. (Joannapoulos 1995). For illustration, we will treat a one-dimensional photonic crystal as Mishra and Satpathy (2003), whose contribution consists in an interesting method of solution, which is not reproduced here. We combine the assumption of a periodic medium with that of a rectangular waveguide. Waveguides are useful optical devices. An optical circuit can be made using them and various optical couplers and switches. Classical theory of optical waveguides and couplers has been elaborated in 1970s (Yariv and Yeh 1984), nonclassical light has been proposed as a source for improving performance, and the quantum theory has been gaining importance. Recently, quantum entanglement has been pointed out as another resource. Quantum descriptions may be very simple, but essentially, they ought to be based on a perfect knowledge of quantization. By way of paradox, quantization is based on classical normal modes. Therefore, it is appropriate to concentrate ourselves on normal modes of rectangular mirror waveguide. It will be assumed that the waveguide is filled with homogeneous refractive medium. As this has the only effect of changing the speed of light, it will be assumed that a nonhomogeneity in a finite segment of the waveguide is present to model a 6.1.1 Classical Description of Electromagnetic Field Vast literature has been devoted to the solution of the Maxwell equations, and their value for the wave and quantum optics cannot be denied. Depending on the system of physical units used, the Maxwell equations have several forms. Let us mention only two of them, appropriate to the SI units and the Gaussian units. The timedependent vector fields, which enter these equations, are E(x, y, z, t), the electric strength vector field, and B(x, y, z, t), the magnetic induction vector field. In fact, other two fields are used, but they can also be eliminated through the so-called constitutive relations. Saying this we make some simplifying assumptions, but we are tacit about them. The so-called monochromaticity assumption E(x, y, z, t) = E(x, y, z; ω) exp(iωt), B(x, y, z, t) = B(x, y, z; ω) exp(iωt) allows one to treat the time-independent Maxwell equations. As announced, we restrict ourselves to a rectangular mirror waveguide. We assume that it has an infinite length, a width 2ax , and the height 2a y . The coordinate system is chosen so that the z-axis is the axis of the waveguide and the x-, y-axes are parallel with sides of the waveguide. We deal with nonvanishing solutions of the time-independent Maxwell equations ∇× 1 B(x, y, z; ω) − iω(x, y, z; ω)E(x, y, z; ω) = 0, μ(x, y, z; ω) Quantization in Periodic Media ∇ × E(x, y, z; ω) + iωB(x, y, z; ω) = 0, ∇ · [(x, y, z; ω)E(x, y, z; ω)] = 0, ∇ · B(x, y, z; ω) = 0, (6.3) (6.4) (6.5) where E(x, y, z; ω) and B(x, y, z; ω) are vector-valued functions in a domain G = {(x, y, z) : −ax < x < ax , −a y < y < a y , −∞ < z < ∞} and ω > 0 is a parameter. The desired solutions are to obey the boundary conditions n(x, y, z) × E(x, y, z; ω) = 0, n(x, y, z) · B(x, y, z; ω) = 0, (6.7) (6.8) where n(x, y, z) is any unit outer-pointing normal vector at the point (x, y, z) ∈ ∂G, with ∂G = {(x, y, z) : −ax ≤ x ≤ ax ∧ |y| = a y ∧ −∞ < z < ∞ ∨ |x| = ax ∧ −a y ≤ y ≤ a y ∧ −∞ < z < ∞ }. The boundary conditions (6.7) and (6.8) are a formal expression of the fact that the walls of the waveguide are perfect mirrors. Here μ(x, y, z; ω) = μ0 , (x, y, z; ω) is a function defined up to a finite number of z-values such that (x, y, z; ω) = 0 r0 , for z < 0, z > L , = 0 ¯r (z), for 0 < z < L , with μ0 > 0, μ0 = 4π × 10−7 Hm−1 the free-space magnetic permeability, 0 > 0 the free-space electric permittivity, r0 > 0, ¯r (z) are relative electric permittivities of the medium. The medium electric permittivity ¯ (z) = 0 ¯r (z) has a period Λ, DL Λ | L, or ¯ (z) = ¯ (z + Λ), and is a positive function, 0 (z) dz = L0 r0 . It is a formal expression of the idea that a plate with a periodically modulated permittivity is contained in the waveguide. 6.1.2 Modal Functions (i) Homogeneous refractive medium We assume that the waveguide is filled with a homogeneous nonmagnetic refractive medium, which is also nondispersive and lossless for simplicity. This assumption holds on infinite intervals (−∞, 0) and (L , ∞) in the z-coordinate. On the finite 6 Periodic and Disordered Media interval (0, L), the medium is not homogeneous, but it is periodic. On average, its electric permittivity equals to that of the homogeneous medium assumed. For illustration, we will consider examples where solutions have finite norms, in Section 6.1.4. Let us assume that the relation (x, y, z; ω) = 0 r0 holds everywhere in G. We will express the solution in the form E(x, y, z; ω) = E(x, y) exp (−ik z z), B(x, y, z; ω) = B(x, y) exp(−ik z z), with k z = 0. In analogy with the electromagnetic-field theory, from equations (6.2), (6.3), (6.4), and (6.5) we derive the time-independent wave equation Δ+ ω2 v2 C = 0, where v=√ 1 , 0 r0 μ0 and C ≡ C(x, y, z; ω) stands for E and B substitutionally. Respecting (6.11), we may rewrite (6.12) in the form ∂2 ∂2 + 2 2 ∂x ∂y ω2 2 − k z C = 0. v2 Introducing the notation Er (x, y), Br (x, y), r = x, y, z, for the components of the vectors E(x, y), B(x, y), respectively, we have (Greiner 1998, p. 366) mπ nπ (x + ax ) sin (y + a y ) , 2ax 2a y mπ nπ β sin (x + ax ) cos (y + a y ) , 2ax 2a y mπ nπ iγ sin (x + ax ) sin (y + a y ) , 2ax 2a y mπ nπ iα sin (x + ax ) cos (y + a y ) , 2ax 2a y mπ nπ iβ cos (x + ax ) sin (y + a y ) , 2ax 2a y mπ nπ γ cos (x + ax ) cos (y + a y ) , 2ax 2a y E x (x, y) = α cos E y (x, y) = E z (x, y) = Bx (x, y) = B y (x, y) = Bz (x, y) = Quantization in Periodic Media where m, n = 0, 1, . . . , ∞. Equation (6.14) yields the relation among m, n, k z , and ω, nπ 2 ω2 mπ 2 + + k z2 = 2 . (6.17) 2ax 2a y v No solution of this kind exists if ω < ωg , where nπ 2 mπ 2 ωg = ωmn = v + . 2ax 2a y We can specify two linearly independent solutions for m, n = 0, 1, . . . , ∞, m + n ≥ 1, ky γ , + k 2y TE kx = −iω 2 γ , k x + k 2y TE αTE = iω βTE αTM βTM k x2 γTE = 0, kx kz = 2 γTM , k x + k 2y k y kz = 2 γTM , k x + k 2y γTM = γTM , kx kz γ , + k 2y TE k y kz = 2 γ , k x + k 2y TE αTE = βTE k x2 γTE = γTE , iωk y 1 αTM = 2 2 γTM , v k x + k 2y 1 −iωk x βTM = 2 2 γTM , v k x + k 2y γTM = 0. Here TE means transverse electric and TM transverse magnetic. For m = 0, a TE solution exists, but no TM solutions exist; for n = 0, the same occurs and otherwise, , ky ≡ both solutions exist. In (6.19) and (6.20), k x , k y are abbreviations, k x ≡ mπ 2ax nπ . Let us remark that γ and γ are complex parameters. TM TE 2a y (ii) Plate with a periodically modulated permittivity We generalize the first of relations (6.11) to the form E(x, y, z; ω) = E(x, y)u(z), where u(z) is an unknown function, but E(x, y) is given by relations (6.15). We distinguish the case of a TE solution from the case of a TM solution. In the latter case, this form cannot be required, since in the case of a TM solution, E(x, y) depends on k z . In fact the components k x , k y are meaningful in the inhomogeneous medium with the z-dependence of the dielectric constant, not the component k z . In the case of a TE solution, the relation ∇ · E(x; ω) = 0 6 Periodic and Disordered Media holds, where x ≡ (x, y, z). The Maxwell equations are equivalent to a Helmholtz equation. It is derived that the function u(z) obeys the ordinary differential equation d2 u(z) + k˜ z2 (z)u(z) = 0, −∞ < z < ∞, dz 2 where k˜ z2 (z) = k˜ 2 (z) − (k x2 + k 2y ), with k˜ 2 (z) ≡ a solution of the equation ω2 (z). c2 r A solution will be “sewn” of d2 u(z) + k z2 u(z) = 0, − ∞ < z < ∞ dz 2 and a solution of the equation d2 u(z) + k¯ z2 (z)u(z) = 0, − ∞ < z < ∞, dz 2 2 where k¯ z2 (z) = k¯ 2 (z) − (k x2 + k 2y ), with k¯ 2 (z) ≡ ωc2 ¯r (z). The general solution of equation (6.25) has the form u(z) = cB f B (z)eikB z + cF f F (z)e−ikB z , where cB and cF are arbitrary complex numbers, eikB Λ is a Bloch factor, with either π or Im kB > 0. The functions f B (z) and f F (z) fulfil the conditions 0 < kB < Λ f B (0) = f F (0) = 1 and have the period Λ. On differentiating relation (6.26) with respect to z, it can be seen that also dzd u(z) has the same form in principle. And so the Bloch functions u(z) = f B (z)eikB z , f F (z)e−ikB z are to be determined as the solutions of Equation (6.25) having along with the , , and obeying the conditions respective derivatives some initial data u(0), du dz 0 , , du ,, du ,, u(Λ) = λu(0), =λ , dz ,z=Λ dz ,z=0 where λ is a complex number. From the fact that equation (6.25) does not contain , , d u(z), it can be derived that the transition matrix from the initial data u(0), du(z) dz dz , z=0 to the respective values at z = Λ is unimodular. And so λ = exp (±ikB Λ). Since the transition matrix is real, kB is either real or pure imaginary. Quantization in Periodic Media For illustration we may consider a Kronig–Penney dielectric (Mishra and Satpathy 2003). In this case, it holds k¯ z (z) = k1z , 0 < z < Λ2 , k2z , Λ2 < z < Λ. The equation for kB is obtained in the form Λ Λ cos(kB Λ) = cos k1z cos k2z 2 2 1 k1z Λ Λ k2z − + sin k2z . sin k1z 2 k2z k1z 2 2 We assume that the general solution of equation (6.23) has the form ⎧ (−) ik z cB e z + cF(−) e−ikz z , for z < 0, ⎪ ⎪ ⎪ ⎪ ⎨ u(z) = cB(0) f B (z)eikB z + cF(0) f F (z)e−ikB z , for 0 < z < L , ⎪ ⎪ ⎪ ⎪ ⎩ (+) ikz z cB e + cF(+) e−ikz z , for z > L . Here f B (z), f F (z) are periodic functions such that f B (0) = f F (0) = 1. The solution along with its first derivative is continuous at z = 0, L, which can be expressed as relations between cB(−) , cF(−) , cB(0) , cF(0) , cB(+) , cF(+) . These coefficients are arbitrary otherwise. The continuity of the (original) function at z = 0, L can be written as cB(−) + cF(−) = cB(0) + cF(0) , cB(0) eikB L + cF(0) e−ikB L = cB(+) eikz L + cF(+) e−ikz L . The continuity of the derivative of the function at z = 0, L can be expressed as the relations k z cB(−) − k z cF(−) = ˜f B cB(0) + ˜f F cF(0) , ˜f B c(0) eikB L + ˜f F c(0) e−ikB L = k z c(+) eikz L − k z c(+) e−ikz L , B F B F , , , , ˜f B = −i d f B (z)eikB z , , ˜f F = −i d f F (z)e−ikB z , . , , dz dz 0 0 One of parametrizations of the solution of equation (6.23) is a dependence on cB(−) , cF(−) , which is 6 Periodic and Disordered Media 1 ˜f F − ˜f B cF(0) ⎛ ⎞ ⎛ (−) ⎞ ˜f F − k z ˜f F + k z cB ⎠⎝ ⎠, ×⎝ (−) ˜ ˜ − fB + kz − fB − kz cF ⎞ ⎛ (−) ⎞ ⎛ (+) ⎞ ⎛ sBB sBF cB cB ⎠=⎝ ⎠⎝ ⎠, ⎝ (+) (−) s s cF cF FB FF ⎝ where 2 −ikz L cos(kB L)k z − ˜f F + ˜f B e D 2 + i e−ikz L sin(kB L) k z2 − ˜f B ˜f F , D 2 = eikz L cos(kB L)k z − ˜f F + ˜f B D 2 − i eikz L sin(kB L) k z2 − ˜f B ˜f F , D 2 −ikz L = −i e sin(kB L) k z + ˜f B k z + ˜f F , D 2 ikz L =i e sin(kB L) k z − ˜f B k z − ˜f F , D sBB = sBF sFB (6.39) (6.40) (6.41) with D = 2k z − ˜f F + ˜f B . Another of parametrizations of the solution (6.32) is a dependence on cB(+) , cF(−) . Of remaining four coefficients, we first determine only cB(−) , cF(+) . Their form is interesting as an input–output relation ⎛ ⎝ cB(−) cF(+) t r r t ⎞⎛ ⎠⎝ cB(+) cF(−) ⎞ ⎠, where sBF sFB 1 , r = , t = , sBB sBB sBB sBB sFF − sFB sBF 1 t= = , sBB sBB r =− (6.44) (6.45) Quantization in Periodic Media since the determinant of the “scattering matrix” is , , , sBB sBF , , , , sFB sFF , = 1. Finally, we may also express cB(0) and cF(0) , ⎛ t ⎠= ⎝ ˜ f F − ˜f B cF(0) ⎛ ⎞ cB(+) ˜f F − k z e−i(kB +kz )L ˜f F + k z ⎠. ⎝ − ˜f B + k z ei(kB −kz )L − ˜f B − k z (−) cF In the case of a TM solution, we proceed quite generally. We introduce unknown functions α(z), β(z), γ (z) such that E x (x, y, z; ω) = α(z) cos [k x (x + ax )] sin k y (y + a y ) , E y (x, y, z; ω) = β(z) sin [k x (x + ax )] cos k y (y + a y ) , (6.48) E z (x, y, z; ω) = iγ (z) sin [k x (x + ax )] sin k y (y + a y ) . The functions α(z) and β(z) are linearly dependent, − k y α(z) + k x β(z) = 0, −∞ < z < ∞. Equation (6.23) is replaced by d k x α(z) + k y β(z) = −iγ (z)k˜ z2 (z), dz d ˜2 k (z)γ (z) = −ik˜ 2 (z) k x α(z) + k y β(z) , dz ˜ ¯ ˜ = k(z), k˜ z (z) = k¯ z (z) for where k(z) = k, k˜ z (z) = k z for z < 0, z > L and k(z) ¯ ¯ 0 < z < L, where k(z), k z (z) have the period Λ. The equations d k x α(z) + k y β(z) = −iγ (z)k z2 , dz d 2 k γ (z) = −ik 2 k x α(z) + k y β(z) dz have a general solution k x α(z) + k y β(z) = cB k z eikz z + cF k z e−ikz z , k 2 γ (z) = −cB k 2 eikz z + cF k 2 e−ikz z , where cB and cF are arbitrary complex numbers. Let us note properties of the equations 6 Periodic and Disordered Media d k x α(z) + k y β(z) = −iγ (z)k¯ z2 (z), dz d ¯2 k (z)γ (z) = −ik¯ 2 (z) k x α(z) + k y β(z) . dz The general solution of equations (6.53) has the form k x α(z) + k y β(z) = cB f B⊥ (z)eikB z + cF f F⊥ (z)e−ikB z , k¯ 2 (z)γ (z) = cB f B3 (z)eikB z + cF f F3 (z)e−ikB z , where cB and cF are arbitrary complex numbers. The functions f B⊥ (z), f F⊥ (z) fulfil the conditions f B⊥ (0) = f F⊥ (0) = k z . It can be expected, e.g., that the inclusion of magnetic properties, with the neglect of frequency dependence, should lead to a small complication and a similar treatment of the TE solution. For α (z), β(z), γ (z), the Bloch factor eikB Λ is of importance. For the Kronig– Penney dielectric, the Bloch factor can be determined from the equation Λ Λ cos(kB Λ) = cos k1z cos k2z 2 2 2 2 k2 k1z Λ Λ 1 k1 k2z + sin k . (6.56) − sin k 1z 2z 2 k22 k1z 2 2 k12 k2z It is well known (Mishra and Salpathy 2003) that the relations (6.31) and (6.56) can be written as a single one when appropriate notation is adopted. In fact, we observe the change k1z → k12 k2 , k2z → 2 . k1z k2z We assume a general solution, which is sewn from a solution of equations (6.51) for z < 0, from a solution of equations (6.53) for 0 < z < L, and from another solution of equations (6.51) for z > L. Then we obtain modified formulae (6.44) and (6.45) adopting the changes kz → k2 ˜ 1 1 , f B → − f B3 (0), ˜f F → − f F3 (0). kz kz kz Quantization of the electromagnetic field propagating in an ideal uniform waveguide is (for a fixed transverse mode) assumed in the article (Marinescu 1992), and it is shown that the appropriate scalar field obeys a Klein–Gordon type equation. Even a Dirac-type equation for the waveguides is considered in that article. Quantization in Periodic Media 6.1.3 Method of Coupled Modes To any ω > min{ω10 , ω01 }, there exists only a finite number of modal functions (6.11), with (6.15) and (6.16). We will let J (ω), J (ω) ⊂ J , denote the corresponding index set. We can partition this set into disjoint sets, J (ω) = J+ (ω) ∪ J0 (ω) ∪ J− (ω), where J+ (ω) is the set of indices that have k z > 0, J0 (ω) is the set of indices that have k z = 0, and J− (ω) is the set of indices that have k z < 0. To any modal function E jin (x) with jin ∈ J+ (ω), one (on the basis of physical knowledge of propagation) looks for a solution of the problem formulated in Section 6.1.2 as follows. If z < 0 then E(x) = A j (0)E j (x), B j (0)B j (x), j∈{ jin }∪J− (ω) B(x) = j∈{ jin }∪J− (ω) where A j (0) = B j (0) = 1 for j = jin . If z > L then E(x) = A j (L)E j (x), B j (L)B j (x). j∈J+ (ω) B(x) = j∈J+ (ω) The coupled-mode method determines a form of the solution even for z ∈ [0, L] and it finds complex numbers A j (0), j ∈ J− (ω), and A j (L), j ∈ J+ (ω). This method is approximate. If 0 ≤ z ≤ L then E(x) = A j (z)E j (x), B j (z)B j (x). j∈J+ (ω)∪J− (ω) B(x) = j∈J+ (ω)∪J− (ω) When the dependence of A j (z), B j (z) on the z-coordinate is weak (their variation is slow), we may consider EF,B (x) = A j (z)E j (x), B j (z)B j (x). j∈J± (ω) BF,B (x) = j∈J± (ω) 6 Periodic and Disordered Media A j (z) = , , ,E j (x⊥ , z),2 d2 x⊥ E∗j (x⊥ , z) · EF,B (x)d2 x⊥ . Similarly B j (z) is determined. We assume TE waves. A possible use of the method for TM waves means another, rough, approximation. We concentrate on the electric field and, therefore, we solve the Helmholtz equation ∇ 2 E(x) + r (z) ω2 E(x) = 0. c2 We will expound a means of describing the transformation E(x) → [r (z) − r0 ]E(x), 0 ≤ z ≤ L . In fact we assume that [r (z) − r0 ]E(x) = E (x) = Aj (z)E j (x), j∈J+ (ω)∪J− (ω) {[r (z) − r0 ] E(x)}F,B = EF,B (x) = Aj (z)E j (x), E∗j (x⊥ , z) · EF,B (x)d2 x⊥ . j∈J± (ω) where Aj (z) = , , ,E j (x⊥ , z),2 d2 x⊥ [r (z) − r0 ]E j (x) = K j j (z)E j (x), j∈J+ (ω)∪J− (ω) E j (x) F,B = K j j (z)E j (x), j ∈J± (ω) K j j (z) = , , ,E j (x⊥ , z),2 d2 x⊥ E∗j (x⊥ , z) · Ej (x) F,B d2 x⊥ , i.e., Aj (z) = j ∈J+ (ω)∪J− (ω) K j j (z) A j (z). Quantization in Periodic Media Further, the coupled-mode theory comprises an approximate expression (Yariv and Yeh 1984, Section 6.4, p. 197) ∇ 2 E(x) = h j (x), j∈J+ (ω)∪J− (ω) where h j (x) = ∇ 2 A j (z)E j (x) + 2∇ A j (z) · ∇E j (x) + A j (z)∇ 2 E j (x) ≈ −2ik j z d ω2 A j (z)E j (x) − r0 2 A j (z)E j (x), dz c or ∇ 2 E(x) ≈ −2i k jz j∈J+ (ω)∪J− (ω) d ω2 A j (z)E j (x) − r0 2 E(x). dz c On substituting it into (6.69) and using (6.71), (6.77), we obtain a system of differential equations − 2ik j z d ω2 A j (z) = − 2 dz c K j j (z) A j (z), j ∈J+ (ω)∪J− (ω) to be solved on the boundary conditions A j (0) = 1 for j = jin , 0 for j ∈ J+ (ω)\{ jin }, A j (L) = 0 for j ∈ J− (ω). (6.82) (6.83) We will reduce the equations to the form k jz d i ω2 A j (z) = − 2 |k j z | dz 2c j ∈J+ (ω)∪J− (ω) K j j (z) A j (z). |k j z | On physical grounds, it is expected that j ∈J+ (ω)∪J− is independent of z. We have k jz | A j (z)|2 |k | j z (ω) 6 Periodic and Disordered Media i ω2 2 c2 j ∈J+ k jz d A j (z) A∗j (z) + c.c. |k j z | dz K j j (z) A j (z) A∗j (z) + c.c. ≡ 0, |k | jz (ω)∪J (ω) where c.c. means complex conjugation, or the expected property is K ∗j j (z) K j j (z) = . |k j z | |k j z | 6.1.4 Normalized Modes of the Electromagnetic Field The normalization of modal functions of the electromagnetic field, which is made for the sake of quantization, can be based, in optics, on a simple connection of the vector potential with the electric-field strength vector. This connection follows from the use of the so-called Coulomb gauge. First, we assume that only the electromagnetic field is present in the cavity (in the so-called nonrelativistic approximation). In quantum optics, the simple instance of a perfectly closed cavity or resonator is often considered, which can be described mathematically in terms of the subset G of the usual Euclidean space R 3 with the boundary ∂G. ˆ In optics, the quantization is a definition of the vector potential operator A(x, t) by the relation (phot) (phot)∗ † ˆ (x, t)aˆ j , A j (x, t)aˆ j + A j (6.88) A(x, t) = j∈J where J is an index set, and the photon annihilation and creation operators aˆ j and † aˆ j in the jth mode fulfil the commutation relations † ˆ [aˆ j , aˆ j ] = [aˆ † , aˆ † ] = 0. ˆ [aˆ j , aˆ j ] = δ j j 1, j j (phot) A j (x, t) u j (x) exp(−iω j t), 20 ω j with the reduced Planck constant, 0 the vacuum (free-space) electric permittivity, ω j and u j (x) satisfying the Helmholtz equation ∇ 2 u j (x) + ω2j c2 u j (x) = 0, the transversality condition ∇ · u j (x) = 0, Quantization in Periodic Media and the boundary conditions , nx × u j (x) ,∂G = 0, , , nx · ∇ × u j (x) ,∂G = (nx × ∇) · u j (x),∂G = 0, (6.93) (6.94) where nx is the normal vector at the point x. It is required that the modal functions u j (x) be orthogonal and normalized as expressed by the relation u j (x) · u j (x) d3 x = δ j j . From relation (6.90), it is seen that the harmonic time dependence has the form exp(−iωt), not exp(iωt) as in relation (6.1). We have adopted this change although it may be a source of obscurity. (i) Empty rectangular cavity For illustration, we will assume that G = {x : −ax < x < ax , −a y < y < a y , −az < z < az }, where ax , a y , az are positive. It can be proved that the index set J is a collection of j = (n x , n y , n z , s), where nr ∈ 0, 1, . . . , ∞, r = x, y, z, s = TE, TM, n x > 0 or s = TE, n y > 0 or s = TE, and n z > 0 or s = TM, n x + n y + n z ≥ 2. The solutions ω j have the form / ω j = c k x2 + k 2y + k z2 , with c= √ nr π 1 , kr = , 0 μ0 2ar and the solutions u j (x) are connected with the classical solutions nyπ nx π E j x (x) = α j cos (x + ax ) sin (y + a y ) 2ax 2a y nz π (z + az ) , × sin 2az nyπ nx π (x + ax ) cos (y + a y ) E j y (x) = β j sin 2ax 2a y nz π (z + az ) , × sin 2az 6 Periodic and Disordered Media nyπ nx π (x + ax ) sin (y + a y ) 2ax 2a y nz π × cos (z + az ) , 2az E j z (x) = γ j sin to the equivalent boundary-value problem ωj E j = 0, c2 ∇ · B j = 0, ∇ × E j − iω j B j = 0, , , nx · B j (x, ω j ),∂G = 0, nx × E j (x, ω j ),∂G = 0. ∇ · E j = 0, ∇ × B j + i (6.100) (6.101) (6.102) Here α j = −iω j k x2 ky kx γ j , β j = iω j 2 γ , 2 + ky k x + k 2y j γ j = 0, for s = TE, k y kz kx kz γj, βj = − 2 γ j , for s = TM, αj = − 2 k x + k 2y k x + k 2y (6.103) (6.104) with γ j , γ j complex parameters. The connecting relation is u j (x) = −i 20 (phot) E (x), ω j j where (phot) (x) = E jr (x)er , r=x,y,z (phot) with E jr (x) given by the formulae (6.99), (6.103), (6.104), in which k x2 + k 2y ζ, γ j = 2 0 ω j (1 + δkx 0 )(1 + δk y 0 )(1 − δkz 0 )V j k x2 + k 2y γ j = 2c ζj 0 ω j (1 − δkx 0 )(1 − δk y 0 )(1 + δkz 0 )V are substituted, V = 8ax a y az , |ζ j | = |ζ j | = 1. It can be easily derived that the vector-valued functions u j (x), j ∈ J , satisfy a completeness relation, Quantization in Periodic Media u j (x)u∗j (x ) = δ(x − x )1 − ∇x ∇x G(x, x ), where G(x, x ) is a Green’s function for the Laplace operator (the Dirichlet problem). (ii) Rectangular cavity filled with a refractive medium In this case, the quantization can be performed according to the relations (6.88), (6.89), (6.90), with ω j and u j (x) satisfying the Helmholtz equation ∇ 2 u j (x) + r0 ω2j c2 u j (x) = 0, where r0 is the relative electric permittivity of the medium, the transversality condition (6.92), and the boundary conditions (6.93) and (6.94). It is required that the modal functions u j (x) be orthogonal and normalized as expressed by the relation r0 u∗j (x) · u j (x) d3 x = δ j j . For illustration, we will assume that G is defined by (6.96). The solutions ω j have the form / ω j = v k x2 + k 2y + k z2 , with v=√ 1 nr π , kr = , r = x, y, z, 0 r0 μ0 2ar and the solutions u j (x) are connected with the solutions of the form (6.99) to the equivalent boundary-value problem ωj E j = 0, c2 ∇ · B j = 0, ∇ × E j − iω j B j = 0, , , nx · B j (x, ω j ),∂G = 0, nx × E j (x, ω j ),∂G = 0. ∇ · E j = 0, ∇ × B j + ir0 The connecting relation is (6.105) with (6.106), where E jr formulae (6.99), (6.103), (6.104), in which (6.114) (6.115) (6.116) (x) are given by the 6 Periodic and Disordered Media k x2 + k 2y =2 ζ, 0 r0 ω j (1 + δkx 0 )(1 + δk y 0 )(1 − δkz 0 )V j k x2 + k 2y γ j = 2c ζj 0 r0 ω j (1 − δkx 0 )(1 − δk y 0 )(1 + δkz 0 )V γ j are substituted. It can be easily derived that the vector-valued functions u j (x), j ∈ J , satisfy a completeness relation r0 u j (x)u∗j (x ) = δ(x − x )1 − ∇x ∇x G(x, x ), (6.119) j∈J where G(x, x ) is a Green’s function for the Laplace operator (the Dirichlet problem). (iii) Rectangular waveguide filled with a refractive medium and located in a flat space We will consider a subset G = G ⊥ × S1 (−az ≤ z < az ), with the boundary ∂G = ∂G ⊥ × S1 (−az ≤ z < az ), of a flat non-Euclidean space R2 × S1 (−az ≤ z < az ), where S1 (−az ≤ z < az ) means a topological circle of the length 2az . In this case, the quantization can be performed according to the relations (6.88), (6.89), (6.90), with ω j and u j (x) satisfying the Helmholtz equation of the form (6.110), the transversality condition (6.92), and the boundary conditions (6.93) and (6.94). It is required that the modal functions u j (x) be orthogonal and normalized as expressed by the relation (6.111). For illustration, we will assume that G = {x : −ax < x < ax , −a y < y < a y , −az ≤ z < az }, where ax , a y , az are positive. It can be proved that the index set J is a collection of j = (n x , n y , n z , s), where nr ∈ {0} ∪ N, r = x, y, n z ∈ Z, s = TE, TM, n x > 0 or s = TE, n y > 0 or s = TE, and n x + n y ≥ 1. The solutions ω j have the form / (6.121) ω j = v k x2 + k 2y + k z2 , with nr π nz π 1 , kr = , r = x, y, k z = , v=√ 0 r0 μ0 2ar az and the solutions u j (x) are connected with the classical solutions nyπ nx π (x + ax ) sin (y + a y ) E j x (x) = α j cos 2ax 2a y nz π (z + az ) , × exp i az Quantization in Periodic Media nyπ nx π (x + ax ) cos (y + a y ) 2ax 2a y nz π × exp i (z + az ) , az nyπ nx π (x + ax ) sin (y + a y ) E j z (x) = iγ j sin 2ax 2a y nz π × exp i (z + az ) , az E j y (x) = β j sin to the equivalent boundary-value problem of the forms (6.114), (6.115), (6.116). (phot) The connecting relation is (6.105) with (6.106), where E jr (x) are given by the formulae (6.123), (6.103), (6.104), in which k x2 + k 2y 2 (6.124) ζ, γj = 0 r0 ω j (1 + δkx 0 )(1 + δk y 0 )V j k x2 + k 2y 2 γj = c (6.125) ζj 0 r0 ω j (1 − δkx 0 )(1 − δk y 0 )V are substituted. It can be easily derived that the vector-valued functions u j (x), j ∈ J , satisfy a completeness relation (6.119). Usually the scalar product is considered r (z)u∗ (x) · u(x) d3 x. (6.126) G With these solutions of the problem defined, they have finite norms. (iv) Rectangular waveguide filled with a homogeneous refractive medium We will consider a subset G = G ⊥ × R1 , with the boundary ∂G = ∂G ⊥ × R1 , of the usual Euclidean space R3 . In optics, the quantization may be a definition of the ˆ vector potential operator A(x, t) by the relation ∞ (phot) ˆ A(x, t) = A j⊥ (x, k z , t)aˆ j⊥ (k z ) j⊥ ∈J⊥ (phot)∗ † A j⊥ (x, k z , t)aˆ j⊥ (k z ) dk z , where J⊥ is an index set and the photon annihilation, and creation operators aˆ j⊥ (k z ) † and aˆ j⊥ (k z ) in the mode ( j⊥ , k z ) fulfil the commutation relations † ˆ [aˆ j⊥ (k z ), aˆ j (k z )] = δ j⊥ j⊥ δ(k z − k z )1, ⊥ † † ˆ [aˆ j⊥ (k z ), aˆ j⊥ (k z )] = [aˆ j⊥ (k z ), aˆ j (k z )] = 0. ⊥ 6 Periodic and Disordered Media Further (phot) A j⊥ (x, k z , t) = u j (x, k z ) exp[−iω j⊥ (k z )t], 20 ω j⊥ (k z ) ⊥ with 0 the vacuum (free-space) electric u j⊥ (x, k z ) satisfying the Helmholtz equation ∇ 2 u j⊥ (x, k z ) + r0 ω2j⊥ (k z ) c2 u j⊥ (x, k z ) = 0, (6.129) ω j⊥ (k z ) the transversality condition ∇ · u j⊥ (x, k z ) = 0, and the boundary conditions , nx × u j⊥ (x, k z ) ,∂G = 0, , , nx · ∇ × u j⊥ (x, k z ) ,∂G = (nx × ∇) · u j⊥ (x, k z ),∂G = 0, (6.132) (6.133) where nx is the normal vector at the point x. It is required that the modal functions u j⊥ (x, k z ) be orthogonal and normalized as expressed by the relation r0 u∗j⊥ kz (x, k z ) · u j⊥ kz (x, k z ) d3 x = δ j⊥ j⊥ δ(k z − k z ). (6.134) G For illustration, we will assume that G = {x : −ax < x < ax , −a y < y < a y , −∞ < z < ∞}, where ax , a y are positive. It can be proved that the index set J⊥ is a collection of j⊥ = (n x , n y , s), where n r ∈ {0} ∪ N, r = x, y, s = TE, TM, n x > 0 or s = TE, n y > 0 or s = TE, and n x + n y ≥ 1. The solutions ω j⊥ (k z ) have the form / (6.136) ω j⊥ (k z ) = v k x2 + k 2y + k z2 , with v=√ nr π 1 , kr = , r = x, y, 0 r0 μ0 2ar and the solutions u j⊥ (x, k z ) are connected with the classical solutions nx π E j⊥ x (x, k z ) = α j⊥ (k z ) cos (x + ax ) 2ax nyπ × sin (y + a y ) exp (ik z z) , 2a y Quantization in Periodic Media nx π E j⊥ y (x, k z ) = β j⊥ (k z ) sin (x + ax ) 2ax nyπ × cos (y + a y ) exp (ik z z) , 2a y nx π (x + ax ) E j⊥ z (x, k z ) = iγ j⊥ (k z ) sin 2ax nyπ × sin (y + a y ) exp (ik z z) , 2a y to the equivalent boundary-value problem ω j⊥ (k z ) E j⊥ = 0, c2 ∇ · B j⊥ = 0, ∇ × E j⊥ − iω j⊥ (k z )B j⊥ = 0, , , nx · B j⊥ ,∂G = 0, nx × E j⊥ ,∂G = 0. ∇ · E j⊥ = 0, ∇ × B j⊥ + ir0 (6.139) (6.140) (6.141) Here E j⊥ ≡ E j⊥ (x, k z , ω j⊥ (k z )), B j⊥ ≡ B j⊥ (x, k z , ω j⊥ (k z )), α j⊥ (k z ) = −iω j⊥ (k z ) k x2 ky kx γ j⊥ , β j⊥ (k z ) = iω j⊥ (k z ) 2 γ , 2 + ky k x + k 2y j⊥ γ j⊥ (k z ) = 0, for s = TE, k y kz kx kz γ j , β j⊥ (k z ) = − 2 γj , α j⊥ (k z ) = − 2 k x + k 2y ⊥ k x + k 2y ⊥ γ j⊥ (k z ) = γ j⊥ , for s = TM, with γ j⊥ , γ j⊥ complex parameters. The connecting relation is u j⊥ (x, k z ) = −i where (phot) E j⊥ (x, k z ) = 20 (phot) E (x, k z ), ω j⊥ j⊥ E j⊥ r (x, k z )er , r=x,y,z (phot) with E j⊥ r (x, k z ) given by the formulae (6.138), (6.142), (6.143), in which k x2 + k 2y γ j⊥ (k z ) = ζ , 0 r0 ω j⊥ (k z ) (1 + δkx 0 )(1 + δk y 0 )π V⊥ j⊥ k x2 + k 2y γ j⊥ (k z ) = c ζj 0 r0 ω j⊥ (k z ) (1 − δkx 0 )(1 − δk y 0 )π V⊥ ⊥ 6 Periodic and Disordered Media are substituted, V⊥ = ax a y , |ζ j⊥ | = |ζ j⊥ | = 1. The vector-valued functions u j⊥ (x, k z ), j⊥ ∈ J⊥ , satisfy a completeness relation, j⊥ ∈J⊥ ∞ −∞ r0 u j⊥ (x, k z )u∗j⊥ (x , k z ) dk z = δ(x − x )1 − ∇x ∇x G(x, x ), where G(x, x ) is a Green’s function for the Laplace operator (the Dirichlet problem). (v) Rectangular waveguide filled with a nonhomogeneous refractive medium Quantum electrodynamics in periodic dielectric media has been treated several times (Caticha and Caticha 1992, Kweon and Lawandy 1995, Tip 1997). For the quantization true modal functions may be useful which we have found in the Section 6.1.2(ii). They have not been presented just as needed, but complex conjugated. We must consider another difficulty that they are not orthogonal. 6.1.5 Quantization in Linear Nonhomogeneous Nonconducting Medium Tip (1997) reminds one of the fact that a quantization of the electromagnetic field is made for a complete quantum description of interaction of radiation with atoms or molecules. These however may be placed in a “photonic material”, and so a quantization of the electromagnetic field in the material is purposeful. Tip (1997) points out the fact that photonic crystals are classical dielectric media with a periodic dielectric permittivity. The periodicity may give rise to a band structure. If a band gap is present and an embedded atom has a transition frequency in the gap, a single-photon emission is inhibited. However, even if no gap develops, decay rates may differ much from their free values. Free decay rates can be obtained through an application of Fermi’s golden rule. Generalization for any electric permittivity and magnetic permeability uses the so-called local density of states related to the classical electromagnetic field again. The author expounds a general approach to the quantization of linear evolution equations. Such an equation for F(t) from a separable, real Hilbert space has the form ∂ F (t) = N F(t) − G(t), N † = −N . ∂t The matter is a quantization of the quantity F(t). Tip (1997) assumes a linear, nonhomogeneous, nonconducting material medium including external currents or a Schr¨odinger quantum particle system. We have mostly adopted the notation and have changed it in part only. In particular, Quantization in Periodic Media 0 = μ0 = 1 upon the choice of units. The case = μ = 1 is called the vacuum case and, with arbitrary and μ, one may still speak of a free electromagnetic field. We will still use the asterisk for complex conjugates, the centred dot in multiplication of matrices, which indicates that they are treated as tensors. In general, an n-component field over the space Rd × R is considered, which satisfies the equation ∂ f(x, t) = M (x, ∇) · f(x, t), ∂t where x ∈ Rd , and M ≡ M (x, ∇) is an n × n matrix, whose entries are real partial differential operators with, in general, variable coefficients. There exists a bounded, real, invertible matrix ρ(x) with bounded inverse such that M† · ρ 2 (x) + ρ 2 (x) · M = 0, or the “energy” E= [ρ(x) · f(x, t)]2 d3 x can be introduced. Considering F(x, t) = ρ(x) · f(x, t), N=ρ·M·ρ (6.153) (6.154) where N ≡ N (x, ∇), we obtain the equations ∂ F(x, t) = N (x, ∇) · F(x, t), ∂t N† = −N. (6.155) (6.156) On introducing, in addition, a norm F(x, t)2 = 2E, equation (6.155) describes a unitary time evolution on the real space Hr = L 2 (Rd , dx; Rn ) F(x, t) = exp(Nt) · F(x, 0). Tip (1997) intends to work with a real Hilbert space as long as possible. He begins with two examples of field equations with a conservation law. Proceeding with Maxwell’s equations for a nonconducting material medium, he writes the Maxwell equations, which are equations for real waves originally. He assumes that 6 Periodic and Disordered Media (x) and μ(x) are real, smooth, bounded from below and above by positive constants. For J = 0, the energy is conserved, 1 1 2 2 [B(x, t)] d3 x E= (x)[E(x, t)] + 2 μ(x) 1 (6.159) |F(x, t)|2 d3 x, = 2 where F(x, t) = √ E(x, t) F1 (x, t) . = 1 √ B(x, t) F2 (x, t) μ Relation (6.154) becomes N= 0 N12 N21 0 0 − √1 (ε · ∇) √1μ 1 1 √ (ε · ∇) √ 0 μ 1 , where ε is the Levi-Civit`a pseudotensor, or N = W · N0 · W, where N0 = 0 −ε · ∇ ε·∇ 0 √1 1 0 √1 1 μ 1 . The orthogonal eigenprojector of the matrix N at the eigenvalue 0 is √ √ P1 0 ∇[∇ · ∇]−1 ∇ 0 √ √ . P0 = = 0 P2 μ∇[∇ · μ∇]−1 ∇ μ 0 Here [∇ · ∇]−1 and [∇ · μ∇]−1 can be expressed by an integral transform in the vacuum case. Tip (1997) pays attention to the Helmholtz operators and to the scattering theory, which is not reproduced here. The Helmholtz operators are 1 1 1 H1 = −N12 N21 = √ (ε · ∇) · (ε · ∇) √ , μ 1 1 1 H2 = −N21 N12 = √ (ε · ∇) · (ε · ∇) √ . μ μ Let uλα denote eigenvectors of H1 , H1 · uλα = λ2 uλα , Quantization in Periodic Media where λ ≥ 0 and α distinguish eigenvectors at the same eigenvalue. As H1 is a real operator, u∗λα differs from some uλβ by a constant factor only. We can always use −1 real eigenvectors. As H2 = N−1 12 · H1 · N12 = N21 · H1 · N21 , the eigenvector H2 can be obtained for instance as N21 · uλα . The exposition of the Lagrange formalism is introduced by a warning that in the case 0 N12 † , N21 = −N12 , (6.167) H = H1 ⊕ H2 , N = N21 0 equation (6.149) for G = 0 is not obtained using the Lagrange formalism, if we F1 , take F for the coordinate field. For the two components of the vector F = F2 separate equations are obtained and their connection is lost. It is recommended to use ξ = N −1 F as coordinate field. In the scalar wave case, this means to define the coordinate field using solution of a generalization of the Poisson equation. In the Maxwell case, N cannot be inverted due to the zero eigenvalue. Tip (1997) reminds one of projection upon the propagating modes. In the vacuum situation ⊥ E ξ1 (6.168) = N−1 0 ξ2 B is introduced. The quantity −ξ 1 is the vector potential A in the Coulomb gauge. The theory comprises the relations ∂A E⊥ = − , B = ∇ × A, ∂t ρ(x ) d3 x , ρ(x) = ∇ · E(x), E = −∇Φ, Φ(x) = 4π|x − x | where ρ(x) is the external charge density. In general case, we let P mean the projector upon the null space N = N (N ) of N and Q = 1 − P. It holds that Q1 0 P1 0 (6.170) ,Q= , Q j = 1 − Pj , P= 0 P2 0 Q2 G1 where P j acts in H j , j = 1, 2. We choose G = . As G2 0 N12 N21 0 −1 0 −N21 −1 −N12 0 a quantity −1 Q 2 F2 ξ˜ = N21 6 Periodic and Disordered Media is introduced satisfying the generalization of the Coulomb gauge condition P1 ξ˜ = 0. In terms of this quantity, propagating components can be expressed, ∂ ξ˜ −1 Q2 G2, − N21 ∂t Q 2 F2 = −N21 ξ˜ . Q 1 F1 = − Tip (1997) also considers a generalization of a gauge transformation ξ = ξ˜ + P1 η, η ∈ H1 . −1 −1 We make a replacement N21 Q 2 G 2 → N21 Q 2 G 2 − P1 ∂η , ∂t Q 1 F1 = − ∂η ∂ξ −1 Q 2 G 2 + P1 − N21 ∂t ∂t and Q 2 F2 = −N21 ξ. With respect to the application to the Maxwell equations, we assume that G= G1 0 , P2 F2 |t=0 = 0. Equations (6.176) and (6.177) simplify, ∂η ∂ξ + P1 , ∂t ∂t F2 = −N21 ξ. Q 1 F1 = − (6.179) (6.180) Tip (1997) introduces a generalization ξ0 of the scalar potential Φ of the Maxwell theory. He considers another real Hilbert space H3 and an invertible operator M from H3 into P1 H1 . Then ξ0 = −M ∂η . F1 + ∂t He introduces a generalization ρ of the charge density, ρ = −M † P1 F1 . Quantization in Periodic Media He lets (. , .) j mean the inner product in H j and presents the Lagrangian L= ∂ξ ∂ξ + Mξ0 , + Mξ0 ∂t ∂t − 1 1 (N21 ξ, N21 ξ )2 2 + (G 1 , ξ )1 − (ρ, ξ0 )3 . Tip (1997) considers generalizations of the Coulomb, Lorentz, and temporal gauges. The field η has specific properties in each particular gauge. Expounding the C gauge, he writes the condition P1 ξ = ∂ 2ξ − N12 N21 ξ = Q 1 G 1 , ∂t 2 M † Mξ0 = ρ. which leads to equations He eliminates ξ0 by expressing it in terms of ρ and presents a Lagrangian, the canonical momentum field associated with ξ , π = ξ˙ , and a Hamiltonian. Expounding the L gauge, he needs the condition ∂ξ0 − M † P1 ξ = 0, ∂t ∂ 2 ξ0 + M † Mξ0 = ρ, ∂t 2 which leads to equations ∂ 2ξ − N12 N21 ξ + M M † P1 ξ = G 1 . ∂t 2 He writes a Lagrangian, the momentum field π0 = ξ0 , and a Hamiltonian. Expounding the T gauge, the author writes the condition (and the equation) ξ0 = 0, ∂ 2ξ − N12 N21 ξ = G 1 . ∂t 2 which leads to an equation He presents a Lagrangian and a Hamiltonian. At this level, Tip (1997) reminds one of the familiar method of canonical quantization and concentrates himself on the C gauge case and the L gauge case. The C 6 Periodic and Disordered Media gauge case is discussed in full detail. He chooses an orthonormal basis {u j } in the subspace Q 1 H1 ⊂ H1 and decomposes ξ= ξju j, π = πju j, where ξ j = ξ (u j ) = (ξ, u j )1 , π j = π (u j ) = (π, u j )1 . He expresses the Hamiltonian in terms of ξ j , π j , which obey the Poisson brackets {ξ j , πk } = δ jk . A quantization is accomplished by replacing the Poisson brackets by the commutators ˆ [ξˆ (u j ), πˆ (u k )] = iδ jk 1. We utilize hats here, although Tip (1997) does not write them, or lets them mean something else. He defines the complexifications of the Hilbert spaces and the Fock space F(H) over any Hilbert space H. He introduces the annihilation (creation) ˆ operator a(ϕ) (aˆ † (ϕ)), which acts in F(H), where ϕ is the wave function of an annihilated (created) boson. In a rather complicated manner, it must be shown that ˆ j ), aˆ † (u j ), which act in F(H), the Hamiltonian can be expressed in terms of a(u where H is the complexification of H1 . In the comment on the L gauge case, Tip (1997) chooses, in addition, an orthonormal basis {v j } in the subspace P1 H and such a basis {w j } in the complexification H of the real space H3 . He expounds that the Hamiltonian can be expressed ˆ j ) and their Hermitian conjugates. Here b(w ˆ j ) is the ˆ j ), b(w ˆ j ), a(v in terms of a(u annihilation operator, which acts in F(H ). Applying this formalism to the Maxwell equations, he determines that √ ∂A √ √ ξ = − A, π = − , ξ0 = Φ, M = ∇. ∂t √ √ √ √ √ Moreover, it holds that M† = −∇ , M† ·M = −∇ · ∇, MM† = − ∇∇ . In the C gauge, the condition (6.184) becomes ∇ · A = 0, leading to the equations 1 ∂ 2 (A) + ∇ × (∇ × A) = Q1 · J, ∂t 2 μ √ √ ∇ · ∇Φ = −ρ, (6.196) (6.197) Corrugated Waveguides where J is the external current density. Tip (1997) also presents a Lagrangian and a Hamiltonian. The Poisson brackets have the form {ξ (x), π (y)} = Q1 (x, y), where Q1 (x, y) = δ(x − y) − P1 (x, y), P1 (x, y) are kernels associated with the projectors Q1 and P1 , respectively. In the L gauge, the condition (6.186) becomes ∂Φ + ∇ · A = 0, ∂t leading to the equations of motion ∂ 2Φ − ∇ · ∇Φ = ρ, ∂t 2 1 1 ∂ 2A 1 + ∇ × (∇ × A) − ∇ 2 A = J. 2 ∂t μ A Lagrangian, the momentum field π 0 = The Poisson brackets have the form ∂Φ , ∂t (6.200) (6.201) and a Hamiltonian are also presented. {Φ(x), π 0 (y)} = δ(x, y), {ξ (x), π (y)} = 1δ(x, y). Tip (1997) devotes an appropriate amount of place to the application to the atomic radiative decay in dielectrics. Under the usual assumptions on the dielectric permittivity, a quantization of the Hamiltonian formalism of the electromagnetic field using a method close to the microscopic approach was performed by Tip (1998). A proper definition of band gaps in the periodic case and a new continuity equation for energy flow was obtained, and an S-matrix formalism for scattering from absorbing objects was ˇ worked out. In this way, the generation of Cerenkov and transition radiation have been investigated. 6.2 Corrugated Waveguides The use of dielectric optical waveguides (Marcuse 1974, Yariv and Yeh 1984) and the coupled-mode theory appropriate in the case of corrugated waveguides lead naturally to the task of describing the propagation of a general quantum state in these devices. The question of possibility and impossibility of quantizing the classical description is not posed usually. The apology for it is that the copropagation does not make difficulties. The description can be analyzed in the framework of the theoretical mechanics with the time variable replaced by the propagation distance. We work easily with the Heisenberg and Schr¨odinger pictures and formulate respective 6 Periodic and Disordered Media equations. The quantum momentum operator can be derived with respect to the modal orthonormalization property on a cross section of an optical waveguide (Li˜nares and Nistal 2003). The difficulties due to counterpropagation are regularly disregarded. In fact, the classical description is free of difficulties, and the theoretical mechanics that has been already modified by the replacement of the time variable with the space variable can be extended to involve the counterpropagation (Luis and Peˇrina 1996b). It has been concluded that the quantization will not be successful in the case of counterpropagation in a nonlinear medium (except optical parametric processes). The propagation in a linear dielectric, and in the devices based on such materials, can be quantized in time (Dalton et al. 1996, Dalton et al. 1999b, Dalton and Knight 1999a,b). Here we shall not criticize the use of operators in situations which are classical essentially, since the literature abounds with this (Janszky et al. 1988, Peˇrina 1995a,b, Peˇrina and Peˇrina, Jr. 1995b,c, Korolkova and Peˇrina 1997c). The dependence of outputs on inputs can be formulated in the spatial Heisenberg and Schr¨odinger pictures without respective differential equations. This restriction is due to the peculiar nature of quantization. In the copropagation, the spatio-temporal description (cf. (Lukˇs and Peˇrinov´a 2002)) has not been used and therefore the unusual description has not been used by anybody even in the counterpropagation, where it is hardly dispensable. The simplified quantization enables one to employ knowledge of quantum mechanical descriptions as follows. An amplifier should be described in the Schr¨odinger picture, if we use quasidistributions and the antinormal ordering (the Husimi functions) for the expression of the input–output dependence. An attenuator ought to be described in the Schr¨odinger picture, if we employ the quasidistributions and the normal ordering (the Glauber diagonal representation) to a similar goal. To show this, we present a formal definition of an amplifier and that of an attenuator. These definitions operate with the integrated quantum-noise terms which are needed for the input–output relations to preserve the commutators. The integrated quantumnoise terms can be formally decomposed into creation operators in amplifier case and into annihilation operators in the attenuator case. We believe that we can provide more than such a formal expansion considering fields of modes or mode densities coupled to the counterpropagating modes. These fields are well-known quantum reservoirs, e.g., (Louisell 1973), whose frequency dependence has been replaced by the position dependence. In a more complicated case when in the mode a (let us say) an attenuation proceeds and in the mode b an amplification occurs (Severini et al. 2004), a more complicated behaviour in the Schr¨odinger picture can be expected, there is not a guide to choose the ordering. To describe the amplification and attenuation, one needs a full Heisenberg–Langevin approach. Distributed feedback laser has attracted attention as a device, which in the framework of coupled-wave theory, deserves quantization (Toren and Ben Aryeh 1994). The quantization produced by the authors seems to be very complicated. Unfortunately, Toren and Ben Aryeh (1994) have not developed an overarching quantization of the analysis of amplification and that of the contradirectional coupling. Corrugated Waveguides 6.2.1 Lossless Propagation in a Waveguide Structure Peˇrina Jr., et al. (2004) study an optical parametric process, namely a second-order process. They had studied photonic band-gap structures and continue the work (Tricca et al. 2004) for the second-harmonic generation in a planar nonlinear corrugated waveguide. They consider both the influence of the corrugation of the waveguide on the longitudinal confinement of the signal and idler modes (a modification of (Tricca 2004)) and this influence on the phase matching of the process. They present the decomposition of the electric-field amplitude related to photons of the respective modes E(x, y, z, t) = i ωm em 20 ¯r L × { Am (z) f m (x, y) exp[i(kmz z − ωm t)] − c. c.} , where Am is the amplitude of the mth mode, f m means the transverse eigenfunction of the mth mode, em stands for the polarization vector, ωm denotes the frequency, and km is the wave vector of the mth mode. The mean permittivity of the waveguide is denoted as ¯r , 0 stands for the vacuum permittivity, is the reduced Planck constant, and L is the length of the structure. The abbreviation c. c. stands for complex-conjugated terms. The function f m (x, y) is a solution of the equation 2 f m (x, y) + μ0 ¯r ωm2 f m (x, y) = 0. ∇T2 f m (x, y) − kmz At present, one has not yet realized a perfect (or an imperfect) quantization. The function f m (x, y) is normalized, has the property | f m (x, y)|2 dx dy = 1. The electric-field amplitude E ≡ E(r, t) satisfies the wave equation inside the waveguide ∇ 2 E − μ0 r ∂ 2E ∂ 2 Pnl = μ , ∂t 2 ∂t 2 where 0 r stands for the permittivity of the waveguide, μ denotes the vacuum permeability, and Pnl describes the nonlinear polarization of the medium. The relative permittivity r (r) can be written as follows r (x, y, z) = ¯r (x, y) + Δεr (x, y, z). 6 Periodic and Disordered Media Here Δεr (x, y, z) are small variations of the permittivity related to the corrugation. These variations decompose into harmonic functions Δεr (x, y, z) = 2π εq (x, y) exp iq z , ε0 (x, y) = 0, Λl q=−∞ ∞ εq (x, y) are coefficients of the decomposition and Λl is the spatial period of the grating. The polarization Pnl of the medium is determined using the second-order susceptibility tensor χ, Pnl = 0 χ : EE, where : denotes a contraction, i.e. a double sum to be carried out after the tensors are replaced by their components and products of the corresponding components are formed. On the assumption of three monochromatic components, a substitution of the amplitude (6.203) into the wave equation (6.206) provides three coupled Helmholtz equations for these components. We consider two directions of propagation for each of the monochromatic components. We have six modes: the signal forward-propagating mode (with amplitude AsF ), the signal backward-propagating mode (AsB ), the idler forward-propagating mode (AiF ), the idler backwardpropagating mode ( AiB ), the pump forward-propagating mode ( ApF ) and, finally, the pump backward-propagating mode ( ApB ). In the framework of the coupled-mode theory, we represent each of the three Helmholtz equations with two ordinary differential equations for the amplitudes dAsF dz dAiF dz dAsB dz dAiB dz dApF dz dApB dz = iK s exp(−iδs z) AsB + K F exp(iδF z) ApF A∗iF , = iK i exp(−iδi z) AiB + K F exp(iδF z) ApF A∗sF , = −iK s∗ exp(iδs z) AsF − K B exp(−iδB z) ApB A∗iB , = −iK i∗ exp(iδi z) AiF − K B exp(−iδB z) ApB A∗sB , = iK p exp(−iδp z) ApB − K F∗ exp(−iδF z) AsF AiF , = −iK p∗ exp(iδp z) ApF + K B∗ exp(iδB z) AsB AiB , where K p = 0 and δa = |kaF z | + |kaB z | − δl , a = s, i, 2π , δl = Λl δb = |kpb z | − |ksb z | − |kib z |, b = F, B. Corrugated Waveguides The linear coupling constants K s and K i are given as Ka = μ0 ωa2 2|kaF z | ε1 (x, y) f a∗F (x, y) f aB (x, y) dx dy, a = s, i, where we have assumed that |kaF z | = |kaB z |. The expressions for the nonlinear coupling constants K F and K B are μ0 ωs ωp ωi ωs .. f pb (x, y) f i∗b (x, y) f s∗b (x, y) dx dy χ .ep ei es Kb = |ksb z | 20 ¯r L μ0 ωi ωp ωs ωi .. f pb (x, y) f s∗b (x, y) f i∗b (x, y) dx dy χ .ep es ei = |kib z | 20 ¯r L μ0 ωp ωs ωi ωp .. f s∗b (x, y) f i∗b (x, y) f pb (x, y) dx dy, (6.213) χ .es ei ep = |kpb z | 20 ¯r L . where .. denotes a contraction, i.e. a treble sum to be carried out after the tensors are replaced by their components and products of the corresponding components are ω formed. We have assumed that |kωs sz | ≈ |kωi iz | ≈ |kp pz | . b b b The dependencies of the solutions to equations (6.210) on the boundary data AaF (0), AaB (L), a = s, i, p (of the solutions to the boundary-value problem) can be considered as classical input–output relations. These are significant for investigation of the effect of stochastic boundary data. Approximate results can be obtained by considering variations of the classical solutions δ Aab , a = s, i, p, b = F, B. The variations verify linear equations dδ AsF dz dδ AiF dz dδ AsB dz dδ AiB dz dδ ApF dz dδ ApB dz = iK s exp(−iδs z)δ AsB + K F exp(iδF z)δ(ApF A∗iF ), = iK i exp(−iδi z)δ AiB + K F exp(iδF z)δ(ApF A∗sF ), = −iK s∗ exp(iδs z)δ AsF − K B exp(−iδB z)δ(ApB A∗iB ), = −iK i∗ exp(iδi z)δ AiF − K B exp (−iδB z)δ(ApB A∗sB ), = iK p exp(−iδp z)δ ApB − K F∗ exp(−iδF z)δ(AsF AiF ), = −iK p∗ exp(iδp z)δ ApF + K B∗ exp(iδB z)δ(AsB AiB ), where the variations δ(X Y ) are to be further transformed using the Leibniz formula δ(X Y ) = Y δ X + X δY . These or the resulting equations do not depend on whether or not solutions to the boundary-value problem for (6.210) or those to the problem with the initial data AaF (0), AaB (0), which seems to be easy, are the case. 6 Periodic and Disordered Media The consideration of variations leads to an approximate quantization. The input– output relations for variations are linear and, on certain conditions, they may be ˆ ab , a = s, i, p, interpreted even as input–output relations for quantum corrections δ A b = F, B. Let us suppose that we first express a solution of the initial-value problem for (6.214). On introducing the notation ⎞ δ Asb (z) ⎜ δ A∗s (z) ⎟ b ⎟ ⎜ ⎜ δ Aib (z) ⎟ ⎟ δAb (z) = ⎜ ⎜ δ A∗i (z) ⎟ , b = F, B, b ⎟ ⎜ ⎝ δ Apb (z) ⎠ δ A∗pb (z) ⎛ this solution is δAF (L) δAB (L) VFF (L) VFB (L) VBF (L) VBB (L) δAF (0) . δAB (0) Then only we determine a solution of the boundary-value problem as δAF (L) δAB (0) δAF (0) , δAB (L) where U= −1 −1 VFF (L) − VFB (L)VBB (L)VBF (L) VFB (L)VBB (L) . −1 −1 (L)VBF (L) VBB (L) −VBB The input–output relations for quantum corrections are δ Aˆ F (L) δ Aˆ B (0) δ Aˆ F (0) , δ Aˆ B (L) where δ Aˆ b (z), b = F, B, are given by relation (6.215), but with operators instead of the classical variables. The complex conjugates are replaced with the Hermitian ones. Nonclassical properties of the device are assessed by the operators ˆ ab (z), z = 0, L . ˆ ab (z) = Aab (z)1ˆ + δ A A Let the input operators, the components of the vectors δ Aˆ F (0) and δ Aˆ B (L), fulfil the usual boson commutation relations. In other words, the first, the third, and the fifth component are annihilation operators, the second, the fourth, and the sixth one are creation operators. Then also the output operators, the components of the vectors δ Aˆ F (L) and δ Aˆ B (0), obey the boson commutation relations. Corrugated Waveguides A sufficient condition for it to be possible to carry out this approximate quantization is the existence of a suitable momentum function G int (z) and the expression of the equations (6.210) in the form i dX = [X, G int ]. dz Here X (as well as Y in what follows) is any function of the variables Aab , Aa∗b , and G int (z) is a real function of these variables and of the coordinate z. With respect to a complicated expression of the bracket [X, Y ], operators are written about and the bracket has the usual meaning of a commutator in the paper (Peˇrina, Jr., et al. 2004). We will prefer the classical variables and 0 1 ∂Y ∂ X ∂ X ∂Y − [X, Y ] = ∂ AaF ∂ Aa∗F ∂ AaF ∂ Aa∗F a=s,i,p 0 1 ∂Y ∂ X ∂ X ∂Y − − . (6.222) ∂ AaB ∂ Aa∗B ∂ AaB ∂ Aa∗B a=s,i,p The appropriate momentum function G int (z) is G int (z) = K s exp(iδs z) A∗sF AsB + K i exp(iδi z) A∗iF AiB + K p exp(iδp z) A∗pF ApB + c. c. − iK F exp(iδF z) ApF A∗sF A∗iF + iK B exp(−iδB z) ApB A∗sB A∗iB + c. c. . (6.223) In such a case, a search for conservation laws is facilitated. Equations (6.210) have the properties d | AsF |2 + | ApF |2 − | AsB |2 − | ApB |2 = 0, dz d | AiF |2 + | ApF |2 − | AiB |2 − | ApB |2 = 0. dz Peˇrina, Jr., et al. (2005) pay attention also to the production of the longitudinal confinement of the pump mode(s) through the corrugation. The equations (6.210) are utilized in full generality; the restriction K p = 0 has been lifted. The explanations (6.211) are completed with another one, δp = |kpF z | + |kpB z | − 2δl . The linear coupling constant K p is expressed according to (6.212), which is completed with a = p, and we have assumed that |kpF z | = |kpB z |. Peˇrina, Jr., et al. (2007) study degenerate optical parametric processes, namely second-harmonic and second-subharmonic generation. They have considered anisotropy of the waveguide. 6 Periodic and Disordered Media We present the decomposition of the electric-field amplitude related to photons of the respective modes E(x, y, z, t) = i ωm 20 L 1 × Am (z)¯ − 2 · fm (x, y) exp[i(kmz z − ωm t)] − c. c. , where fm is the vector-valued transverse eigenfunction of the mth mode. The mean permittivity tensor of the waveguide is denoted as ¯ . The vector-valued function fm (x, y) is a solution of the equation (I∇T2 − ∇T ∇T ) · fm (x, y) − iβm ∇T ez · fm (x, y) − iβm ez ∇T · fm (x, y) −βm2 (I − ez ez ) · fm (x, y) + μ0 ¯ ωm2 · fm (x, y) = 0. The function fm (x, y) is normalized; it has the property f∗m (x, y) · fm (x, y) dx dy = 1. The electric-field amplitude E(r, t) satisfies the wave equation inside the waveguide − ∇ × (∇ × E) − μ0 · ∂ 2E ∂ 2 Pnl = μ , ∂t 2 ∂t 2 where stands for the permittivity tensor of the waveguide. Every spectral component of the permittivity tensor (r, ω) can be written as follows (x, y, z, ω) = ¯ (x, y, ω) + Δε(x, y, z, ω). Here Δε(x, y, z, ω) are small variations of the permittivity tensor related to the corrugation. These variations decompose into tensor-valued harmonic functions 2π Δε(x, y, z, ω) = εq (x, y, ω) exp iq z , ε 0 (x, y, ω) = 0(1) , Λ l q=−∞ ∞ where εq (x, y, ω) are coefficients of the decomposition, and 0(1) is the second-rank zero tensor. The polarization Pnl (r, t) of the medium is determined using the secondorder susceptibility tensor χ (r), Pnl (r, t) = 0 χ (r) : E(r, t)E(r, t). Corrugated Waveguides A spectral component χ˜ (r, −2ω; ω, ω) of the second-order susceptibility can be expressed as χ˜ (x, y, z, −2ω; ω, ω) = 2π z , χ˜ q (x, y, −2ω; ω, ω) exp iq Λnl q=−∞ ∞ where Λnl describes the period of a possible periodical poling of the nonlinear material. On the assumption of two monochromatic components, a substitution of the amplitude (6.226) into the wave equation (6.229) provides two coupled Helmholtz equations for these components. We consider two directions of propagation for each of the monochromatic components. We have four modes: the signal forwardpropagating mode (with amplitude AsF ), the signal backward-propagating mode (AsB ), the pump forward-propagating mode ( ApF ), and, finally, the pump backwardpropagating mode ( ApB ). In the framework of the coupled-mode theory, we represent each of the two Helmholtz equations with two ordinary differential equations for the amplitudes dAsF dz dAsB dz dApF dz dApB dz = iK s exp(−iδs z) AsB + 2K F,q exp(iδF z) ApF A∗sF , = −iK s∗ exp(iδs z) AsF − 2K B,q exp(−iδB z) ApB A∗sB , ∗ exp(−iδF z) A2sF , = iK p exp(−iδp z) ApB − K F,q ∗ exp(iδB z) A2sB , = −iK p∗ exp(iδp z) ApF + K B,q where δa = |βaF | + |βaB | − δl , a = p, s, 2π , b = F, B. δb,q = |βpb | − 2|βsb | + q Λnl The linear coupling constants K p and K s are given as μ0 ωa2 Ka = 2|βa F | fa∗F (x, y) · ε 1 (x, y, ωa ) · faB (x, y) dx dy, a = p, s, where βa F |ez · faF (x, y)|2 dx dy = βaF − βaF + Im [∇T · fa∗F (x, y)][ez · faF (x, y)] dx dy, 6 Periodic and Disordered Media and we have assumed that |βa F z | = |βa B z |. The expressions for the nonlinear coupling constants K F and K B are μ0 ωs ωp ωs ωs 1 [¯ − 2 (ωp ) · fpb (x, y)] · χ˜ q (x, y, −ωp ; ωs , ωs ) K b,q = 2|βsb | 20 L :[¯ 2 (ωs ) · f∗sb (x, y)][¯ − 2 (ωs ) · f∗sb (x, y)] dx dy μ0 ωp ωs ωs ωp 1 [¯ 2 (ωp ) · fpb (x, y)] · χ˜ q (x, y, −ωp ; ωs , ωs ) = 2|βpb | 20 L 1 :[¯ − 2 (ωs ) · f∗sb (x, y)][¯ − 2 (ωs ) · f∗sb (x, y)] dx dy, 1 where we have assumed that |βωs | ≈ |β p | . sb pb The dependencies of the solutions to the equations (6.234) on the boundary data AaF (0), AaB (L), a = s, p (of the solutions to the boundary-value problem) can be considered as classical input–output relations. These are significant for the study of stochastic boundary data. Approximate results can be obtained by considering variations of the classical solutions δ Aab , a = s, p, b = F, B. The variations satisfy linear equations dδ AsF dz dδ AsB dz dδ ApF dz dδ ApB dz = iK s exp(−iδs z)δ AsB + 2K F,q exp(iδF z)δ(ApF A∗sF ), = −iK s∗ exp(iδs z)δ AsF − 2K B,q exp(−iδB z)δ(ApB A∗sB ), ∗ exp(−iδF z)δ(A2sF ), = iK p exp(−iδp z)δ ApB − K F,q ∗ exp(iδB z)δ(A2sB ). = −iK p∗ exp(iδp z)δ ApF + K B,q The consideration of variations leads to an approximate quantization. The input– output relations for variations are linear and, as in the previous case, they may lead ˆ ab , a = s, p, b = F, B. to input–output relations for quantum corrections δ A Let us suppose that we first express a solution of the initial-value problem for (6.239). On introducing notation ⎞ ⎛ δ Asb (z) ⎜ δ A∗s (z) ⎟ b ⎟ (6.240) δAb (z) = ⎜ ⎝ δ Apb (z) ⎠ , b = F, B, ∗ δ Apb (z) this solution becomes (6.216). Thereafter we determine a solution of the boundaryvalue problem as (6.217), with (6.218). The input–output relations for quantum corrections are (6.219), where δ Aˆ b (z), b = F, B, are given by relation (6.240), but with operators instead of the classical variables. The complex conjugates are replaced with the Hermitian ones. Nonclassical behaviour of a process is assessed by the operators (6.220). Corrugated Waveguides In this case, the equations (6.234) have the form (6.221), with 0 1 ∂ X ∂Y ∂Y ∂ X [X, Y ] = − ∂ AaF ∂ Aa∗F ∂ AaF ∂ Aa∗F a=s,p 0 1 ∂Y ∂ X ∂ X ∂Y − . − ∂ AaB ∂ Aa∗B ∂ AaB ∂ Aa∗B a=s,p The appropriate momentum function G int (z) is G int (z) = K s exp(iδs z) A∗sF AsB + K p exp(iδp z) A∗pF ApB + c. c. ∗2 − iK F exp(iδF z) ApF A∗2 sF + iK B exp(−iδB z) ApB AsB + c. c. . (6.242) Equations (6.234) have the property d |AsF |2 + 2|ApF |2 − | AsB |2 − 2|ApB |2 = 0. dz 6.2.2 Coupled-Mode Theory Including Gain or Losses We assume a monochromatic wave propagating in a waveguide in the form E(x, y, z, t) = Am (z)E m (x, y) exp[i(ωt − kmz z)], (6.244) m where Am (z), dzd Am (z) = 0, is the amplitude of the mth mode, ω is a frequency, and kmz are propagation constants (components along the direction of propagation of the wave vector of each mode). Let us note the difference from relation (6.203), where the wave is real. These eigenmodes have the electric vectors and the magnetic vectors of the form Em (x, y, z, t) = E m (x, y) exp[i(ωt − kmz z)], Hm (x, y, z, t) = Hm (x, y) exp[i(ωt − kmz z)], respectively. In fact, the fields are real, and they must be recovered as 12 [Em (x, y, z, t) +E∗m (x, y, z, t)], etc. It is understood that counterpropagating modes are orthogonal. The normalization and the orthogonality property of copropagating modes are expressed by the relation vgm 1 (6.246) (Ek × H∗m ) · ez dx dy = ωδkm , 2 L where ez is the unit vector in the direction of the z-axis, vgm is the group velocity, ∂ω |kz =kmz , L is a quantization length, and is the reduced Planck constant. vgm = ∂k z The arguments of Ek and H∗m have been omitted for convenience. The treatment 6 Periodic and Disordered Media will be restricted to dielectric structures, which consist of pieces of homogeneous and isotropic materials, or those with a small gradient of the refractive index. Then in (6.245), E m (x, y) and Hm (x, y) obey the vectorial wave equation ∂2 ∂2 2 2 + + ω μ¯ (x, y) − k mz U m (x, y) = 0, ∂x2 ∂ y2 where μ is the magnetic permeability and ¯ (x, y) = 0 ¯r (x, y). In the case of a piecewise homogeneous dielectric structure, equation (6.247) is valid separately in each of homogeneous domains. So the field must be determined separately in every domain and, then, the tangential components of the fields must be joined at each of the interfaces. Another important boundary condition for waveguide modes is the vanishing of the field amplitudes at infinity. For the boundary conditions to be satisfied at all the points of the interfaces between homogeneous media, the paraxial propagation constant kmz must be the same in the whole waveguide structure (Yariv and Yeh 1984). For definiteness, we will describe a planar waveguide and put Um = Em. The solutions should be determined in the core (guiding region), which we designate as D. We let C denote the boundary of D, which are two straight lines parallel to the y-axis. The quantity ¯r (x, y) is independent of y, and the solutions that are independent of y are looked for. The transverse electric (TE) modes and the transverse magnetic (TM) modes are distinguished. The TE modes have Emx (x, y) = Emz (x, y) = Hmy (x, y) = 0, where the first component is included by the definition of these modes, the second one is present here due to the condition ∇ · E = 0, and the third one follows from a Maxwell equation. The TM modes have Hmx (x, y) = Hmz (x, y) = Emy (x, y) = 0, where the first component is included by the definition of these modes, the second one is present here due to the condition ∇ · H = 0, and the third one follows from a Maxwell equation. For suitable kmz , the TE mode boundary conditions are the continuity requirements on Emy (x, y)|C and Hmz (x, y)|C . The TM mode boundary conditions are related to Hmy (x, y)|C and Emz (x, y)|C . Equation (6.247) is solved in the whole x–y plane excepting the point set C perhaps. We have neglected a term ∇ (∇ · E), which is justified if the change of the quantity ¯0 (x, y) over a wavelength is small. In the case of TE modes of planar dielectric waveguides ∇ · E = 0, since ∇ · (E) = ∇ · E + E · ∇ = 0, where E · ∇ = 0. In fact, has a jump in the x-direction, whereas E x vanishes. This implies that equation (6.247) is exact at C. In the case of TM modes of a planar waveguide, equation (6.247) does not hold at the interface C. In some cases, it is useful to consider complex dielectric permittivity. In this generalization, the definition of unperturbed modes is based on Re{¯r (x, y)}. Respecting Corrugated Waveguides it, the replacement of ¯ (x, y) by Re{¯r (x, y)} should be made, where it is appropriate. Particularly, relation (6.207) becomes r (x, y, z) = Re{¯r (x, y)} + Δεr (x, y, z). Now 2π εq(Re) (x, y) exp iq z , ε0(Re) (x, y) = 0, Λl q=−∞ ∞ 2π εq(Im) (x, y) exp iq z , Im{Δεr (x, y, z)} = Λl q=−∞ Re{Δεr (x, y, z)} = ε0(Im) (x, y) = Im{¯r (x, y)}. In the following, we restrict ourselves to the TE modes. Here E(x, y, z, t) (Em (x, y), Hm (x, y)) will be a shorthand for the component E y (x, y, z, t) (Emy (x, y), Hmy (x, y)) along the direction of y. The orthogonality condition of the modes reads ∗ ∗ |vgm | 2ω2 μ Em |Ek = (6.253) Em∗ (x, y) · Ek (x, y) dx dy = δkm . L |kmz | With respect to the quantum treatment, we introduce the negative- and positivefrequency parts 1 E(x, y, z, t), 2 E (+) (x, y, z, t) = [E (−) (x, y, z, t)]∗ , E (−) (x, y, z, t) = the corresponding envelope Am (z) = 1 ∗ A (z), 2 m and we note that relation (6.244) can be rewritten as E (+) (x, y, z, t) = Am (z)Em∗ (x, y) exp[−i(ωt − kmz z)]. This classical field is an eigenfunction (rather, eigenvalue that depends on parameters), , * , * , , 0 0 (+) (+) , ˆ E (x, y, z, t) , Am = E (x, y, z, t) ,, Am , (6.257) L L ˆ (+) ,where E (x, y, z, t) is the positive-frequency part of the electric strength operator, , 0 is a coherent state, 0 (L) corresponds to kmz > 0 (kmz < 0), and L is , Am L the length of the optical device. We consider expansion (6.256) with E (+) (x, y, z, t) ˆ m (z). replaced by Eˆ (+) (x, y, z, t) and Am (z) replaced by A 6 Periodic and Disordered Media From the viewpoint of the integrated optics, the interaction of modes or coupling of modes has been interesting and also the possibility of quantization is attractive. Here we have touched even the impossibility of quantization. Whenever a linear relation between pairs of complex amplitudes is appropriate (a linear canonical transformation), quantization is possible and the complex amplitudes can be interpreted as operators. In the literature (Milburn et al. 1984) concerning the nonlinear processes, it has been shown that such operators do not obey the usual commutation relations. In this case, we assert that the quantization has not been accomplished. Nevertheless, in this study, we interpret the input–output relations as quantal. A special use of quantum-mechanical descriptions has been mentioned in Section 3.1.4 already. The full Heisenberg–Langevin approach has been utilized, which is based on the concept of a quantum reservoir. We classify reservoirs as forward- and backward-propagating ones, even though it is not quite usual to combine these terms. But we do not utilize the concept of a quantum reservoir. We rely on the concept of quasi-continuous measurements (Ban 1994). It is sufficient to know that the quasi-continuous measurement is realized using a system of lossless beam splitters or using a system of parametric amplifiers. A transition to a continuous limit is possible, and the differential equation of its description coincides with the master equation for the description of a single mode obtained on an elimination of the quantum reservoir (Peˇrinov´a and Lukˇs 2000). Independent of this, a return to the classical description is possible provided that the quantum measurements are not studied. The system of lossless beam splitters represents an “attenuator”, since the energy of the light mode is by parts reflected by the beam splitters to detectors. The system of parametric amplifiers represents an “amplifier”, since photons of pump beam are converted to photon-twin pairs. One of each pair is directed to the detector, and the other supplies energy to the light mode going through aligned axes of nonlinear crystals. Similarly, repeated nondemolition measurements can be implemented. Another idea, which however requires the replacement of the time variable by the space variable, is the measurement of an observable of a cavity mode using Rydberg atoms. Literature is devoted to the useful case of repeated nondemolition measurements, but we image easily also an attenuator similar to the system of lossless beam splitters and an amplifier similar to the system of parametric amplifiers. This idea allows the transition to a continuous limit as well. Different frequencies of a quantum reservoir are replaced with different times when the atoms interact with the cavity mode. On the change of the time variable by the space variable, we describe the physical fact that the guided mode is in short but densely distributed segments of a waveguide coupled to the sources or sinks of the energy. We will approach the distributed feedback laser (Yariv and Yeh 1984, Toren and Ben Aryeh 1994) as a quantum amplifier. In the usual coupled-mode theory, it is assumed that the perturbation Δεr (x, y, z) of the dielectric permittivity is real, but the presence of a small gain can be also considered a perturbation and then Δεr (x, y, z) is to be held for a complex quantity. Describing a lossy medium, one assumes that the imaginary part of Δεr (x, y, z) is negative, but the gain medium exhibits a positive imaginary part of Δεr (x, y, z). We assume that modes 1 and 2 are Corrugated Waveguides coupled, and we let k1z , k2z denote the z-components of the respective wave vectors. We will treat the particular case when k1z ≈ lπλ , i.e., mode 1 is strongly coupled with the backward-propagating mode 2, k2z = −k1z . The classical description is based on the differential equations. We complex conjugate the differential equations of classical description (Yariv and Yeh 1984) and replace A∗j (z) by A j (z). This results in the differential equations γ d A1 (z) = iκ ∗ A2 (z) exp(2iδz) + A1 (z), dz 2 γ d A2 (z) = −iκA1 (z) exp(−2iδz) − A2 (z), dz 2 where 0 L (Re) E ∗1 (x, y) · ε−l (x, y)E 1 (x, y) dx dy, κ= 4|vg | 1 lπ δ = (k1z − k2z ) − , 2 λ 0 L ∗ γ = E 1 (x, y) · Im{¯r (x, y)}E 1 (x, y) dx dy, 2|vg | with vg the group velocity of light. The input–output relations are given as A1 (L) = u 11 (L)A1 (0) + u 12 (L)A2 (L), A2 (0) = u 21 (L)A1 (0) + u 22 (L)A2 (L). Here u jk (L), j, k = 1, 2, are given by generalized relations from (Peˇrinov´a et al. 1991), −1 Δ , u 11 (L) = exp (iδL) cosh(DL) + i sinh(DL) D κ∗ u 12 (L) = i exp (i2δL) sinh(DL) D −1 Δ , × cosh(DL) + i sinh(DL) D −1 κ Δ u 21 (L) = i sinh(DL) cosh(DL) + i sinh(DL) , D D (6.261) u 22 (L) = u 11 (L), with D= + γ |κ|2 − Δ2 , Δ = δ + i . 2 The quantization of the classical description in the case of no gain can be accomplished in the spatial Heisenberg picture using equations (6.260). The complex 6 Periodic and Disordered Media ˆ j (z), j = 1, 2, amplitudes A j (z) are replaced by the annihilation operators A z = 0, L. In general, the spatial Schr¨odinger picture is appropriate. Any input statistical operator ρ(0) ˆ evolves to the output operator ρ(L). ˆ In contrast, the operˆ 2 (L) do not change and that is why we abbreviate A ˆ1 ≡ A ˆ 1 (0), ˆ 1 (0) and A ators A ˆA2 ≡ A ˆ 2 (L) here. The statistical properties of the input and output fields are described equivalently by the characteristic functions. The definition of these functions depends on the ordering of field operators. We choose the antinormal ordering, which is suitable for the amplifier. The antinormal characteristic functions of the input and output state, respectively, are CA (β1 , β2 , 0) ˆ 1 − β2∗ A ˆ 2 ) exp(β1 A ˆ † + β2 A ˆ †) , = Tr ρ(0) ˆ exp(−β1∗ A 1 2 CA (β1 , β2 , L) ˆ 1 − β2∗ A ˆ 2 ) exp(β1 A ˆ † + β2 A ˆ †) . = Tr ρ(L) ˆ exp(−β1∗ A 1 2 The antinormal characteristic function for the output can be defined taking into account the coefficients u jk (L), j, k = 1, 2, in (6.261), ˆ CA (β1 , β2 , L) = Tr ρ(0) ˆ 1 − β1∗ u 12 (L) + β2∗ u 22 (L) A ˆ2 × exp − β1∗ u 11 (L) + β2∗ u 21 (L) A ˆ † + β1 u ∗12 (L) + β2 u ∗22 (L) A ˆ† × exp β1 u ∗11 (L) + β2 u ∗21 (L) A 1 2 ∗ ∗ ∗ ∗ (6.264) = CA β1 u 11 (L) + β2 u 21 (L), β1 u 12 (L) + β2 u 22 (L), 0 . Here we have reduced an alternative definition to a substitution into the characteristic function for the input. We may conclude with the inversion of the second equation in (6.263), 1 ˆ † + β2 A ˆ †) CA (β1 , β2 , L) exp(β1 A ρ(L) ˆ = 2 1 2 π ∗ˆ ∗ˆ 2 2 (6.265) × exp(−β1 A1 − β2 A2 ) d β1 d β2 , where d2 β j =d(Re {β j })d(Im {β j }), j = 1, 2. The above procedure is a completely positive map. We will not present a proof, which can be established similarly as below in the case of attenuator. With the characteristic functions, the quasidistributions are associated related to the same ordering 1 CA (β1 , β2 , 0) exp(A1 β1∗ + A2 β2∗ − c. c.) d2 β1 d2 β2 , ΦA (A1 , A2 , 0) = 4 π 1 CA (β1 , β2 , L) exp(A1 β1∗ + A2 β2∗ − c. c.) d2 β1 d2 β2 . ΦA (A1 , A2 , L) = 4 π (6.266) Corrugated Waveguides The relationships between the characteristic functions become the relationships between the quasidistributions. The latter are more complicated (one needs elements of the inverse to the matrix U (L) consisting of the elements u jk (L), j, k = 1, 2, and its determinant |U (L)|). According to our statement below the relation (6.262), when γ = 0, the quantization can be accomplished in the Heisenberg picture. Relationships between statistical operators may be intricate sometimes. Then we can adopt the Heisenberg– Langevin approach. We can formally define an amplifier to be a device which can be described with input–output relations ˆ 1 (0) + u 12 (L) A ˆ 2 (L) + M ˆ 1 (L), ˆ 1 (L) = u 11 (L) A A ˆ 2 (0) = u 21 (L) A ˆ 1 (0) + u 22 (L) A ˆ 2 (L) + M ˆ 2 (L), A ˆ j (L), j = 1, 2, are integrated quantum-noise terms. A characteristic propwhere M erty of the amplifier by the definition is that the commutator matrix 1 0 ˆ † (L)] [ M ˆ 1 (L), M ˆ † (L)] ˆ 1 (L), M 00 ˆ [M 1 2 ≤ 1, (6.268) ˆ 2 (L), M ˆ † (L)] [ M ˆ 2 (L), M ˆ † (L)] 00 [M 1 2 i.e., both its eigenvalues, when 1ˆ is factored out, are nonpositive. It means that we ˆ † (L), j = 1, 2, k = 3, 4, such that can find numbers u jk (L) and creation operators A k ˆ 1 (L) = u 13 (L) A ˆ † (0) + u 14 (L) A ˆ † (L), M 3 4 ˆ † (0) + u 24 (L) A ˆ † (L). ˆ 2 (L) = u 23 (L) A M 3 4 It is required that the commutation relations between input annihilation and creation operators (6.305) below and those between such operators related to the expression (6.269), ˆ ˆ † (0)] = [ A ˆ 4 (L), A ˆ † (L)] = 1, ˆ 3 (0), A [A 3 4 could be rewritten as those between annihilation and creation output operators in (6.305). It is assumed that the operators of distinct modes mutually commute. Instead of proving that the procedure for the derivation of the output statistical operator from the input one is a completely positive map, we can find expressions for integrated quantum-noise terms and prove the characteristic property of the amplifier (6.268). In Yariv and Yeh (1984), much attention is paid to the regime of light generation, defined by the condition cosh(DL) + i Δ sinh(DL) = 0. D In the explicit expression of the coefficients u jk (L), j = 1, 2, in (6.261), the division by zero occurs. It is obvious that the model is only tentative; for instance, the effect of saturation can be described by a nonlinear model, while the model under discussion is linear. 6 Periodic and Disordered Media As the reciprocity theorem is derived on the assumption of the lossless medium, the attenuator has not been elaborated in the coupled-mode theory, contrary to the amplifier. But it is rather reasonable to assume that a perturbation Δεr (x, y, z) is a complex quantity with a negative imaginary part. Using the same procedure as in the case of amplifier, we arrive at the differential equations γ d A1 (z) = iκ ∗ A2 (z) exp(2iδz) − A1 (z), dz 2 γ d A2 (z) = −iκA1 (z) exp(−2iδz) + A2 (z), dz 2 where κ is defined in (6.259) and 0 L E ∗1 (x, y) · Im{¯r (x, y)}E 1 (x, y) dx dy. γ =− 2|vg | Here we expound the usual approach (the coherent-state technique), which has been implicitly used also in the previous exposition. The solution of an initial problem for (6.272) can be obtained in the form A1 (z) = v11 (z)A1 (0) + v12 (z)A2 (0), A2 (z) = v21 (z)A1 (0) + v22 (z)A2 (0), where v jk (L), j, k = 1, 2, are given by generalized relations from Peˇrinov´a et al. (1991), Δ v11 (z) = exp (iδz) cosh(Dz) − i sinh(Dz) , D ∗ κ v12 (z) = i exp (iδz) sinh(Dz), D κ v21 (z) = −i exp (−iδz) sinh(Dz), D Δ v22 (z) = exp (−iδz) cosh(Dz) + i sinh(Dz) , (6.275) D with D= + γ |κ|2 − Δ2 , Δ = δ − i . 2 The input–output relations have the form (6.260), where u jk (L), j, k = 1, 2, are given in (6.261) except Δ, Δ = δ − i γ2 . The case of no losses coincides with the case of no gain, when the quantization is possible as mentioned in the previous exposition. In general, the Schr¨odinger picture can be recommended. With respect to (6.263), the input and output operators are denoted as ρ(0) ˆ and ρ(L), ˆ respectively. In the description, the normal ordering is used to simplify the form. The normal characteristic functions of the input and output states, respectively, are Corrugated Waveguides ˆ † + β2 A ˆ † ) exp(−β1∗ A ˆ 1 − β2∗ A ˆ 2) , CN (β1 , β2 , 0) = Tr ρ(0) ˆ exp(β1 A 1 2 ˆ † + β2 A ˆ † ) exp(−β1∗ A ˆ 1 − β2∗ A ˆ 2 ) .(6.277) ˆ exp(β1 A CN (β1 , β2 , L) = Tr ρ(L) 1 2 Intuitively, the normal characteristic function is CN (β1 , β2 , L) = Tr ρ(0) ˆ ˆ † + β1 u ∗12 (L) + β2 u ∗22 (L) A ˆ† × exp β1 u ∗11 (L) + β2 u ∗21 (L) A 1 2 ˆ 1 − β1∗ u 12 (L) + β2∗ u 22 (L) A ˆ2 × exp − β1∗ u 11 (L) + β2∗ u 21 (L) A (6.278) = CN β1 u ∗11 (L) + β2 u ∗21 (L), β1 u ∗12 (L) + β2 u ∗22 (L), 0 . Here we have expressed the statistical properties of the output through the normal characteristic function for the inputs. The procedure concludes with the inversion of the second equation in (6.277), 1 ˆ 1 − β2∗ A ˆ 2) CN (β1 , β2 , L) exp(−β1∗ A ρ(L) ˆ = 2 π ˆ † + β2 A ˆ † ) d2 β1 d2 β2 . × exp(β1 A (6.279) 1 We assert that we have defined a completely positive map. We will provide a proof of this proposition in what follows. Meanwhile, we remark that for some states, there also exist quasidistributions related to the normal ordering as ordinary functions ΦN (A1 , A2 , 0) = 1 π 4 × CN (β1 , β2 , 0) exp(A1 β1∗ + A2 β2∗ − c. c.) d2 β1 d2 β2 , ΦN (A1 , A2 , L) = 1 π 4 × CN (β1 , β2 , L) exp(A1 β1∗ + A2 β2∗ − c. c.) d2 β1 d2 β2 . (6.280) The relationships between the statistical operators may be involved sometimes. In response, we can adopt the Heisenberg–Langevin approach. We can formally define an attenuator to be a device which can be described with the input–output relations (6.267). A characteristic property of the attenuator by the definition is that the commutator matrix 0 ˆ † (L)] [ M ˆ 1 (L), M ˆ † (L)] ˆ 1 (L), M [M 1 2 ˆ 2 (L), M ˆ † (L)] [ M ˆ 2 (L), M ˆ † (L)] [M 1 2 00 ˆ 1, 00 6 Periodic and Disordered Media i.e., both its eigenvalues (let 1ˆ be factored out) are nonnegative. It means that we can ˆ k (L), j = 1, 2, k = 3, 4, such that find numbers u jk (L) and annihilation operators A ˆ 3 (0) + u 14 (L) A ˆ 4 (L), ˆ 1 (L) = u 13 (L) A M ˆ 2 (L) = u 23 (L) A ˆ 3 (0) + u 24 (L) A ˆ 4 (L). M It is required that the commutation relations (6.305) between input annihilation and creation operators and the relations (6.270) related to the expression (6.282) could be rewritten as those between annihilation and creation output operators in (6.311). The operators of different modes mutually commute, the input operators with the input ones and the output operators with the output ones. In place of proving that the procedure for derivation of the output statistical operator from the input one is a completely positive map, we can find expressions for integrated quantum noise terms and demonstrate the characteristic property of the attenuator (6.281). (i) Expressions for the integrated quantum-noise terms In order to take into account losses, we use the extended differential equations d A1 (z) = iκ ∗ A2 (z) exp(2iδz) − iG 1 (z, z), dz d A2 (z) = −iκA1 (z) exp(−2iδz) + iG 2 (z, z), dz d G 1 (ζ, z) = −iγ δ(ζ − z)A1 (z), dz d G 2 (ζ, z) = iγ δ(ζ − z)A2 (z), dz where G j (ζ, z), j = 1, 2, is a continuum of modes, which are right- and left-going in dependence on j = 1 and j = 2, respectively. In terms of these modes, losses are modelled. The detail that the losses of each mode A j (z), j = 1, 2, are modelled by coupling with modes propagating in the same direction is not essential, but it has been chosen for simplicity. Quite a novel thing in this description is that the coupling of the mode G j (ζ, z) is concentrated into the position z = ζ . Solving (6.284) for complex amplitudes G j (ζ, z), j = 1, 2, we obtain that ⎧ for ⎨ G 1 (ζ, 0) G 1 (ζ, z) = G 1 (ζ, 0) − i γ2 A1 (ζ ) for ⎩ G 1 (ζ, 0) − iγ A1 (ζ ) for ⎧ ⎨ G 2 (ζ, 0) + iγ A2 (ζ ) for G 2 (ζ, z) = G 2 (ζ, 0) + i γ2 A2 (ζ ) for ⎩ for G 2 (ζ, 0) z < ζ, z = ζ, z > ζ, z > ζ, z = ζ, z < ζ. Corrugated Waveguides As the contradirectional coupling presents more difficulties than the codirectional coupling, we may use the solutions of (6.285) to create apparent paradoxes. On substituting relation (6.285) into (6.283), we obtain differential equations γ d A1 (z) = iκ ∗ A2 (z) exp(2iδz) − A1 (z) − iG 1 (z, 0), dz 2 γ d A2 (z) = −iκA1 (z) exp(−2iδz) − A2 (z) + iG 2 (z, 0). dz 2 These equations will contradict Equations (6.272) after one omits the Langevin terms as is usual in the treatment of the co-propagation, where such a manipulation is correct. The solutions of the initial-value problem for equations (6.286) and (6.284) are of the form A1 (z) = v11f (z)A1 (0) + v12f (z)A2 (0) z −i v11f (z|ζ )G 1 (ζ , 0) dζ + i v12f (z|ζ )G 2 (ζ , 0) dζ , (6.287) 0 A2 (z) = v21f (z)A1 (0) + v22f (z)A2 (0) z v21f (z|ζ )G 1 (ζ , 0) dζ + i v22f (z|ζ )G 2 (ζ , 0) dζ , (6.288) −i 0 0 G 1 (ζ, z) = −γ θ(z − ζ ) iv11f (ζ )A1 (0) + iv12f (ζ )A2 (0) ζ v11f (ζ |ζ )G 1 (ζ , 0) dζ + 0 ζ − v12f (ζ |ζ )G 2 (ζ , 0) dζ + G 1 (ζ, 0), (6.289) 0 G 2 (ζ, z) = γ θ(z − ζ ) iv21f (ζ )A1 (0) + iv22f (ζ )A2 (0) ζ v21f (ζ |ζ )G 1 (ζ , 0) dζ + 0 ζ − v22f (ζ |ζ )G 2 (ζ , 0) dζ + G 2 (ζ, 0), (6.290) 0 where θ (z − ζ ) is the Heaviside (unit-step) function, v jkf (z) = v jkf (z|0), j, k = 1, 2, with γ , (6.291) v jkf (z|z ) = exp − (z − z ) v jk (z|z ),γ =0 , 2 where , v11 (z|z ),γ =0 = exp[iδ(z − z )] δ sinh D0 (z − z ) , × cosh D0 (z − z ) − i D0 6 Periodic and Disordered Media , κ∗ v12 (z|z ),γ =0 = i exp[iδ(z + z )] sinh D0 (z − z ) , D0 , κ , v21 (z|z ) γ =0 = −i exp[−iδ(z + z )] sinh D0 (z − z ) , D0 , v22 (z|z ),γ =0 = exp[−iδ(z − z )] δ sinh D0 (z − z ) , × cosh D0 (z − z ) + i D0 with D0 = + |κ|2 − δ 2 . Taking into account the relation ⎧ for z > ζ, ⎨ G 2 (ζ, L) G 2 (ζ, z) = G 2 (ζ, L) − i γ2 A2 (ζ ) for z = ζ, ⎩ G 2 (ζ, L) − iγ A2 (ζ ) for z < ζ, we replace equations (6.286) by the following equations d γ A1 (z) = iκ ∗ A2 (z) exp(2iδz) − A1 (z) − iG 1 (z, 0), dz 2 γ d A2 (z) = −iκA1 (z) exp(−2iδz) + A2 (z) + iG 2 (z, L). dz 2 The solutions of the boundary-value problem , ˆ j (0), j = 1, 2, ˆ j (z), =A A z=0 G 1 (ζ, z)|z=0 = G 1 (ζ, 0), G 2 (ζ, z)|z=L = G 2 (ζ, L), for equations (6.283) and (6.284) transformed to the form (6.295) and (6.296) are for z = L as follows L v11g (L|ζ )G 1 (ζ , 0) dζ A1 (L) = v11g (L)A1 (0) + v12g (L)A2 (0) − i 0 v12g (L|ζ )G 2 (ζ , L) dζ , A2 (L) = v21g (L)A1 (0) + v22g (L)A2 (0) L v21g (L|ζ )G 1 (ζ , 0) dζ + i −i 0 v22g (L|ζ )G 2 (ζ , L) dζ , (6.298) G 1 (ζ, L) = −iγ v11g (ζ )A1 (0) − iγ v12g (ζ )A2 (0) ζ v11g (ζ |ζ )G 1 (ζ , 0) dζ + G 1 (ζ, 0) −γ 0 ζ +γ v12g (ζ |ζ )G 2 (ζ , L) dζ + G 2 (ζ, L), 0 Corrugated Waveguides G 2 (ζ, 0) = −iγ v21g (ζ )A1 (0) − iγ v22g (ζ )A2 (0) ζ v21g (ζ |ζ )G 1 (ζ , 0) dζ + G 1 (ζ, 0) −γ 0 ζ +γ v22g (ζ |ζ )G 2 (ζ , L) dζ + G 2 (ζ, L), where v jkg (z) = v jkg (z|0) = v jk (z), j, k = 1, 2, with Δ v11g (z|z ) = exp[iδ(z − z )] cosh D(z − z ) − i sinh D(z − z ) , D ∗ κ v12g (z|z ) = i exp[iδ(z + z )] sinh D(z − z ) , D κ v21g (z|z ) = −i exp[−iδ(z + z )] sinh D(z − z ) , D v22g (z|z ) = exp[−iδ(z − z )] Δ (6.301) × cosh D(z − z ) + i sinh D(z − z ) . D The solutions of differential equations (6.286) and (6.284) conserve pseudonorms (Peˇrinov´a et al. 2006) 1 L 1 L |G 1 (ζ, L)|2 dζ − |G 2 (ζ, L)|2 dζ |A1 (L)|2 − | A2 (L)|2 + γ 0 γ 0 1 L 1 L 2 2 2 = | A1 (0)| − | A2 (0)| + |G 1 (ζ, 0)| dζ − |G 2 (ζ, 0)|2 dζ, (6.302) γ 0 γ 0 1 L 1 L 2 2 2 |A1 (L)| − | A2 (L)| + |G 1 (ζ, L)| dζ + |G 2 (ζ, 0)|2 dζ γ 0 γ 0 1 L 1 L = | A1 (0)|2 − | A2 (0)|2 + |G 1 (ζ, 0)|2 dζ + |G 2 (ζ, L)|2 dζ. (6.303) γ 0 γ 0 On the inversion of equations (6.297) through (6.300), we see that the input and output complex amplitudes conserve the norm 1 L |G 2 (ζ, 0)|2 dζ γ 0 0 1 L 1 L = | A1 (0)|2 + | A2 (L)|2 + |G 1 (ζ, 0)|2 dζ + |G 2 (ζ, L)|2 dζ (6.304) γ 0 γ 0 |A1 (L)|2 + | A2 (0)|2 + 1 γ |G 1 (ζ, L)|2 dζ + and the corresponding scalar product. We can establish the Heisenberg picture in the following sense. We assume that the input annihilation and creation operators obey the usual commutation relations ˆ ˆ (0)] = [ A ˆ 2 (L), A ˆ (L)] = 1, ˆ 1 (0), A [A 1 2 6 Periodic and Disordered Media where the operators in different modes commute. With respect to the noise operators, we make similar assumptions ˆ ˆ † (ζ , 0)] = [G ˆ 2 (ζ, L), G ˆ † (ζ , L)] = γ δ(ζ − ζ )1. ˆ 1 (ζ, 0), G [G 1 2 The output operators are ˆ 1 (0) + u 12g (L) A ˆ 2 (L) ˆ 1 (L) = u 11g (L) A A L ˆ 1 (ζ , 0) dζ − i w22g (L|ζ )G −i 0 ˆ 2 (ζ , L) dζ , w12g (L|ζ )G (6.307) ˆ 1 (0) + u 22g (L) A ˆ 2 (L) ˆ 2 (0) = u 21g (L) A A L ˆ w21g (L|ζ )G 1 (ζ , 0) dζ − i −i 0 ˆ 2 (ζ , L) dζ , w11g (L|ζ )G (6.308) where (6.309) u jkg (L) = u jk (L), j, k = 1, 2, Δ w22g (L|ζ ) = exp[iδ(L − ζ )] cosh(Dζ ) + i sinh(Dζ ) D −1 Δ × cosh(DL) + i sinh(DL) , D κ w21g (L|ζ ) = i exp(−iδζ ) sinh[D(L − ζ )] D −1 Δ , × cosh(DL) + i sinh(DL) D κ∗ w12g (L|ζ ) = i exp[iδ(L + ζ )] sinh(Dζ ) D −1 Δ × cosh(DL) + i sinh(DL) , D Δ w11g (L|ζ ) = exp(iδζ ) cosh[D(L − ζ )] + i sinh[D(L − ζ )] D −1 Δ × cosh(DL) + i sinh(DL) . (6.310) D The output operators obey the same commutation relations as (6.305) † ˆ ˆ (L)] = [ A ˆ 2 (0), A ˆ (0)] = 1. ˆ 1 (L), A [A 1 2 Corrugated Waveguides Now we see that the relations (6.307) and (6.308) have the form (6.267), where the integrated quantum noise terms are ˆ 1 (L) = −i M ˆ 2 (L) = −i M ˆ 1 (ζ |0) dζ w22g (L|ζ )G ˆ 2 (ζ |L) dζ , w12g (L|ζ )G ˆ 1 (ζ |0) dζ w21g (L|ζ )G ˆ 2 (ζ |L) dζ . w11g (L|ζ )G Their commutators are ˆ 1 (L), M ˆ † (L)] = [M 1 ˆ |w22g (L|ζ )|2 + |w12g (L|ζ )|2 dζ 1, 0 † ˆ 1 (L), M ˆ (L)] [M 2 ∗ ∗ ˆ (L|ζ ) + w12g (L|ζ )w11g (L|ζ ) dζ 1, w22g (L|ζ )w21g = 0 ˆ 2 (L), M ˆ † (L)] [M 1 ∗ ∗ ˆ (L|ζ ) + w11g (L|ζ )w12g (L|ζ ) dζ 1, w21g (L|ζ )w22g ˆ † (L)] = ˆ 2 (L), M [M 2 ˆ |w21g (L|ζ )|2 + |w11g (L|ζ )|2 dζ 1. The commutator matrix is 0 1 ˆ 1 (L), M ˆ † (L)] [ M ˆ 1 (L), M ˆ † (L)] [M 1 2 ˆ 2 (L), M ˆ † (L)] [ M ˆ 2 (L), M ˆ † (L)] [M 1 2 L ∗ |w22g (L|ζ )|2 w22g (L|ζ )w21g (L|ζ ) = dζ 1ˆ ∗ (L|ζ ) |w21g (L|ζ )|2 w21g (L|ζ )w22g 0 L ∗ |w12g (L|ζ )|2 w12g (L|ζ )w11g (L|ζ ) + dζ 1ˆ ∗ 2 (L|ζ )w (L|ζ ) |w (L|ζ )| w 11g 11g 12g 0 00 ˆ ≥ 1. 00 We have shown that the device under investigation is an attenuator according to the formal definition. It has the attenuator characteristic property (6.281). (ii) Derivation of normal characteristic 6 Periodic and Disordered Media We can pass from the Heisenberg picture to the Schr¨odinger picture using characteristic functionals CN (β1 , β2 , β1 (z), β2 (z), 0) = Tr ρˆ e (0) ˆ † + β2 A ˆ† + × exp β1 A 1 2 × exp ˆ1 −β1∗ A ˆ2 β2∗ A − 0 L 0 ˆ (z ) dz + β1 (z )G 1 ˆ 1 (z ) dz β1∗ (z )G − 0 ˆ (z ) dz β2 (z )G 2 ˆ 2 (z ) dz β2∗ (z )G , (6.316) CN (β1 , β2 , β1 (z), β2 (z), L) = Tr ρˆ e (L) ˆ † + β2 A ˆ† + × exp β1 A 1 2 ˆ 1 − β2∗ A ˆ2 − × exp −β1∗ A L 0 ˆ † (z ) dz + β1 (z )G 1 ˆ 1 (z ) dz − β1∗ (z )G ˆ † (z ) dz β2 (z )G 2 ˆ 2 (z ) dz β2∗ (z )G . (6.317) ˆ 1 (z, 0), G ˆ 2 (z) ≡ G ˆ 2 (z, L). Choosing the input quanˆ 1 (z) ≡ G Here we abbreviate G tum noise fields in the vacuum state, we have CN (β1 , β2 , β1 (z), β2 (z), 0) = CN (β1 , β2 , 0). We do not describe the output quantum noise fields and so we are interested in the normal characteristic function CN (β1 , β2 , L) = CN (β1 , β2 , 0, 0, L) = CN β1 u ∗11 (L) + β2 u ∗21 (L), β1 u ∗12 (L) + β2 u ∗22 (L), ∗ ∗ (L , z) + iβ2 w21g (L , z), iβ1 w22g ∗ ∗ iβ1 w12g (L , z) + iβ2 w11g (L , z), 0 = CN β1 u ∗11 (L) + β2 u ∗21 (L), β1 u ∗12 (L) + β2 u ∗22 (L), 0 . This already suffices for the proof that the procedure yields a completely positive map. In Nielsen and Chuang (2000), the notion of quantum fidelity of two states ρ, ˆ ρˆ is introduced, / 1 1 F = Tr ρˆ 2 ρˆ ρˆ 2 . (6.320) ˆ For pure states, ρ(0) ˆ = |ψ(0) ψ(0)|, ρ(L) ˆ = Here we put ρˆ = ρ(0), ˆ ρˆ = ρ(L). |ψ(L) ψ(L)|, and the formula for the fidelity simplifies F = | ψ(L)|ψ(0) |. Corrugated Waveguides Attenuated coherent states are coherent again, |ψ(0) = | A1 (0), A2 (L) = |A1in , A2in , |ψ(L) = |A1 (L), A2 (0) = | A1out , A2out , where (see (6.260)) A1in = A1 (0), A2in = A2 (L), A1out =A1 (L), A2out = A2 (0). It is known that for the coherent states (Peˇrina 1991) | A1out , A2out |A1in , A2in | 1 1 2 2 = exp − |A1out − A1in | − |A2out − A2in | . 2 2 The quantum fidelity should be applied in the time or space Schr¨odinger picture. In Severini et al. (2004), with focusing on mode 1, a transmission coefficient has been introduced T = ˆ 1 (L) ˆ † (L) A A 1 . † ˆ (0) A ˆ 1 (0) The quantum averages are calculated in the state |ψ(0) . The transmission coefficient ought to be utilized in the space Heisenberg picture. Both in this and in the Schr¨odinger picture the numerical analysis simplifies for |ψ(0) = |Ain , 0 . The transmission spectrum is a function of a mismatch coefficient δ (cf. Figs. 6.1, 6.2), T (δ) = |u 11 (L)|2 . The quantum fidelity spectrum is a function of the same coefficient 1 F(δ) = exp − |u 11 (L) − 1|2 + |u 21 (L)|2 |Ain |2 , 2 but we restrict ourselves to |Ain |2 = 1 in Fig. 6.3. Fig. 6.1 Transmission spectrum for a corrugated LiNbO3 planar waveguide as a function of the mismatch coefficient δ, for κ = 3π L 6 Periodic and Disordered Media Fig. 6.2 The same as in Fig. 6.1, but for κ = 4π L µ Fig. 6.3 Fidelity spectrum for a corrugated LiNbO3 planar waveguide as a function of the mismatch , coefficient δ. Here κ = 4π L but other parameters are the same as in Fig. 6.1. For , F(δ) = 1 is not |κ| = 3π L obtained We have calculated the semiclassical and quantum transmission and fidelity spectra for a corrugated LiNbO3 planar waveguide as functions of the mismatch coefficient δ between the spatial corrugation of the refractive index of the guide and the wavenumber of the propagating mode. We have used units [δ] = μm−1 . The spatial corrugation has caused a coupling, which is characterized by the coupling (Fig. 6.1) or κ = 4π (Figs. 6.2, 6.3). The waveguide length is constant κ, κ = 3π L L L = 1824.598 μm. We assume that + − 9 + 6.52 π ≤δ≤ L 9 + 6.52 π. L Here 6 is the number of maxima of the spectrum plotted in Fig. 6.1, and 6.5 is a value which makes the curves end with the next minimum approximately. The losses are characterized by the damping √ constant γ ≥ 0, which is chosen as 0 (in the case 5. Line a (b) corresponds to the damping coefficient without losses) and 0.01κ √ γ = 0 (γ = 0.01κ 5). Corrugated Waveguides The output two-mode state differs from the input state only by an inessential phase factor when either 2π 2π + 2 δ − |κ|2 ≡ 0 mod , , L L π 2π + 2 2π π mod , δ − |κ|2 ≡ mod . L L L L δ ≡ 0 mod or δ≡ , more generally, for |κ| = These conditions cannot be fulfilled for |κ| = 3π L π 2π 4π mod . It can be satisfied for |κ| = , more generally, |κ| = 0 mod 2π . In L L L L + 5π 4π 3π 2 2 Fig. 6.3 this condition is met for δ = L , |κ| = L , δ − |κ| = L (one of the Pythagorean triangles). In (Severini et al. 2004), a contradirectional coupler has been described by the following equations γ d A1 (z) = iK L A2 (z) exp(2iδz) − A1 (z), dz 2 γ d A2 (z) = −iK L A1 (z) exp(−2iδz) − A2 (z), dz 2 where K L ≡ κ ≥ 0. Although the authors specify that α ≡ γ2 characterizes leakage phenomena, they do not write signs conformable to the attenuator case, cf. equations (6.272). The second equation describes rather amplification in mode 2. The amplification has perhaps led to the derivation of “a state whose quantum properties are preserved”. Let us assume that in a medium, an attenuator in mode 1 and an amplifier in mode 2 occur, although we do not know of such a case. We could work with semiclassical and quantum noises as in the previous sections, but neither the normal ordering nor the antinormal one lead to a simple Schr¨odinger picture. With respect to the quantum noise of the amplifiers, one can assert that there is no state in which all the quantum properties are preserved. We have dealt with the problem of quantum description of light modes which propagate in a periodic medium in opposite directions (Peˇrinov´a et al. 2006). Although we believe that partial differential equations comprising both time and space derivatives would be appropriate for the description, we have neglected the time ones and retained the space ones. As the coupled-mode theory is a classical description by means of ordinary differential equations involving a space derivative which leads to a quantum description of copropagating modes, it is also widely used for such a simple quantum description of counterpropagating modes. We have included also the gain and losses which have not been described to our knowledge yet. As application we have treated the conditions that can be imposed on a waveguide for the output state to be the same as the incoming one. Bozhevolnyi et al. (2005) study propagation of long-range surface plasmon polaritons along periodically modulated medium both theoretically and 6 Periodic and Disordered Media experimentally. Surface plasmon polaritons are quasi-two-dimensional electromagnetic excitations that propagate along a dielectric–metal interface. Their application prospects are narrow. More complex excitations are an exception that are created in the configuration of two similar and very close metal–dielectric interfaces, such as surfaces of a thin metal film embedded in a dielectric. Then it is appropriate to speak of long-range surface plasmon polaritons. Similarity to dielectric symmetric waveguides suggests to realize the band-gap effect for the long-range surface plasmon polaritons. The metal films are periodically thickness modulated. This is achieved with a periodic array of metal ridges. Provided that we know the electric field E0 (r) propagating along the metal film and the electric field Green tensor G(r, r ) for the same structure, we can obtain the total electric field E(r) resulting in the process of multiple scattering by the ridges by solving the E(r) = E0 (r) + k02 G(r, r ) (r ) − ref (r ) E(r ) d2 r . Here k0 is the free-space wave number, is the dielectric constant of the total structure inclusive of the metal ridges, and ref is the dielectric constant of the reference structure (only a metal film embedded in a dielectric). The gap in transmission and the peak in reflection are centred at λg ≈ 2nΛ, where n is the refractive index of the dielectric and Λ is the grating period. For low ridges, the gap and the peak improve with the increase of the ridge height. For larger heights, the band-gap effect was not achieved. For n = 1.543 and Λ = 500 nm, we obtain λg = 1543 nm. Band gaps centred at 1550 nm and 20 nm wide have been simulated and experimentally investigated. The lengths of the structures were L = 20, 40, 80, 160 μm. The band-gap effect has been utilized for design and fabrication of a compact wavelength add-drop filter. Deng et al. (2006) have reported second-harmonic generation in a sample made of lithium niobate. Near the surface, a waveguide was fabricated applying the proton-exchange technique. Ultraviolet laser lithography was applied to make photonic band-gap gratings on the sample. On the sample, two different gratings are inscribed. The first one couples the pump into the waveguide and the pump wave may come at an angle of around 45o . The second one is the photonic band-gap grating. A numerical model utilizes the coupled-mode theory. The corrugation couples TM to TM modes. The authors have measured that the second-harmonic generation in a waveguide mode is very weak compared with the second harmonic radiated into the substrate from the Cherenkov condition. 6.3 Photonic Crystals The integral equation for quantum mechanical Green operator is a pattern for other integral equations of the field theory, in particular for the relation comprising the input and retarded electromagnetic fields (Białynicki-Birula and Białynicka-Birula Photonic Crystals 1975). A treatment of two- and three-dimensional photonic crystals may use a similar approach. (i) Quadrature-phase squeezing in photonic crystals The Green function method has been used for classical fields in Sakoda and Ohtaka (1996a,b). Sakoda (2002) has obtained the enhancement of a quantum optical process by use of a perturbation theory based on a Green function formalism. The results are related to degenerate optical parametric amplification, but the perturbation theory is not limited to this process and can be applied to other quantumoptical processes in the photonic crystals. The quantization proceeds according to Glauber and Lewenstein (1991). The eigenmode of the electric field is denoted as Ekn (r), where k is a wave vector in the first Brillouin zone and n is a band index. The eigenmodes are normalized with the condition (r)E∗kn (r)Ek n (r) d3 r = V δkk δnn , (6.332) V where V means the volume of the photonic crystal, and is a spatially periodic dielectric constant. The volume of unit cell is denoted as V0 . On expressing the electric-field operator in the form ωkn † ˆ t) = E(r, i (6.333) [aˆ kn (t)Ekn (r) − aˆ kn (t)E∗kn (r)], 2 V 0 k,n † where aˆ kn and aˆ kn are the usual photon annihilation and creation operators, respectively, and writing the magnetic-induction operator similarly, the total electromagnetic energy (in the volume) is reduced to a quantum-mechanical Hamiltonian 1ˆ † ˆ ωkn aˆ kn (t)aˆ kn (t) + 1 . (6.334) H= 2 k,n The nonlinear medium is described by a second-order susceptibility tensor χ (2) (r) that has the same spatial periodicity as (r). But χ (2) (r) is nonzero only in the region 0 ≤ z ≤ l = an z , where a is the lattice constant of the crystal and n z is a positive integer. It is assumed that a pump wave denoted Ep (r, t) and a signal wave denoted Es (r, t) propagate along the z-axis. Both waves are single mode. We let kpz and ksz denote the z-components of their wave vectors, and we introduce a phase mismatch Δk z , Δk z = kpz − 2ksz . The frequency of the pump wave, ωp , is twice that of the signal wave, ωs . We let vg denote the group velocity of the signal wave. Since Ep (r, t) = i A[Ep (r) exp(−iωp t + iθ ) − E∗p (r) exp(iωs t − iθ )] 6 Periodic and Disordered Media is a classical quantity and ˆ s (r, t) = i ωs [aˆ s (0)Es (r) exp(−iωs t) − aˆ s† (0)E∗s (r) exp(iωs t)], E 20 V where Ep (r) and Es (r) are eigenmodes of the electric field, A is the amplitude of † the pump wave, θ the shift of its phase, and aˆ s (0) and aˆ s (0) are photon annihilation and creation operators, respectively, can be considered to be an output electric-field operator, the integral equation for this operator is formulated, ˆ t) + P(r, ˆ t) = 0 (r)E ˆ s (r, t) + (r) 0 (r)E(r, V t ˆ , t ) sin ωkn (t − t ) d3 r dt , × ωkn Ekn (r) E∗kn (r ) · P(r V ˆ t) is the output nonlinear polarization operator, where P(r, ˆ t) ≈ 2χ (2) (r) : Ep (r, t)E(r, ˆ t). P(r, The solution of the integral equation (6.337) is of the form ˆ t) ≈ i E(r, ωs ˆ [bEs (r) exp(−iωs t) − bˆ † E∗s (r) exp(iωs t)], 20 V where bˆ = aˆ s (0) cosh(|β |l) + exp[i(θ + φ )]aˆ s† (0) sinh(|β |l), bˆ † = aˆ s† (0) cosh(|β |l) + exp[−i(θ + φ )]aˆ s (0) sinh(|β |l), with φ being the phase of the effective coupling constant (inclusive of the amplitude A), β = βηξ, β = ωs AFs,p,s , 0 vg a(n z − 1)Δk z , ξ = exp i , z 2 n z sin aΔk 2 1 Fs,p,s = E∗ (r) · χ (2) (r) : Ep (r)E∗s (r) d3 r. V0 V0 s lΔk z 2 The nonlinear properties of photonic crystals were reviewed in Slusher and Eggleton (2003). Photonic Crystals (ii) Parametric down conversion in a multilayered structure simply described A description of spontaneous parametric down conversion in finite-length multilayer structure has been developed using semiclassical and quantum approaches (Centini et al. 2005). The semiclassical model has allowed one to find the criterion for designing and optimizing the structure. The quantum model is related to the properties of emitted entangled photon pairs. One considers a one-dimensional dispersive lossless inhomogeneous medium, where both the dielectric constant, (z, ω), and the nonlinear susceptibility, d (2) (z), are functions of a single spatial coordinate z. The study is limited to s-polarized plane monochromatic waves that fall onto the interfaces in the normal direction. The planes z = 0 and z = L are the first and last interfaces of the structure embedded in air. The classical treatment begins with the following nonlinear, coupled Helmholtz equations: ωs2 s (z) ωs2 (2) d2 E s + E = −2 d (z)E i∗ E p , s dz 2 c2 c2 ωi2 i (z) ωi2 (2) d2 E i + E = −2 d (z)E s∗ E p , i dz 2 c2 c2 ωp2 p (z) ωp2 (2) d2 E p + E = −2 d (z)E i∗ E s , p dz 2 c2 c2 where n (z) ≡ (z, ωn ), ωn is the angular frequency of the field n, n = s, i, p, and s, i, and p stand for signal, idler, and pump, respectively. It is assumed that ωp = ωs + ωi . The treatment has followed (D’Aguanno et al. 2002). The solutions of the corresponding linear equations E can easily be decomposed into forward- and backward-propagating waves E = E F + E B , where F (B) means the forward (backward) propagation. The solutions to these equations are intro(−) duced, Θ(+) n and Θn , which fulfil the following boundary conditions, (+) Θ(+) nF (−0) = 1, ΘnB (L + 0) = 0, (−) Θ(−) nF (−0) = 0, ΘnB (L + 0) = 1. These solutions have the familiar properties (+) (+) (+) Θ(+) nF (L + 0) = ta , ΘnB (−0) = ra , (−) (−) (−) Θ(−) nB (−0) = tn , ΘnF (L + 0) = rn , (+) (−) (−) where r(+) n and tn (rn and tn ) are the linear reflection and transmission complex coefficients for left-to-right (right-to-left) propagation, cf. (Born and Wolf 1999, Yeh 1988). The solutions of the nonlinear equations are decomposed as (+) (−) (−) E n = A(+) n (z)Θn (z) + An (z)Θn (z), 6 Periodic and Disordered Media (−) where A(+) n (z) and An (z) are slowly varying complex envelope functions. Especially, ω z n (+) (−) (−) exp −i (0)r + A (0)t for z ≤ 0, E nB (z) = A(+) n n n n c (−) (+) (+) E nF (z) = An (L)r (−) n + An (L)tn ωn (z − L) for z ≥ L . (6.346) × exp i c As usual with the semiclassical approach, a scalar product is introduced, 1 L ∗ f (z)g(z) dz. (6.347) f |g = L 0 A description has been developed using, e.g., the integrals ( j) (2) (k) (l)∗ , Γ(k,l) (s, j) = Θs |d Θp Θi where j, k, l = +, −. In contrast, we present a quantum description using the solutions of linear equa˜ a(−) , a = s, i, which fulfil the boundary conditions ˜ a(+) and Θ tions Θ ˜ (+) (−0) = 0, Θ ˜ (+) (L + 0) = 1, Θ aB aF ˜ (−) (L + 0) = 0. ˜ (−) (−0) = 1, Θ Θ aB aF The connecting relations are ˜ a(+) = Θa(+) ta(+)∗ + Θa(−) ra(−)∗ , Θ ˜ a(−) = Θa(+) ra(+)∗ + Θa(−) ta(−)∗ . Θ We introduce the overlap integrals ˜ ( j) (2) (k) ˜ (l)∗ . Γ˜ (k,l) (s, j) = Θs |d Θp Θi The nonlinear interaction in the entire photonic band-gap structure is described ˆ (t) given as a sum of operators H ˆ (l) (t) (l = by an interaction Hamiltonian H 1, . . . , N ) that characterize every layer of the structure, ˆ (t) = H ˆ (l) (t), H where N is the number of layers of the structure. Strong pump positive-frequency electric-field amplitudes E p(l,+) (z, t), weak signal and idler positive-frequency electric-field operator amplitudes Eˆ a(l,+) (z, t), a = s, i, and their Hermitianconjugated expressions Eˆ a(l,−) (z, t) are introduced. Then ˆ (l) (t) = d˜ (l) H zl zl−1 Eˆ p(l,+) (z, t) Eˆ s(l,−) (z, t) Eˆ i(l,−) (z, t) + H.c. dz. Photonic Crystals √ Here d˜ (l) = c ωs ωi d (l) , and d (l) means the second-order susceptibility of the lth layer. A monochromatic pump electric field has the positive-frequency amplitude (l) exp ikp(l) (z − zl−1 ) E p(l,+) (z, t) = BpF (l) exp −ikp(l) (z − zl−1 ) exp(−iωp t); (6.354) + BpB (l) (l) (BpB ) is a kp(l) means the wave vector of the pump field in the lth layer and BpF complex coefficient. Down-converted fields are polychromatic, and it is convenient / a , where V is the quantization volume. to express their amplitudes in units of 2ω 0 V Then the positive-frequency operator amplitudes are ˆE a(l,+) (z, t) = ba(l) bˆ (l) (ωa ) exp ika(l) (ωa )(z − zl−1 ) aF (l) + bˆ aB (ωa ) exp −ika(l) (ωa )(z − zl−1 ) exp(−iωa t) dωa . (6.355) Here ba(l) = √1 (l) , a(l) stands for the relative permittivity in the lth layer for the a field a. It holds that (l) (+) (−) (−) = A(+) BpF p (0)ΘpF (z l−1 ) + Ap (L)ΘpF (z l−1 ), (l) (+) (−) (−) = A(+) BpB p (0)ΘpB (z l−1 ) + Ap (L)ΘpB (z l−1 ), l = 1, . . . , N + 1; (N +1) (0) (+) = A(−) particularly, BpB p (L), but BpF = Ap (0). Similarly, (l) (N +1) ˜ (+) (zl−1 ) + bˆ (0) (ωa )Θ ˜ (−) (zl−1 ), (ωa ) = bˆ aF (ωa )Θ ba(l) bˆ aF aF aB aF (l) (N +1) ˜ (+) (zl−1 ) + bˆ (0) (ωa )Θ ˜ (−) (zl−1 ), l = 1, . . . , N . (6.357) ba(l) bˆ aB (ωa ) = bˆ aF (ωa )Θ aB aB aB (N +1) (0) (ωa ) and bˆ aB (ωa ) are output operators. Here bˆ aF The solution |ψ s,i of the Schr¨odinger equation correct up to first order on the assumption of the initial vacuum state |vac s,i for the down-converted fields reads as T i ˆ (t)|vac s,i dt. |ψ s,i = |vac s,i − lim (6.358) H T →∞ −T It can be written in the form (N +1)† ˆ (N +1)† |vac s,i |ψ s,i = |vac s,i + biF ΦFF (ωs , ωi )bˆ sF (N +1)† ˆ (0)† (0)† (N +1)† + ΦFB (ωs , ωi )bˆ sF |vac s,i biB |vac s,i + ΦBF (ωs , ωi )bˆ sB bˆ iF (0)† (0)† + ΦBB (ωs , ωi )bˆ sB bˆ iB |vac s,i dωs dωi . (6.359) 6 Periodic and Disordered Media √ 2π ωs ωi L ˜ (+,+) δ(ωp − ωs − ωi ) A(+) p Γ(s,+) + ic √ 2π ωs ωi L ˜ (+,−) δ(ωp − ωs − ωi ) A(+) ΦFB (ωs , ωi ) = p Γ(s,+) + ic √ 2π ωs ωi L ˜ (+,+) δ(ωp − ωs − ωi ) A(+) ΦBF (ωs , ωi ) = p Γ(s,−) + ic √ 2π ωs ωi L ˜ (+,−) δ(ωp − ωs − ωi ) A(+) ΦBB (ωs , ωi ) = p Γ(s,−) + ic ΦFF (ωs , ωi ) = (−,+) ˜ A(−) Γ p (s,+) , ˜ (−,−) A(−) p Γ(s,+) , (−,+) ˜ A(−) Γ p (s,−) , ˜ (−,−) A(−) p Γ(s,−) . (6.360) Even though these functions are not probability amplitudes, they can be combined, e.g., with parameters of a finite spatial region to give such amplitudes. (iii) Parametric down conversion in a multilayered structure including polarization The foregoing description has been generalized and revised in part in (Peˇrina Jr., et al. 2006). For instance, the structure may be embedded in a medium with the relative permittivity n(0) (n(N +1) ) in/front of (beyond) the sample. The linear indices (l) = m(l) , l = 0, . . . , N + 1, m = s, i, p. of refraction are introduced, n m The treatment has been restricted to plane waves with wave vectors parallel (l) = to the yz-plane. The forward-propagating fields have the wave vectors kmF e y km(l) sin(ϑm(l) ) + ez km(l) cos(ϑm(l) ), where km(l) = ωm (l) n . c m The angles ϑm(l) fulfil the Snell law: (l) sin(ϑm(l) ) = constant, l = 0, . . . , N + 1. nm (l) = e y km(l) sin(ϑm(l) )−ez km(l) cos(ϑm(l) ) characterize the backwardThe wave-vectors kmB (l) (l) (l) (l) (l) propagating fields. For simplicity, km ≡ kmF , km,x = 0, km,y = km(l) sin(ϑm(l) ), km,z = (l) (l) km cos(ϑm ). At this moment, we may still use classical concepts and restrict ourselves to monochromatic waves. We distinguish the TE- and TM-waves. These have the electric fields of the forms Em,TE = E m,TE ex , = E m,TM e y cos [ϑm (z)] − E¯ m,TM ez sin [ϑm (z)] , where ϑm (z) = ϑm(l) for z in the lth layer and E¯ m,TM = 1 ∂ ∂ E m,TM = E m,TM , ikm,y (z) ∂ y ikm,z (z) ∂z 1 Photonic Crystals (l) (l) with km,y (z) = km,y and km,z (z) = km,z (in the lth layer). The linear Helmholtz equations read ∂ 2 E m,α 2 + km,z E m,α = 0, m = s, i, p, α = TE, TM. (6.365) ∂z 2 The prolongation conditions depend on the polarization. The case α = TE is very ∂ E m,α be continuous at the attractive, since the conditions require that E m,α and ∂z points z = zl . Let x and y be zero for definiteness. The case α = TM includes the ∂ E m,α are continuous at z = zl . conditions that E m,α cos[ϑm (z)] and cos [ϑ1m (z)] ∂z (−) We introduce solutions to these equations, Θ(+) m,α and Θm,α , which satisfy the following boundary conditions: (+) Θ(+) mF,α (−0, ωm ) = 1, ΘmB,α (L + 0, ωm ) = 0, (−) Θ(−) mF,α (−0, ωm ) = 0, ΘmB,α (L + 0, ωm ) = 1. ˜ (+) ˜ (−) Similarly, we will need some of the solutions Θ m,α and Θm,α , which fulfil the boundary conditions ˜ (+) (L + 0, ωm ) = 1, ˜ (+) (−0, ωm ) = 0, Θ Θ mB,α mF,α ˜ (−) (−0, ωm ) = 1, Θ ˜ (−) (L + 0, ωm ) = 0. Θ mB,α mF,α In quantum physics, we will use E(+) p,α (z, ωp ) instead of Ep,α (z) for the positivefrequency electric-field amplitude of a monochromatic component at frequency ωp with polarization α. The positive-frequency electric-field amplitude E(+) p (z, t) (+) (z, t) and E is decomposed into the TE- and TM-wave contributions E(+) p,TE p,TM (z, t) and expressed in the forms (+) (+) E(+) p (z, t) = Ep,TE (z, t) + Ep,TM (z, t) ∞ 1 =√ E(+) p (z, ωp ) dωp 2π 0 ∞ 1 (+) =√ (z, ω ) + E (z, ω ) dωp . E(+) p p p,TE p,TM 2π 0 Here (l) (l) (l) E(+) p,α (z, ωp ) = ApF,α (ωp )epF,α (ωp ) exp[ikp,z (z − z l−1 )] (l) (l) (l) (ωp )epB,α (ωp ) exp[−ikp,z (z − zl−1 )], + ApB,α with (l) (+) (N +1) (−) ApF,α (ωp ) = A(0) pF,α (ωp )ΘpF,α (z l−1 , ωp ) + ApB,α (ωp )ΘpF,α (z l−1 , ωp ), (l) (+) (ωp ) = A(0) ApB,α pF,α (ωp )ΘpB,α (z l−1 , ωp ) (N +1) + ApB,α (ωp )Θ(−) pB,α (z l−1 , ωp ), l = 1, . . . , N + 1. 6 Periodic and Disordered Media For l = 0, we use this formula on replacement of zl−1 by z 0 . Further, (l) (l) emF,TE (ωm ) = emB,TE (ωm ) = ex , (l) emF,TM (ωm ) = e y cos(ϑm(l) ) − ez sin(ϑm(l) ), (l) emB,TM (ωm ) = e y cos(ϑm (l) ) + ez sin(ϑm(l) ), ˆ (+) where m = p. The positive-frequency electric-field operators Eˆ (+) s (z, t) and Ei (z, t) for the signal and idler fields can be decomposed into the TE- and TM-wave contriˆ (+) (z, t) and expressed as follows (Vogel et al. 2001) ˆ (+) (z, t) and E butions E a,TE a,TM ˆ a(+) (z, t) = E ˆ (+) (z, t) + E ˆ (+) (z, t) E a,TE a,TM ∞ 1 ˆ a(+) (z, ωa ) dωa E =√ 2π 0 ∞ 1 ˆ (+) (z, ωa ) dωa , a = s, i. ˆ (+) (z, ωa ) + E =√ E a,TE a,TM 2π 0 (+) ˆ a,α E (z, ωa ) = ωa (l) (l) (l) aˆ (ωa )eaF,α (ωa ) exp[ika,z (z − zl−1 )] 20 cB aF,α (l) (l) (l) (ωa )eaB,α (ωa ) exp[−ika,z (z − zl−1 )] , + aˆ aB,α with B the area of the transverse profile of a beam and (l) (N +1) ˜ (+) (zl−1 , ωa ) ba(l) aˆ aF,α (ωa ) = ba(N +1) aˆ aF,α (ωa )Θ aF,α (0) ˜ (−) (zl−1 , ωa ), + ba(0) aˆ aB,α (ωa )Θ aF,α (l) (N +1) ˜ (+) (zl−1 , ωa ) (ωa ) = ba(N +1) aˆ aF,α (ωa )Θ ba(l) aˆ aB,α aB,α (N +1) ˜ (−) (zl−1 , ωa ), l = 1, . . . , N . + ba(0) aˆ aB,α (ωa )Θ aB,α (l) (l) Further, eaF,α (ωa ) and eaB,α (ωa ) are defined by relation (6.371), where m = a. (N +1) (0) (ωa ) obey the following commutation relaThe operators aˆ aF,α (ωa ) and aˆ aB,α tions (N +1)† (N +1) ˆ aˆ aF,α (ωa ), aˆ a F,α (ωa ) = δα,α δa,a δ(ωa − ωa )1, (N +1) +1) ˆ aˆ aF,α (ωa ), aˆ a(N F,α (ωa ) = 0, (0)† (0) ˆ aˆ aB,α (ωa ), aˆ a B,α (ωa ) = δα,α δa,a δ(ωa − ωa )1, (0) ˆ aˆ aB,α (ωa ), aˆ a(0) B,α (ωa ) = 0, Photonic Crystals (0)† (N +1) ˆ aˆ aF,α (ωa ), aˆ a B,α (ωa ) = 0, (N +1) ˆ aˆ aF,α (ωa ), aˆ a(0) B,α (ωa ) = 0. ˆ (t) describing spontaneous parametric downThe interaction Hamiltonian H conversion can be written as zN ∞ ∞ ∞ 0 B ˆ d(z) H (t) = 3 0 0 α,β,γ =TE,TM (2π) 2 z0 0 .. (+) (−) ˆ (−) ˆ . Ep,α (z, ωp )E (z, ω ) E (z, ω ) + H. c. dz dωp dωs dωi , (6.376) s i s,β i,γ . where d(z) means a third-order tensor of nonlinear susceptibility and .. denotes a contraction, i.e., treble sum after the tensors are replaced by their components, and products of the corresponding components are formed. The solution |ψ s,i of the Schr¨odinger equation correct up to first order on the assumption of the initial vacuum state |vac s,i for the down-converted fields is given by the relation ∞ ∞ ΦFβFγ (ωs , ωi ) |ψ s,i = |vac s,i + 0 × + + β,γ =TE,TM (N +1)† (N +1)† bs(N +1) aˆ sF,β (ωs )bi(N +1) aˆ iF,γ (ωi )|vac s,i (N +1)† (0)† ΦFβBγ (ωs , ωi )bs(N +1) aˆ sF,β (ωs )bi(0) aˆ iB,γ (ωi )|vac s,i (0)† (N +1)† ΦBβFγ (ωs , ωi )bs(0) aˆ sB,β (ωs )bi (N +1) aˆ iF,γ (ωi )|vac s,i (0)† (0)† + ΦBβBγ (ωs , ωi )bs(0) aˆ sB,β (ωs )bi(0) aˆ iB,γ (ωi )|vac s,i dωs dωi . Here i ΦFβFγ (ωs , ωi ) = − 4π × A(0) pF,α (ωp ) ωs ωi δ(ωp − ωs − ωi ) zN z0 ˜ (+)∗ ˜ (+)∗ Θ(+) pm,α (z, ωp )Θsn,β (z, ωs )Θio,γ (z, ωi ) . × d(z) .. epm,α (z, ωp )esn,β (z, ωs )eio,γ (z, ωi ) dz dωp , (6.378) ∞ i √ ΦFβBγ (ωs , ωi ) = − ωs ωi δ(ωp − ωs − ωi ) 4π 0 α=TE,TM zN ˜ (+)∗ ˜ (−)∗ × A(0) (ω ) Θ(+) p pm,α (z, ωp )Θsn,β (z, ωs )Θio,γ (z, ωi ) pF,α m,n,o=F,B . × d(z) .. epm,α (z, ωp )esn,β (z, ωs )eio,γ (z, ωi ) dz dωp , 6 Periodic and Disordered Media ΦBβFγ (ωs , ωi ) = − i 4π × A(0) pF,α (ωp ) ωs ωi δ(ωp − ωs − ωi ) zN z0 ˜ (−)∗ ˜ (+)∗ Θ(+) pm,α (z, ωp )Θsn,β (z, ωs )Θio,γ (z, ωi ) . × d(z) .. epm,α (z, ωp )esn,β (z, ωs )eio,γ (z, ωi ) dz dωp , (6.380) ∞ i √ ΦBβBγ (ωs , ωi ) = − ωs ωi δ(ωp − ωs − ωi ) 4π 0 α=TE,TM zN ˜ (−)∗ ˜ (−)∗ × A(0) (ω ) Θ(+) p pm,α (z, ωp )Θsn,β (z, ωs )Θio,γ (z, ωi )d(z) pF,α m,n,o=F,B .. . epm,α (z, ωp )esn,β (z, ωs )eio,γ (z, ωi ) dz dωp , (l) (l) (l) (ωp ), esn,β (z, ωs ) = esn,β (ωs ), eio,γ (z, ωi ) = eio,γ (ωi ) (in with epm,α (z, ωp ) = epm,α (N +1) (ωp ) = 0. the lth layer) when we restrict ourselves to the case ApB,α Corona and U’Ren (2007) study type-II, frequency degenerate, collinear parametric down-conversion in a χ (2) material with uniaxial birefringence. The typeII interaction means that the pump is an extraordinary wave, and the signal and idler are extraordinary and ordinary. For this operation, signal and idler photons are orthogonally polarized. The material is characterized by a spatial periodicity in its linear optical properties. Introducing μ = o for the ordinary ray and μ = e for the extraordinary ray, the index of refraction will be n μ1 (ω), 0 < z < a, (6.382) n μ (ω, z) = n μ2 (ω), a < z < Λ. The authors assume a = the form Λ 2 in a numerical analysis. The Bloch waves are written in E(z, t) = E K (z, ω) exp{i[K (ω)z − ωt]}. Here E K (z, ω) is the Bloch envelope, which has the same period, Λ, as the material, and K (ω) stands for the Bloch wave number. The Bloch waves are described in the . Further m = 1. vicinity of K = mπ Λ A standard perturbative approach to the quantum description has been adopted by Corona and U’Ren (2007). There Eˆ μ (r, t) (μ = p, s, i) represents the electric-field operators related to each of the interacting fields. They assume that the pump field is classical or that the replacement (6.384) Eˆ p(+) (r, t) → αp (ω)E K p (z, ω) exp{i[K p (ω)z − ωt]} dω, where K p (ω) is the Bloch wave number, E K p (z, ω) is the Bloch envelope, and αp (ω) is the spectral amplitude, may be done in the interaction Hamiltonian. The Bloch Photonic Crystals envelope may be expressed as a Fourier series, E K p (z, ω) = εpl (ω)eiG l z , in terms of the spatial harmonics G l = 2πl . The authors touch the quantization of Λ the signal and idler fields when they present the operator εμl (ω)lμ (ω)aˆ μ K μ (ω) + G l Eˆ μ(+) (r, t) = i l × exp i K μ (ω) + G l z − ωt dω, where K μ (ω) is the Bloch wave number, εμl are the Bloch envelope Fourier series coefficients, and aˆ μ (K (ω)) is the annihilation operator for the signal (s) or idler (i) mode. The normalization constant is ωK μ (ω) , (6.387) lμ (ω) = 2μ (ω)S where K μ (ω) is the first frequency derivative of K μ , εμ (ω) is the permittivity in the nonlinear medium, and S is the transverse beam area. The approach to quantization is macroscopic. The joint spectral amplitude has been defined, which depends on the length of the crystal, L. The authors prove with a numerical calculation that each of the three interacting fields propagates essentially as a plane wave. They may utilize this approximation. To obtain conditions for factorizability, the authors expand the mismatch LΔK (ω), where ΔK (ω) = K p (ω) − K s (ω) − K i (ω). They let ωo denote the degenerate fre( j) quency and introduce the mismatch τμ , μ = s, i, j = 2, 3, 4, in the jth frequency derivatives between the wave numbers of the pump and the signal (idler) wave packets where K μ( j) = τμ( j) = L(K p( j) − K μ( j) ), , , 1 d j K μ (ω) ,, 1 d j K p (ω) ,, ( j) , K = , p j! dω j ,ω=ωo j! dω j ,ω=2ωo and the frequency detunings νs,i = ωs,i − ωo . Whereas the zeroth-order approximation is LΔK (ω) ≈ LΔK (0) + O(1), ΔK (0) = K p (2ωo ) − K s (ωo ) − K i (ωo ), 6 Periodic and Disordered Media and O(J + 1) represents (J + 1)th- and higher-order terms in the detunings; the authors present a fourth-order approximation, J = 4. They then choose αp (ωs + ωi ) in the relation f 000 (ωs , ωi ) = ωp (ωs + ωi ) φ000 (ωs , ωi ), where φ000 (ωs , ωi ) is given by relation (15) in Corona and U’Ren (2007), as a Gaussian function with the parameter σ 2 and, in that defining relation, they replace sinc( LΔK2 (ω) ) by another Gaussian (with the parameter γ1 ). The approximation of the function f (ωs , ωi ) ≡ f 000 (ωs , ωi ) comprises the quantity Φsi (νs , νi ) given in (25) in the cited paper. The expression is complicated. The factorability occurs if Φsi (νs , νi ) = 0. Provided that τs(1) = τi(1) = 0, the expression simplifies. Then relations for | f (νs , νi )| and arg[ f (νs , νi )] can be written. Further the authors assume that the signal and idler photons undergo much stronger dispersion than the pump. Particularly, they assume ( j) ( j) ( j) ( j) ( j) that |τp ||τs |, |τi |, j = 2, 3, 4, τp =L K p . At last, they assume that the pump is broad-band, 4 1 . (6.392) σ 24 / γ τs(2) + τi(2) While the phase of the joint spectral amplitude has been factorable without the condition (6.392), only now the modulus of the joint spectral amplitude reduces to 2 γ (2) 2 (2) 2 τ ν + τi νi . (6.393) | f (νs , νi )| ≈ exp 4 s s In Corona and U’Ren (2007), it has been shown also that this modulus can describe a nearly factorable two-photon state. The authors first evidence that in nonlinear photonic crystals, complete group velocity matching, K p =K s =K i , or τs(1) =τi(1) =0, can be achieved. Here it is assumed that a = Λ2 . Then one may search for the lattice period Λ, the permittivity contrast 2 μ1 (ω) − μ2 (ω) , (6.394) α= μ1 (ω) + μ2 (ω) in which an independence of the frequency and of μ = e, o is actually assumed, and the crystal propagation angle θpm such that ΔK (0) = 0, K p = K s , K p = K i . Here ωp = 2ωo , ωs = ωi = ωo as in relation (6.389). The fact that the produced two-photon state is nearly factorable has been shown in Law et al. (2000). (iv) Further principles and effects Diao and Blair (2007) have paid attention to the use of multilayer thin film structures for optical bistability and multistability. They have considered single-cavity and coupled-cavity structures. In the case of a single-cavity structure, mirrors consist of M quarter-wave low-index layers and M − 1 quarter-wave high-index layers. The cavity is based on L quarter-wave high-index layers. Photonic Crystals The coupled-cavity structures have N cavities and N + 1 mirrors. It is assumed that the low-index layers are constructed from silicon dioxide. They have a nonlinear coefficient of the refractive index change n 2,silica ≈ 3 × 10−16 cm2 W−1 . The highindex layers have a coefficient n 2 . For these structures, linear transmission (in magnitude and phase) and group delay may be calculated. The optical bistability and multistability are analyzed mainly by dependence of nonlinear transmission (in magnitude and phase) on normalized input intensity. As it is a multivalued function of the input, also the nonlinear transmission (in magnitude only) is plotted versus the intensity within the cavity. Photonic crystal fibres (PCFs) are dielectric optical fibres with an array of airholes running along the fibre. Usually, the fibres employ a single dielectric material. Mortensen (2005) notes that other base materials have been studied besides the silica. Typically, the airholes are arranged in a triangular lattice with a pitch Λ. A waveguide is formed as a cladding and a core using the core defect, i.e., by removal of a single airhole. From this, the author has realized a call for a theory of photonic crystal fibres with an arbitrary base material. Solving a scalar twodimensional Schr¨odinger-like equation, geometrical eigenvalues γ 2 have been calculated, dependent on the normalized airhole diameter Λd . If Λd is below a critical 2 and that for the funvalue, only the eigenvalue for the fundamental core mode γc,1 d 2 damental cladding mode γcl are seen. If Λ is above the critical value, an eigenvalue 2 diverges from that for the fundamental cladding for the second-order core mode γc,2 2 2 , is used and, for Λd ≤ 0.8, a third-order mode. Then an abbreviation, γc ≡ γc,1 polynomial is presented, which fits the eigenvalues for the fundamental core mode well. One considers the V parameter of the form / (6.395) VPCF = γcl2 − γc2 , and the endlessly single-mode regime is associated with the condition VPCF < π (Mortensen et al. 2003). on the normalized free-space The dependence of the effective index n eff = cβ ω wavelength Λλ is expressed with a second-order polynomial, which agrees with fully vectorial plane-wave simulations in the short-wavelength limit λ Λ. The comparison has been performed for a single slightly subcritical value of the normalized airhole diameter Λd and both for the fundamental core mode and for the fundamental cladding mode. Della Villa et al. (2005) study formation of band gaps in photonic quasicrystals. They have considered a photonic quasicrystal with a Penrose-type lattice. They have inferred a band gap from the normalized local density of states ρ(r0 , ω) = Im {G(r0 , r0 , ω)} , where G(r, r0 , ω) is the Green function. Numerical calculations were performed for finite-size quasicrystals made of hundreds of rods. The normalized local density was determined at the centre of the quasicrystal, and the choice of the central point 6 Periodic and Disordered Media did not affect results. In comparison with the photonic crystal with square lattice, the quasicrystal exhibits small additional band gaps. In the central band gap of the photonic quasicrystal, the normalized local density of states exhibits the exponential decay similar to that of the photonic crystal. The central band gap seems to stem from relatively short-distance interactions. Lateral band gaps stem from long-range interactions. The Fourier spectrum of the permittivity profile for the quasicrystal, together with the usual Bragg condition, predicts the central and upper band gap, and a lower contrast of the permittivities is more advantageous. The frequency, at which the lower band gap occurs, may not be explained using single scattering and should so include multiple scattering. The use of two-dimensional photonic crystals instead of the conventional onedimensional feedback grating can lower the lasing threshold of distributed feedback lasers. Such photonic-crystal-based organic lasers have been studied (Harbers et al. 2005). The photonic crystals can change the spontaneous emission dynamics of excited atoms (Yablonovich 1987, John 1987). The photonic crystals modify also spontaneous emission of quantum dots (Lodahl et al. 2004, Yoshie et al. 2004). Hughes (2005a) introduces a scheme that enables one to study quantum correlations between two quantum dots in a planar-photonic-crystal nanocavity. He considers the fundamental cavity mode ec (r), which fulfils the normalization condition c (r)|ec (r)|2 d3 r = 1, (6.397) all space where c (r) is the permittivity of the nanocavity structure. He considers a tensorvalued Green function c 2 δ(r − r )1 , (6.398) Gb (r, r ; ω) = Gt (r, r ; ω) − ω |ec (r)| where for simplicity Gt (r, r ; ω) = c 2 ω ωc2 ec (r)e∗c (r) , − ω2 − iωΓc where ωc is the cavity resonance frequency and Γc = ωQc is the cavity linewidth. Here we have digressed from Hughes (2005b), who has made some modification of the function. The quantum dots are modelled as two-level atoms (Dung et al. 2002b). We introduce a quantum mechanical basis for the quantum dots a and b and the cavity mode as | A ⊗ |B ⊗ |k = |ABk , where each variable can assume a value of 0 or 1. We concentrate on wave functions of the form |ψ(t) e = Ca (t)|100 + Cb (t)|010 + Cp (t)|001 , where Ca (t), Cb (t), and Cp (t) are complex amplitudes. We consider the reduced wave function of the form Quantization in Disordered Media |ψ(t) = + 1 |Ca (t)|2 + |Cb (t)|2 [Ca (t)|10 + Cb (t)|01 ], where Ca (t) and Cb (t) obey integro-differential equations (Hughes 2005c). The time dependence of the entanglement (E(t)) is calculated for simple initial conditions from the concurrence (C(t)) using (Hughes 2005c) E(t) = −x log2 (x) − (1 − x) log2 (1 − x), where x= 1 1+ 1 − C(t), C(t) = 4|Ca (t)|2 |Cb (t)|2 . + 2 2 As a continuation of the paper Sakoda and Haus (2003), Sibilia et al. (2005) have studied the properties of super-radiant emission from a two-level atomic system embedded in a one-dimensional photonic band gap structure. The description by a reduced system of equations has further been reduced. Attention has been paid to the Rabi splitting. The effect of the location of the atoms has been obtained using a classical model of an amplifier. In rotating Bose–Einstein condensates vortices develop. M¨ustecaplıo˘glu and ¨ Oktel (2005) have shown that a vortex lattice can act as a photonic crystal and generate photonic band gaps. They have considered a two-dimensional triangular lattice. A numerical simulation of the propagation of an electromagnetic wave in a finite lattice has indicated that tens of vortices are enough for the infinite lattice properties to occur. Those authors have proposed a method to measure the rotation frequency of the condensate using a directional band gap. 6.4 Quantization in Disordered Media Quantization of the electromagnetic field in disordered media may be realized by any of the expounded or mentioned approaches. In particular it is likely that an approach as that in Section 2.2.4 could be chosen. Many explanations from Section 6.1 remain valid even on the assumption of a disordered medium. With respect to application to random lasers, a great emphasis is laid on the notion of a mode. As the electric permittivity is a scalar random field on the usual assumption of an isotropic dielectric, eigenfrequencies and modal functions of a device are random. We form an idea on properties of the eigenenergies and modal functions by the condensed-matter theory and even by nuclear physics as well. It may lead to the restriction that the vectorial character is neglected in the modal functions. Certain models are mathematically very demanding to the contrary. Many studies encounter the laser dynamics, which forces one to determine pseudomodes (after Garraway and Knight 1996), lossy modes, quasimodes otherwise). Patra (2002), e.g., assumes a random medium closed in a cavity to be allowed to suppose orthogonal modal functions. He neglects the vectorial character of modal functions. Whereas he may assume that, in an opening from the cavity, the value of 6 Periodic and Disordered Media a modal function has a Gaussian distribution, he encounters a difficulty in the whole cavity. He mentions that the modal functions are Gaussian random fields according to the literature, but he does not use, in fact, this assumption. Patra (2002) restricts himself to values of a finite set of modal functions in a finite number of points and implements orthogonality using columns of a random unitary matrix. Loudon (1999) has expounded the polariton dispersion relation in the framework of the classical Lorentz theory. The linear response theory for a perfect crystal has been generalized to include a randomly diluted crystal. The dependence of polariton radiative damping rates on the occupation probability with admixture atoms has been expressed. The quantum, semiclassical, and classical theories of spontaneous emission have been characterized. The Glauber–Lewenstein model and more quantum theory have been applied to the radiative decay of dilute active atoms in lossy homogeneous and inhomogeneous dielectrics. The phase, group, and energy velocities for optical pulse propagation through a lossy dielectric have been defined and their physical meanings have been 6.4.1 Quantization in Chaotic Cavity Recently (Patra 2002) a model of laser has been adopted that ignores the phase of the field and, on including suitable Langevin terms like in Mishchenko and Beenakker (1999), provides the photon statistics. Peˇrinov´a et al. (2004) hope that the calculations can be refined, when the open systems theory is invoked (see, for example Peˇrinov´a and Lukˇs (2000) and references therein). An optical cavity is considered which is coupled to the environment by a small opening of a diameter d. We concentrate on Np cavity eigenmodes described by the chaotic cavity modal functions Θi (r), each with an eigenfrequency ωi . The quantum description is reduced to only considering the number n i (t) of photons in each mode i. Photons in mode i escape through the opening with rate γi . The cavity is filled with an amplifying medium. The medium can be a four-level laser dye, in which the lasing transition is directed from the third to the second level, the transition’s resonance frequency being Ω. In Patra (2002), the density of excited atoms has been considered at every point r in the cavity. The coupling of mode i to the medium at the point r has been given by K i (r) = w(ωi )|Θi (r)|2 , i = 1, . . . , Np , where w(ωi ) has been the transition matrix element of the atomic transition 3 → 2 (de-excitation). At the level of a numerical solution, a linearization and a discretization have been performed. We discretize the space by introducing the Borel measurable neighbourhoods U (0 j ) of “uniformly” located centres 0 j , j = 1, . . . , Ns , which exhaust all the space and have the same volume ΔV each, Ns ΔV = V , the cavity volume. The description is reduced to only considering the density of excited atoms N j in each neighbourhood U (0 j ). Excitations are created by pumping with a rate P j and are lost nonradiatively with a rate a j . The coupling of mode i to the medium in the neighbourhood U (0 j ) is given by K i j , K i j ≡ K i (0 j ). The original Quantization in Disordered Media notation due to (Patra 2002) has been changed as follows: gi → γi , K i j → K i j ΔV , N j → N j ΔV , P j → P j ΔV , a j → a j ΔV . † We have utilized the annihilation (creation) operators aˆ i (t) (aˆ i (t)), i = 1, . . . , Np , which are assigned to the cavity modes and obey the commutation relations † ˆ [aˆ i (t), aˆ i (t)] = 0. ˆ [aˆ i (t), aˆ i (t)] = δii 1, Using them, we can reinterpret the photon numbers n i (t) as the operators nˆ i (t) = † aˆ i (t)aˆ i (t). ˆ † (t), which ˆ j (t), A In fact, we still have to consider the matter field operators A j obey the commutation relations † ˆ (t)] = ˆ j (t), A [A j 1 ˆ [A ˆ ˆ j (t), A ˆ j (t)] = 0. δ j j 1, ΔV Using these operators, we can reinterpret the densities of excited atoms N j (t) as the ˆ † (t) A ˆ j (t). operators Nˆ j (t) = A j In analogy with exponential phase operators (Peˇrinov´a et al. 1998) − 1 † eI xp[−iϕi (t)] = aˆ i (t) nˆ i (t) + 1ˆ 2 , − 1 eI xp[iϕi (t)] = nˆ i (t) + 1ˆ 2 aˆ i (t), we introduce the quantum phase operators - .− 12 ˆ 1 Nˆ j (t) + , eI xp[−iΦ j (t)] = ΔV .− 12 ˆ 1 ˆ j (t). A eI xp[iΦ j (t)] = Nˆ j (t) + ΔV ˆ † (t) A j In this exposition, we restrict ourselves to the quantum description in the framework of the Schr¨odinger picture. The time dependence of the operators was related to the Heisenberg picture, is not present in the Schr¨odinger picture, and is dropped. We adopt the master equation approach, which concerns the temporal evolution of the statistical operator ρ(t) ˆ normalized such that Tr{ρ(t)} ˆ =1 and leads to the rate equations (6.418) straightforwardly. We propose that the master equation has the form ∂ ˆ + Lˆˆ amp ρ(t) ˆ + Lˆˆ Att ρ(t) ˆ + Lˆˆ nln ρ(t), ˆ ρ(t) ˆ = Lˆˆ att ρ(t) ∂t where Lˆˆ att , Lˆˆ amp , Lˆˆ Att , properties Lˆˆ nln are the Liouvillian superoperators with respective 6 Periodic and Disordered Media ˆ = Lˆˆ att ρ(t) 1 1 † γi aˆ i ρ(t) ˆ aˆ i − nˆ i ρ(t) ˆ − ρ(t) ˆ nˆ i , 2 2 i=1 ˆ = ΔV Lˆˆ amp ρ(t) xp(−iΦ j )ρ(t)I P j eI ˆ exp(iΦ j ) − ρ(t) ˆ , ˆ = ΔV Lˆˆ Att ρ(t) 1ˆ 1 † ˆ ˆ ˆA j ρ(t) ˆ A j − N j ρ(t) ˆ − ρ(t) ˆ Nj , 2 2 †ˆ ˆ Nˆ j ρ(t) ˆ † aˆ i − 1 (nˆ + 1) K i j aˆ i A ˆ A ˆ j ρ(t) j 2 i=1 j=1 1 ˆ ˆ − ρ(t) ˆ N j (nˆ + 1) . 2 Lˆˆ nln ρ(t) ˆ = ΔV Np Ns The subscript “att” denotes the attenuation (escape of photons), the subscript “amp” denotes the amplification (pumping), the subscript “Att” another attenuation (relaxation of the medium), and the subscript “nln” denotes a nonlinear process. The term ˆ is well known from the quantum theory of damping at zero temperature Lˆˆ att ρ(t) ˆ has been proposed. This is (Haken 1970). In analogy with it, the term Lˆˆ Att ρ(t) equivalent to treating the excited atoms as bosons. All the terms have the Lindblad ˆ has such a form with the form Lindblad (1976). For instance, the term Lˆˆ nln ρ(t) †ˆ ˆ ˆ has Lindblad operators Oi jnln = aˆ i A j . The Lindblad form of the term Lˆˆ amp ρ(t) ˆ jamp = eI xp(−iΦ j ). been gained at the cost of considering the Lindblad operators O The unusual operators may be related either to the discretization of the space or to a treatment of the excitations as bosons. To our knowledge, the phenomenology added has not been proved to be wrong. The master equation must be completed with an initial condition that gives the statistical operator ρ(t ˆ 0 ) which commutes with all the operators nˆ i , i = 1, . . . , Np , Nˆ j , j = 1, . . . , Ns . It is natural to ask whether a limit of Ns → ∞ may be taken to absolve the quantum description from the parameters Ns and ΔV . This is not excluded, but the emerging model has too much in common with a quantum field theory. A need for a renormalization would bring us farther than an appropriate choice of ΔV . We let |n 1 , . . . , n Np , N1 , . . . , N Ns denote the (normalized) simultaneous eigenkets of the operators nˆ i , i = 1, . . . , Np , Nˆ j , j = 1, . . . , Ns . Using properties of the operators which underlie to averaging, we obtain the rate equations for probabilities p(n 1 , . . . , n Np , N1 , . . . , N Ns , t) = n 1 , . . . , n Np , N1 , . . . , N Ns |ρ(t)|n ˆ 1 , . . . , n Np , N1 , . . . , N Ns , normalized such that ∞ n 1 =0 ∞ ∞ n Np =0 N1 =0 ∞ N Ns =0 p(n 1 , . . . , n Np , N1 , . . . , N Ns , t) = 1, Quantization in Disordered Media where each prime means that the summation proceeds with the step an initial condition related to the time t0 . A derivation of rate equations is made easier when we write 1 ΔV . We get also 1 1 ˆ j (t) = eI xp[iϕi (t)][nˆ i (t)] 2 , A xp[iΦ j (t)][ Nˆ j (t)] 2 , aˆ i (t) = eI 1 1 † ˆ j (t) = eI aˆ i (t) A xp[−iϕi (t)]I exp[iΦ j (t)] nˆ i (t) + 1ˆ 2 Nˆ j (t) 2 . (6.416) (6.417) They have the form ∂ p(n 1 , . . . , n Np , N1 , . . . , N Ns , t) = L att p(n 1 , . . . , n Np , N1 , . . . , N Ns , t) ∂t + L amp p(n 1 , . . . , n Np , N1 , . . . , N Ns , t) + L Att p(n 1 , . . . , n Np , N1 , . . . , N Ns , t) + L nln p(n 1 , . . . , n Np , N1 , . . . , N Ns , t), (6.418) where L att , L amp , L Att , L nln are operators with the respective properties L att p(n 1 , . . . , n Np , N1 , . . . , N Ns , t) = γi [(n i + 1) p(n 1 , . . . , n i + 1, . . . , n Np , N1 , . . . , N Ns , t) − n i p(n 1 , . . . , n Np , N1 , . . . , N Ns , t)], L amp p(n 1 , . . . , n Np , N1 , . . . , N Ns , t) Ns 1 P j p n 1 , . . . , n Np , N 1 , . . . , N j − , . . . , N Ns , t = ΔV ΔV j=1 (6.420) − p(n 1 , . . . , n Np , N1 , . . . , N Ns , t) , L Att p(n 1 , . . . , n Np , N1 , . . . , N Ns , t) Ns 1 aj Nj + = ΔV ΔV j=1 1 × p n 1 , . . . , n Np , N 1 , . . . , N j + , . . . , N Ns , t ΔV − N j p(n 1 , . . . , n Np , N1 , . . . , N Ns , t) , 6 Periodic and Disordered Media L nln p(n 1 , . . . , n Np , N1 , . . . , N Ns , t) Np Ns 1 K i j ni N j + = ΔV ΔV i=1 j=1 1 × p n 1 , . . . , n i − 1, . . . , n Np , N1 , . . . , N j + , . . . , N Ns , t ΔV (6.422) − (n i + 1)N j p(n 1 , . . . , n Np , N1 , . . . , N Ns , t) . The relation (6.419) indicates that a photon escapes from the ith mode with the probability γi n i Δt within a period of duration Δt. Similarly, the term (6.421) suggests that an excited atom near 0 j decays with the probability a j N j ΔV Δt within a period of Δt. But the term (6.420) expresses that an excited atom near 0 j emerges with probability P j ΔV Δt within such a period. Finally, the term (6.422) tells that an excited atom near 0 j emits a photon into the ith mode with the probability K i j (n i + 1)N j ΔV Δt within such a period. The initial condition related to the time t0 gives the probabilities p(n 1 , . . ., n Np , N1 , . . ., N Ns , t0 ). 6.4.2 Open Systems Approach The open systems approach enables us to “unravel” the master equation (6.409) with respect to one, two, three, or all of the Liouvillian superoperators Lˆˆ att , Lˆˆ amp , Lˆˆ Att , Lˆˆ nln . The unravelling is interesting in the steady state too. This state evolves from an old initial datum after a long time period since t0 ≤ 0 and it is considered to be part of a new initial condition at the time t = 0. The new description presents a stochastic process ρˆ c (t) whose values are statistical operators ρˆ c (t). Here the subscript c stands for condition, and it will be explained in the following. We suppose that such a process has a Markovian property. The event of a continuous change or maybe conservation and the event of an instantaneous discontinuous change of the statistical operator whose probability is asymptotically proportional to Δt for Δt → 0 are statistically independent of such past events. The continuous change could be described with a master equation ∂ ρˆ (t) = Lˆˆ ∓,att ρˆ c (t) + Lˆˆ ∓,amp ρˆ c (t) + Lˆˆ ∓,Att ρˆ c (t) + Lˆˆ ∓,nln ρˆ c (t), ∂t c where Lˆˆ −,att = Lˆˆ att , Lˆˆ +,att ρˆ c (t) = Np i=1 1 1 nˆ i c (t)ρˆ c (t) − nˆ i ρˆ c (t) − ρˆ c (t)nˆ i , 2 2 Quantization in Disordered Media Lˆˆ −,amp = Lˆˆ att , Ns Lˆˆ +,amp ρˆ c (t) = ΔV ˆ P j ρˆ c (t) − ρˆ c (t) = 0, Lˆˆ −,Att = Lˆˆ Att , Lˆˆ +,Att ρˆ c (t) = ΔV Lˆˆ −,nln = Lˆˆ nln , Lˆˆ +,nln ρˆ c (t) = ΔV Np Ns 1ˆ 1 ˆ ˆ N j c (t)ρˆ c (t) − N j ρˆ c (t) − ρˆ c (t) N j , 2 2 ˆ Nˆ j c (t)aˆ † A ˆ j ρˆ (t) K i j (nˆ + 1) i c i=1 j=1 1 1 ˆ ˆ ˆ ˆ ˆ ˆ − (n + 1) N j ρˆ c (t) − ρˆ c (t) N j (n + 1) . 2 2 Having introduced the subscripts ∓, we have alluded to the choice of 24 − 1 unravellings, + means unravelled and − means unmodified. After the discontinuous change, the new statistical operator ρˆ c † aˆ i ρˆ c (t)aˆ i nˆ i c (t) ˆ† ˆA j ρˆ (t) A eI xp(−iΦ j )ρˆ c (t)I exp(iΦ j ) 1 †ˆ ˆ † aˆ i aˆ i A j ρˆ c (t) A j j c , ˆ ˆ N j c (t) (nˆ i + 1) N j c (t) , (6.428) and the factors of asymptotical proportionality are γi nˆ i c (t), P j , ˆ Nˆ j c (t)ΔV . We call them intensities. The discontinuous a j Nˆ j c (t)ΔV , K i j (nˆ i + 1) changes and intensities are related to the components of superoperators Lˆˆ att (Np components), Lˆˆ amp (Ns components), Lˆˆ Att (Ns components), Lˆˆ nln (Np Ns ). The application of the new description is perspicuous and it does not contradict a physical intuition when it is related only to components of the superoperator Lˆˆ att . The new description becomes more transparent after introducing variables m 1 (t), . . . , m Np (t), which obey the initial condition m 1 (0) = . . . = m Np (0) = 0. On the continuous change of the statistical operator ρˆ c (t), all of these variables are conserved. In the discontinuous change of the statistical operator related to the ith component of Lˆˆ att , m i (t) is increased by unity. Even if from this it follows only that the variables m 1 (t), . . . , m Np (t) do not burden the description, we state that the expression of ρˆ c (t) in dependence on m 1 (t ), . . . , m Np (t ), 0 ≤ t ≤ t, and the treatment of m 1 (t), . . . , m Np (t) as classical stochastic processes with the intensities, which are given as quantum averages, is lucid. Moreover, it is appropriate to the physical intuition, when we identify m i (t) with numbers of photons (or photocounts), which have been registered with a detector since the time t = 0. 6 Periodic and Disordered Media The unravelling can be most easily understood as the equality ρ(t) ˆ = E[ρˆ c (t)], where E stands for the expectation value. We introduce the notation ρˆ m 1 ,...,m Np (t) = E[ρˆ c (t)|m 1 (t) = m 1 , . . . , m Np (t) = m Np ] × p(m 1 , . . . , m Np , t), where E[ρˆ c (t)|A] is the conditioned expectation value of the random operator ρˆ c (t) conditioned on the event A and p(m 1 , . . . , m Np , t) = Pr[m 1 (t) = m 1 , . . . , m Np (t) = m Np ], with Pr denoting the probability. Invoking the probability theory, we note that ρ(t) ˆ = m 1 =0 ρˆ m 1 ,...,m Np (t). m Np =0 On introducing the Hilbert space with a complete orthonormal basis |m 1 , . . . , m Np det and considering this space in a tensor product with the original Hilbert space, we can define a statistical operator ρˆ e (t) = m 1 =0 ρˆ m 1 ,...,m Np (t) m Np =0 ⊗ |m 1 , . . . , m Np det det m 1 , . . . , m Np |, where “e” means extended. Letting Trdet denote the partial trace over the extending Hilbert-space factor, Trdet ≡ Trd1 . . . Trd Np , we see easily that ρ(t) ˆ = Trdet ρˆ e (t). The master equation has the form ∂ ρˆ e (t) = Lˆˆ e,att ρˆ e (t) + Lˆˆ e,amp ρˆ e (t) + Lˆˆ e,Att ρˆ e (t) + Lˆˆ e,nln ρˆ e (t), ∂t where Lˆˆ e,att ρˆ e (t) = 1 1 † γi aˆ i eI xp(−iθi )ρˆ e (t)I exp(iθi )aˆ i − nˆ i ρˆ e (t) − ρˆ e (t)nˆ i , 2 2 i=1 (6.436) Lˆˆ e,amp ρˆ e (t) = ρˆ (t) = Lˆˆ e,Att e Lˆˆ amp ρˆ e (t), Lˆˆ ρˆ (t), Att e Lˆˆ e,nln ρˆ e (t) = Lˆˆ nln ρˆ e (t), Quantization in Disordered Media eI xp(iθi ) = 1ˆ ⊗ ∞ m 1 =0 m i =0 m Np =0 × |m 1 , . . . , m i , . . . , m Np det det m 1 , . . . , m i + 1, . . . , m Np |, † eI xp(−iθi ) = eI xp(iθi ) , ˆ ⊗ |0, . . . , 0 det det 0, . . . , 0|. ρˆ e (0) = ρ(0) Now we extend the joint probability distribution of photon numbers and the densities of excited atoms by numbers of emitted photons and introduce the probabilities pe (n 1 , . . . , n Np , N1 , . . . , N Ns , m 1 , . . . , m Np , t) = n 1 , . . . , n Np , N1 , . . . , N Ns |ρˆ m 1 ,...,m Np (t) × |n 1 , . . . , n Np , N1 , . . . , N Ns . The rate equations for these probabilities have the form ∂ pe (n 1 , . . . , n Np , N1 , . . . , N Ns , m 1 , . . . , m Np , t) ∂t = L e,att pe (n 1 , . . . , n Np , N1 , . . . , N Ns , m 1 , . . . , m Np , t) + L e,amp pe (n 1 , . . . , n Np , N1 , . . . , N Ns , m 1 , . . . , m Np , t) + L e,Att pe (n 1 , . . . , n Np , N1 , . . . , N Ns , m 1 , . . . , m Np , t) + L e,nln pe (n 1 , . . . , n Np , N1 , . . . , N Ns , m 1 , . . . , m Np , t), where L e,att pe (n 1 , . . . , n Np , N1 , . . . , N Ns , m 1 , . . . , m Np , t) = γi [(n i + 1) pe (n 1 , . . . , n i + 1, . . . , n Np , N1 , . . . , N Ns , m 1 , . . . , m i − 1, . . . , m Np , t) − n i pe (n 1 , . . . , n Np , N1 , . . . , N Ns , m 1 , . . . , m Np , t)], L e,amp pe (n 1 , . . . , n Np , N1 , . . . , N Ns , m 1 , . . . , m Np , t) = ΔV (6.442) Ns j=1 1 × pe n 1 , . . . , n N p , N 1 , . . . , N j − , . . . , N Ns , m 1 , . . . , m Np , t ΔV − pe (n 1 , . . . , n Np , N1 , . . . , N Ns , m 1 , . . . , m Np , t) , (6.443) 6 Periodic and Disordered Media L e,Att pe (n 1 , . . . , n Np , N1 , . . . , N Ns , m 1 , . . . , m Np , t) Ns 1 aj Nj + = ΔV ΔV j=1 1 × pe n 1 , . . . , n N p , N 1 , . . . , N j + , . . . , N Ns , m 1 , . . . , m Np , t ΔV − N j pe (n 1 , . . . , n Np , N1 , . . . , N Ns , m 1 , . . . , m Np , t) , L e,nln pe (n 1 , . . . , n Np , N1 , . . . , N Ns , m 1 , . . . , m Np , t) Np Ns 1 K i j ni N j + pe n 1 , . . . , n i − 1, . . . , n Np , N1 , . . . , N j = ΔV ΔV i=1 j=1 1 + , . . . , N Ns , m 1 , . . . , m Np , t ΔV (6.445) − (n i + 1)N j pe (n 1 , . . . , n Np , N1 , . . . , N Ns , m 1 , . . . , m Np , t) . The initial condition related to the time origin is , pe (n 1 , . . . , n Np , N1 , . . . , N Ns , m 1 , . . . , m Np , t),t=0 = p(n 1 , . . . , n Np , N1 , . . . , N Ns , 0)δm 1 0 . . . δm Np 0 . Only the term (6.442) means some added phenomenology. It is assumed that an array of ideal detectors is available and, whenever a photon escapes from the ith mode, it is absorbed by the ith detector. By analogy with (6.438), we introduce the operators mˆ i = 1ˆ e ⊗ ∞ m 1 =0 ∞ m i =0 |m 1 , . . . , m Np det det m 1 , . . . , m Np |. m Np =0 We will introduce a shorthand notation kN nˆ k ≡ nˆ k11 . . . nˆ Npp , l Nˆ l ≡ Nˆ 1l1 . . . Nˆ NNss , rN mˆ r ≡ mˆ r11 . . . mˆ Npp . Considering the moments nˆ k Nˆ l mˆ r (t), we can rewrite equation (6.441) in the form of a hierarchy of equations d k ˆl r † nˆ N mˆ (t) = Lˆˆ e,att nˆ k Nˆ l mˆ r (t) + Lˆˆ †e,amp nˆ k Nˆ l mˆ r (t) dt † † + Lˆˆ nˆ k Nˆ l mˆ r (t) + Lˆˆ nˆ k Nˆ l mˆ r (t), e,Att Quantization in Disordered Media where † Lˆˆ e,att nˆ k Nˆ l mˆ r = − kN rN γi nˆ i nˆ k11 . . . nˆ Npp Nˆ l mˆ r11 . . . mˆ Npp kN rN − nˆ k11 . . . (nˆ i − 1ˆ e )ki . . . nˆ Npp Nˆ l mˆ r11 . . . (mˆ i + 1ˆ e )ri . . . mˆ Npp , (6.450) l 1ˆ e j l ˆLˆ † nˆ k Nˆ l mˆ r = ΔV k ˆ l1 ˆ P j nˆ N1 . . . N j + . . . Nˆ NNss mˆ r e,amp ΔV j=1 l − nˆ k Nˆ 1l1 . . . Nˆ NNss mˆ r , Ns † Lˆˆ e,Att nˆ k Nˆ l mˆ r = −ΔV l a j Nˆ j nˆ k Nˆ 1l1 . . . Nˆ NNss mˆ r l 1ˆ e j l − nˆ k Nˆ 1l1 . . . Nˆ j − . . . Nˆ NNss mˆ r , ΔV Np Ns ˆLˆ † nˆ k Nˆ l mˆ r = ±ΔV ˆ ˆ K i j (nˆ i + 1e ) N j ± nˆ k11 e,nln i=1 j=1 l 1ˆ e j k Np l 1 l ki ˆ ˆ ˆ . . . (nˆ i + 1e ) . . . nˆ Np N1 . . . N j − . . . Nˆ NNss mˆ r ΔV kN l ∓ nˆ k11 . . . nˆ Npp Nˆ 1l1 . . . Nˆ NNss mˆ r , (6.453) with † denoting the Hermitian conjugation and 1ˆ e denoting the identity operator. The moment equations (6.449) can be decoupled by various assumptions and approximations. First, we observe that the equations for the means nˆ i (t), Nˆ j (t) need those for the second moments nˆ i Nˆ j (t) Ns d ˆ Nˆ j (t), K i j (nˆ i + 1) nˆ i (t) = −γi nˆ i (t) + ΔV dt j=1 d ˆ ˆ Nˆ j (t). K i j (nˆ i + 1) N j (t) = P j − a j Nˆ j (t) − dt i=1 We neglect such a coupling by the factorizing approximation (Patra 2002, Rice and Carmichael 1994) nˆ i Nˆ j (t) ≈ nˆ i (t) Nˆ j (t). 6 Periodic and Disordered Media We introduce the shorthand n i (t) ≡ nˆ i (t), N j (t) ≡ Nˆ j (t), and m i (t) ≡ mˆ i (t), and, from now on, we consider the differential equations Ns d n i (t) = −γi n i (t) + ΔV [n i (t) + 1]K i j N j (t), dt j=1 d N j (t) = P j − a j N j (t) − [n i (t) + 1]K i j N j (t), dt i=1 d m i (t) = γi n i (t), dt and the appropriate initial conditions related to the time t0 and to the time origin, which give averages of the initial data of the unlinearized model if possible. Further, we introduce the variations δ nˆ i (t) = nˆ i −n i (t)1ˆ e , δ Nˆ j (t) = Nˆ j − N j (t)1ˆ e , δ mˆ i (t) = mˆ i − m i (t)1ˆ e , i = 1, . . . , Np , j = 1, . . . , Ns , of zero expectation values. Assuming that the equations (6.457), (6.458), (6.459) have been solved for the means n i (t), N j (t), m i (t), i = 1, . . . , Np , j = 1, . . . , Ns , and neglecting higher (third) moments, we derive approximate at most linear equations for the variances [δ nˆ i (t)]2 (t), [δ Nˆ j (t)]2 (t), [δ mˆ i (t)]2 (t) and the covariances δ nˆ i (t)δ nˆ j (t) (t), δ nˆ i (t)δ Nˆ j (t) (t), δ Nˆ j (t)δ Nˆ j (t) (t), δ nˆ i (t)δ mˆ j (t) (t), δ Nˆ j (t)δ mˆ i (t) (t), δ mˆ i (t)δ mˆ i (t) (t) from the hierarchy of moment equations. Also in the relations (6.460) and (6.461), we have written the argument t to the right of the angular brackets to indicate that the quantum description is still being carried out in the Schr¨odinger picture, even though some time dependence has been created by the subtracted means. 6.4.3 Semiclassical Approach Semiclassical approach consists in the equations of motion for the photon numbers in the eigenmodes 1, . . . , Np and the densities of excited atoms in the neighbourhoods U (0 j ), j = 1, . . . , Ns . These equations may be supplemented with those for the number of counts taken by the detectors. Stationary values of the means n i (t), N j (t) can be characteristic of stationary processes n i (t), N j (t), which can be obtained in the limit t0 → −∞. We also obtain the time independence of the variances [δn i (t)]2 (t), [δ N j (t)]2 (t) and the covariances δn i (t) δn j (t) , δn i (t)δ N j (t) , δ N j (t)δ N j (t) , and m i (t)=γi tn i (0). In this case, the number of modes above the threshold Nl is interesting, mode i being above the threshold if and only if n i (t) ≥ 2. Quantization in Disordered Media We will restrict ourselves to the Fano factor (Peˇrina 1991) which can be calculated as Np C Np C Fsugg = δm i (T )δm j (T ) i=1 j=1 Np C m i (T ) where the subscript sugg stands for suggested, T is the time needed to explore the 2 2 entire space inside the cavity, T = Ωπ 2Vc3 is chosen as a detection time, and c is the speed of light. As a short-T approximation, we obtain the usual formula for the Fano factor, in fact , , 1 d = (Fcorr − 1), (6.463) (Fsugg − 1),, dT T T =0 where Fcorr means the Fano factor, the subscript corr stands for correlated, which is given as Np Np C C Fcorr = Ti T j δn i (0)δn j (0) i=1 j=1 Np C Ti n i (0) with the transmission probability Ti = γi T , 0 ≤ Ti ≤ 1. With respect to reduced information on an individual random laser, the problem becomes a stochastic problem, an ensemble of cavities with small variations in a shape or scatterer positions being considered. The coefficients γi and K i j thus become random quantities. The coefficients γi are independent and identically distributed, the probability density is P({γi }) = P(γi ), where P(γi ) = √ γi 1 e− 2γ , 2π γi γ which is the gamma or the Porter–Thomas distribution (Porter 1965) with the mean loss rate of a cavity γ = TT , T being the mean transmissivity of a cavity, T = 16π 2 d 6 Ω6 . The expectation values γ i ≡ γi , i = 1, . . . , Np , are equal to the mean c6 loss rate of the cavity, γ i = γ . K i j , i = 1, . . . , Np , j = 1, . . . , Ns , are independent of γi , and they are distributed as squared moduli of elements of a random unitary matrix. We hope to be consistent with Patra (2002), even after the correction of its formula (16). 6 Periodic and Disordered Media As an illustration, a laser with a cavity supporting ten modes has been considered, where one mode is coupled out much less than the others, i.e., γ1 = 0.01 = g, γ2 = · · · = γ10 = 0.1, K i j = 0.1 for all i, a1 = · · · = a10 = 1. It can be accepted that some characteristics of a random laser fluctuate not very much about the ensemble means. Such characteristics are the scaled Fano factor F−1 of the lasing mode (γi = g) g F −1 =T g (δn i )2 −1 ni and the number of modes above threshold Nl . In computer simulations, the scaled Fano factor may and may not fluctuate very much about the conditional mean | Nl when the probability of Nl , p(Nl ), is small or not as can be seen from F−1 g Fig. 6.4. Fig. 6.4 The conditional mean of the scaled Fano factor in the lasing mode in the dependence on the number of modes above the threshold (curve a) and the probability of the number of modes above the threshold (curve b) for cavities with a fixed number Np = Ns modes In this figure, the conditional mean increases in dependence on Nl (curve a). The irregularity at 7–10 is likely due to the simulation. The expectation value of Nl is about 3 (curve b). In the simulation Np = Ns = 10, γ = 0.125, and P1 = · · · = PNs = P, P = 30. | γg for the lasing mode is The simulation study of the conditional mean F−1 g g more difficult than the previous one, because γ is a continuous variable. Denoting the probability density of g with P (1) (g), we note that γg has the probability density γ P(1) (g). The calculated values of the conditional means are plotted in Fig. 6.5 as the curve a and the probabilities g+0.025 γ g P (1) P(1) (g ) dg (6.468) = γ g are depicted as the curve b. The distribution of γg indicates that the conditional mean may be calculated well only for the smallest values of γg , and it is estimated worse Quantization in Disordered Media Fig. 6.5 The conditional mean of the scaled Fano factor in the lasing mode in the dependence on γg (curve a) and the probability of γg in bins of length of 0.025 (curve b) for cavities with a fixed number Np = Ns modes In Fig. 6.6 the scaled Fano factors Fcorrg −1 and suggg are plotted (curves a, c, e and b, d, f, respectively) in dependence on T ∈ [0, 5] for P = g, 100.2 g, 100.4 g (pairs (a, b), (c, d), (e, f)). The straight lines depicting Fcorrg −1 are tangent to the respective curves Fsugg −1 g at the origin. Fig. 6.6 The scaled Fano F −1 factors Fcorrg −1 and suggg (curves a, c, e and b, d, f, respectively) in the dependence on T ∈ [0, 5] for P = g, 100.2 g, 100.4 g (pairs (a, b), (c, d), (e, f)), Considering the master equation in the Lindblad form, we have derived rate equations for the probability distributions describing “classical” state of the random laser. From this, using standard approximations, we have rederived well-known equations for the means and linear equations for the correlators. Both for traditional and random lasers, the Fano factor has been proposed based on open systems theory. Comparison of this proposal with the usual Fano factor has been made in the traditional laser. In the random laser, the scaled Fano factor in the lasing mode has been averaged in dependence on the number of modes above threshold and, alternatively, in dependence on the scaled loss rate. Fu and Berman (2005) have complemented the Green function approach to the spontaneous emission from an atom embedded inside a disordered dielectric (Fleischhauer 1999). The virtual cavity result has been reproduced using an amplitude approach which has been extended to second order. 6 Periodic and Disordered Media 6.5 Propagation in Amplifying Random Media The light propagation in disordered media is contiguous with the concept of light transport analogous to the electron transport in the condensed-matter physics, and its description is of importance for the experimental study of random lasers. Quantum statistical properties of light are determined in disordered media. Models of a random laser with incoherent and coherent feedback are mentioned and it is stated that, in the framework of such models, photon statistics of optical modes were determined. Both localization and laser theories which were developed in the 1960s have been jointly utilized in the study of a random laser. They have been used in studies of strongly scattering gain media. Lasing in disordered media has been a subject of intense theoretical and experimental studies. Random lasers have been classified into incoherent and coherent random lasers. Research works on both types of random lasers have been surveyed in the monographic chapter (Cao 2003). In order to explain quantum-statistical properties of random lasers, quantum theory is needed. Standard quantum theory for lasers applies only to quasidiscrete modes and cannot account for lasing in the presence of overlapping modes. In a random medium, the character of lasing modes depends on the amount of disorder. Weak disorder leads to a poor confinement of light and to strongly overlapping modes. Statistics naturally belongs to the theory of amplifying random media (Beenakker 1998, Patra and Beenakker 1999, 2000, Mishchenko et al. 2001), which is restricted to linear media and has not been used for the description of random lasers above lasing threshold (Cao 2003). The random laser model of Patra (2002, 2003), who calculated more than only statistics of the photon number, has been completed (Lukˇs and Peˇrinov´a 2003). In the framework of the open systems theory, the equations of motion involve those for numbers of photons absorbed by detector. This extension corrects the photon-number statistics. Hackenbroich et al. (2002) have developed a quantization scheme for optical resonators with overlapping (nonorthogonal) modes. Cheng and Siegman (2003) have derived a generalized formalism of radiation-field quantization which need not rely on a set of orthogonal eigenmodes. True eigenmodes of a system will be nonorthogonal and the method is intended for quantization of an open system which contains a gain or loss medium. 6.5.1 Strongly Scattering Media John (1984) has proposed a range of wavelengths or frequencies, in which electromagnetic waves in a strongly scattering disordered medium undergo the Anderson localization (Anderson 1958, Abrahams et al. 1979). Although the derivation is conducted in d = 2 + ε dimensions, in consequence it holds that the photon mobility 1 (d = 3), where l is the photon elastic mean free path. edge ω∗ is as (ω∗l)2 2π The range of wavelengths has the property λ ∼ l. In the case where λ l, early experiments showed that the intensity of light scattered from a concentrated suspension of latex microspheres in water presented Propagation in Amplifying Random Media a sharp peak centred at the backscattering direction (Kuga and Ishimaru 1984, van Albada and Lagendijk 1985, Wolf and Maret 1985). This peak is a coherence effect, which is present in disordered media. Akkermans et al. (1986) have analyzed the multiple scattering of light to explain the peak line shape. The explanation is based on the constructive interferences between time-reversed paths of light in a semi-infinite medium. The intensity reflected in directions distant by more than one degree is almost constant and is incoherent. The analysis is not restricted to scalar waves. It respects that polarization was analyzed parallel and perpendicular to the incident one. John et al. (1996) have contributed to the field of optical tomography (see, e.g., Huang et al. (1991) and references in John et al. (1996)). They assume a wave of frequency ω and velocity c. They recall that a simplified view of a photon as a quantum mechanical particle leads to the use of the Wigner coherence function, which is r r d3 r. (6.469) E R− I (R, k) = exp(ik · r) E ∗ R + 2 2 ensemble The authors let I0 (R, k) denote the source coherence function. On choosing R , k , the source I0 (R, k) = δ(R − R )δ(k − k ) “radiates” a coherence function Γ(R − R ; k, k ), which is called a propagator. The Wigner coherence function can be measured (Raymer et al. 1994). Here we must also refer to John and Stephen (1983) and McKintosh and John (1989) for reviews of the theory. In what follows we mention only the description of a homogeneously disordered dielectric material. Its statistical properties are given by the ensemble-averaged autocorrelation function Bh (r − r ) = k04 h∗ (r)h (r ) ensemble , where h stands for homogeneous and k0 = ωc . The Fourier transform is B˜ h (q) = exp(−iq · r)Bh (r) d3 r. They define 1 Γ˜ h (Q; k , k) = 4 k0 exp(−iQ · R)Γh (R; k , k) d3 R. In the field of optical tomography, conventional radiative transfer theory has been applied (Case and Zweifel 1976). In this theory, the (time-independent) specific light intensity I c (R, k) (c means conventional) of a homogeneous medium without absorption obeys the following phenomenological Boltzmann transport equation (Case and Zweifel 1976) (k · ∇R )I c (R, k) + kσ (k)I c (R, k) d3 k , = I0c (R, k) + k σ (k → k)I c (R, k ) (2π )3 6 Periodic and Disordered Media where σ (k) and σ (k → k) are the total and angular scattering coefficients, respectively, that satisfy the relation d3 k σ (k → k) = σ (k). (6.474) (2π)3 In (6.473), I0c (R, k) is the source specific intensity. Replacing k by k , I0c (R, k) by (2π)3 δ(R−R )δ(k−k ), and I c (R, k) by Γch (R−R ; k, k ), we obtain the equation for the Green function for the specific light intensity. Its Fourier transform Γ˜ ch (Q; k, k ) satisfies the equation ik · QΓ˜ ch (Q; k, k ) = (2π )3 δ(k − k ) d3 k1 − kσ (k)Γ˜ ch (Q; k, k ). + k1 σ (k1 → k)Γ˜ ch (Q; k1 , k ) (2π )3 (6.475) From the optical coherence theory, it follows that 2k · QΓ˜ h (Q; k, k ) = ΔG k (Q)(2π )3 δ(k − k ) d3 k1 + ΔG k (Q) B˜ h (k − k1 )Γ˜ h (Q; k1 , k ) (2π )3 3 d k1 ˜ − ΔG k1 (Q) B˜ h (k − k1 ) Γh (Q; k, k ), (2π )3 where Q Q ΔG k (Q) = G + k + − G− k − , 2 2 1 , G ± (k) = 2 2 k0 − k − Σ± (k) with Σ± (k) = B˜ h (k − q) d3 q . − q 2 − Σ± (q) (2π )3 A comparison of (6.475) with (6.476) may be made in a thorough and expert manner. Even though the coherent backscattering effects have not been included in John et al. (1996), they may be described using the results of MacKintosh and John (1989). On considering multiple light scattering near an inhomogeneity, the analysis of the propagation and measurement of the Wigner distribution function may enhance the resolving power of optical tomography. A formal analogy between wave propagation in a multiple scattering medium with a statistical inhomogeneity and the quantum mechanical scattering of a particle by a localized potential has been explored. Propagation in Amplifying Random Media Bruce and Chalker (1996) have generalized the treatment of quasi-onedimensional systems due to Dorokhov (1982) and Mello et al. (1988b) for it to include absorption. Interestingly, they recall that it is not easy to obtain transmission properties. They consider a waveguide or optical fibre, along which N modes can propagate in each direction. Slightly deviating from them, we let a(L) and b(L) mean vectors of wave amplitudes (N ×1 matrices) for the right-hand propagation and the left-hand propagation, respectively. Here L stands for a propagation distance. With respect to the disordered medium, the coupled modes are described by stochastic differential equations, which we write in the matrix form a(L + δl) a(L) − b(L + δl) b(L) 1 0 x y 2 a − μ = iμ 0 −1 −y∗ −x∗ 2 . 1 2 x y a(L) − μ . (6.479) −y∗ −x∗ b(L) 2 √ Here μ = δl, with δl > 0 an infinitesimal length, a parametrizes the strength of absorption, 1 is an N × N unit matrix, x ≡ x[L ,L+δl] is a random N × N Hermitian matrix, and y ≡ y[L ,L+δl] is a random N × N symmetric matrix. The two matrices x[L 1 ,L 2 ] , y[L 1 ,L 2 ] are statistically independent of the two matrices x[L 3 ,L 4 ] , y[L 3 ,L 4 ] , when the intervals [L 1 , L 2 ] and [L 3 , L 4 ] do not overlap. Considering again x ≡ x[L ,L+δl] and y ≡ y[L ,L+δl] , the elements xαβ , yαβ are random variables, which are specified in terms of the first and second moments, xαβ = yαβ = 0, xαβ yγ δ = 0, yαβ yγ δ = 0, δαγ δβδ δαγ δβδ + δαδ δβγ , yαβ yγ∗ δ = . xαβ xγ∗ δ = N N +1 A general solution of equations (6.479) has the form a(L 0 ) a(L) vFFf (L|L 0 ) vFBf (L|L 0 ) , = vBFf (L|L 0 ) vBBf (L|L 0 ) b(L 0 ) b(L) where L 0 , L 0 ≤ L, is another propagation length and subscripts F and B mean forward (to the right) and backward (to the left), cf. Section 6.2. Now we introduce the matrix τ ρ , (6.482) ρ τ with −1 (L 1 |L)vBFf (L 1 |L), τ = vFFf (L 1 |L) − vFBf (L 1 |L)vBBf −1 ρ = vFBf (L 1 |L)vBBf (L 1 |L), 6 Periodic and Disordered Media −1 ρ = −vBBf (L 1 |L)vBFf (L 1 |L), −1 τ = vBBf (L 1 |L), where L 1 ≡ L + δl and a reflection matrix for waves incident from the right −1 r (L) ≡ vFBf (L|0)vBBf (L|0). From the stochastic differential equation (6.479), it follows that 10 x y τ ρ = + iμ ρ τ 01 y∗ x∗ 1 2 x y 2 10 2 −μ a − μ . 01 y∗ x∗ 2 The authors assume the increase of the system length L by δl. As the new reflection matrix r1 is given by a relatively simple relation r1 = ρ + τ (r + r ρr + . . .)τ , where r1 ≡ r (L 1 ), r ≡ r (L), the authors derive that r1 = r + iμ(y + xr + r x∗ + r y∗ r ) 1 ∗ 1 ∗ 2 2 ∗2 ∗ + μ −2ar − (yy + x )r − r (y y + x ) − xr x . 2 2 On subtracting r from each side and dividing by μ2 , we obtain a usual idea of a stochastic differential equation. The mathematical notation does not require even the division by μ2 . It is relatively easy to derive the stochastic differential equation for Λ, the diagonal matrix of eigenvalues Λα of the matrix r† r . The authors introduce λα as Λα . The joint probability distribution of the set {λα }, W (λ1 , λ2 , . . . , λ N , L), λα = 1−Λ α evolves with the system length L according to the Fokker–Planck equation N 2 ∂ ∂W = λα (1 + λα ) ∂L N + 1 α=1 ∂λα ⎡ ⎤ 1 ∂ W ⎣ ⎦ × W + 2a(N + 1)W + . λ − λα ∂λα β,β=α β This equation has a stationary solution in the limit of long samples (L → ∞). Without absorption, this limit is trivial: = rmn Umk Unk , Propagation in Amplifying Random Media where Umn are elements of the matrix U that has a uniform distribution on the unitary group U(N ). They present a result equivalent to relation (6.503) by Beenakker et al. (1996). Brouwer and Beenakker (1996) have expounded a diagrammatic method for averaging over the circular ensemble of random-matrix theory. The role of the circular ensemble of unitary matrices in the scattering matrix approach has been respected. The method has been modified to the ensemble of uniformly distributed unitary symmetric matrices, which is referred to as the circular orthogonal ensemble. Even though these matrices are of the form U = VVT , with the matrix V uniformly distributed over the unitary group, this efficient method is available. The results have been extended to unitary matrices of quaternions. Brouwer and Beenakker (1996) have applied the method to two types of mesoscopic systems in the condensed-matter physics. In the article (Beenakker 1997), the author concentrates himself on two types of mesoscopic systems in the condensed-matter theory. In conclusions, he mentions that the propagation of electromagnetic waves through a waveguide is the optical analogue of conduction through a wire. In the problem of localization by disorder, the analogy is incomplete. The (relative) dielectric constant (x, y) not only fluctuates around unity but is always positive. Potentials V greater than the Fermi energy have no optical analogue. One new aspect of the optical problem is the behaviour in the case that the dielectric constant has a nonzero imaginary part. The intensity of radiation which has propagated over a distance L is multiplied by a factor exp(σ L), with σ negative (positive) for absorption (amplification). Here√the growth rate σ is related to the dielectric constant by the relation σ = −2k Im . The Dorokhov–Mello–Pereyra– Kumar equation, which applies to the conduction through a wire, is accordingly generalized. Another new aspect of the optical problem is the frequency dependence 2 of the term ωc (k02 ) in the Helmholtz equation, whereas an energy-dependent potential does not occur in the mesoscopic systems. van Rossum and Nieuwenhuizen (1999) have provided a discussion of the propagation of waves in random media. The description of radiation transport respects three length scales: macroscopic, mesoscopic, and microscopic. The diffusion theory presents the diffusion approximation at the macroscopic level. Important corrections are calculated with the radiative transfer equation, which describes intensity transport at the mesoscopic level and is derived from the microscopic wave equation. A precise treatment of the diffuse intensity includes the effects of boundary layers. Situations such as the enhanced backscattering cone and imaging of objects in opaque media are also discussed in the cited work. Mesoscopic correlations between multiple scattered intensities are introduced. The correlation functions and intensity distribution functions are derived. Guo (2002) has modelled the propagation of a plane scalar wave through a uniform dielectric slab using the multiple scattering method. This approach can be used to model light propagation in stratified media, which represents also one-dimensional photonic crystals. The multiple scattering method can easily be 6 Periodic and Disordered Media generalized to treat light propagation in nonuniform media, such as light propagation in random media. The scalar approximation is based on the assumption that the transverse electric waves are propagated. An incident plane wave, E inc (r) = eik0 ·r , with a wave vector k0 , may be a component of an incident pulse. The corresponding total field E(r) may be a component of the corresponding diffracted, transmitted, and reflected pulses. The total field E(r) obeys the following integral equation (in Gaussian units), (6.490) E(r) = E inc (r) + 4πk02 G 0 (r, r1 )χ (r1 )E(r1 ) d3 r1 , σ where G 0 (r, r1 ) = 1 eik0 |r−r1 | . 4π|r − r1 | The subscript σ indicates exclusion of a small volume surrounding the position at r. Guo (2002) has assumed that the slab is formed by uniformly distributed dipoles. He has concentrated on the resonant scattering case, where the complex electric permittivity is pure imaginary and energy is lost. In mathematics, random motions in random media are treated (Bolthausen and Sznitman 2002). Lectures are restricted to discrete models. Then a one-dimensional model of diffusion in a constant medium is the nearest neighbour random walk xn , xn assuming integer values. Here are two ways to introduce the medium randomness in a simple random walk. (i) The probabilities of jumping to the right neighbour are chosen as independent identically distributed random variables p(x), 0 ≤ p(x) ≤ 1, x being an integer. (ii) The probabilities of jumping to the right neighbour are given by the relations cx,x+1 , (6.492) p(x) = cx−1,x + cx,x+1 where cx,x+1 are independent identically distributed random variables, cx,x+1 > 0. In disordered media physics, rather, this model occurs (Bolthausen and Sznitman 2002, Hughes 1996). 6.5.2 Incoherent and Coherent Random Lasers Gouedard et al. (1993) have developed mirrorless light sources based on heavily doped neodymium materials pumped by nanosecond laser pulses. These sources generate quasimonochromatic short pulses and present characteristics of spatial and temporal incoherence. Such devices may find applications in domains such as holography, transport of energy in fibres for medical applications, and laser inertial confinement fusion. A result of their study is the existence of a threshold below which amplified spontaneous emission occurred in one of two compounds mentioned by the authors. Above the threshold the specific behaviour has been found. This behaviour was reported by Ter-Gabrielyan et al. (1991). Propagation in Amplifying Random Media The authors have considered two schemes of the origin: 1. Many different microcrystallites are lasing sequentially in very short pulses. 2. The grains of the powder emit collectively due to distributed feedback provided by multiple scattering. Scheme 2 may be related to the photon localization effect, but this has not been proved or disproved in Gouedard et al. (1993). Second-harmonic generation in strongly scattering media has been investigated by Kravtsov et al. (1991). Lawandy et al. (1994) have opined that composite systems may have spectral and temporal properties characteristic of a multimode laser oscillator, even though these systems do not comprise an external cavity. They have investigated a laser dye dispersed in a strongly scattering medium. This medium was a colloidal suspension of titanium dioxide particles. Colloidal solutions were optically pumped by linearly polarized 532-nm radiation. Either single 7-ns-long pulses or a 125-ns-long train of nine 80-ps-long pulses were used. Most experiments were performed using the long pulses. The presence of the TiO2 nanoparticles led first to a larger emission linewidth, but when the energy of the excitation pulses was increased, the emission at 617 nm grew rapidly and the line narrowed. The emission at the peak wavelength was studied as a function of the pump pulse energy for four different TiO2 nanoparticle densities. A threshold of the pump energy has been observed. This threshold is obvious in plots of emission linewidth as a function of the pump pulse energy when the TiO2 nanoparticle concentration is varied. Excitation with a train of 80-ps pulses demonstrated a threshold behaviour in the temporal characteristics of the colloid. When the pump pulse energy was increased beyond the threshold, the response was a sharp peak. The authors also concluded that a theory for this process did not exist. The literature cited by them required that every dimension of the sample be large compared to the optical scattering length. Sha et al. (1994) performed similar experiments. Single 3-ns-long laser pulses were used. A series of spectral experiments were carried out with a density of 5 × 1011 cm−3 of TiO2 nanoparticles. When the pump pulse energy was varied, a threshold at 620 nm and a possible one at 650 nm were demonstrated. The highest dye concentration exhibited a reduction of the lasing threshold, when the density of scattering particles was increased. For high particle density from 5 × 1011 to 2.5 × 1012 cm−3 the lowest dye concentration revealed the threshold of 0.17 mJ, higher than 0.07 mJ for this concentration in the neat solvent. Temporal characteristics of the response have been investigated as well. Above the threshold, the bandwidth of emission and the temporal width of the emitted pulses are narrowed. Also these authors stated that the exact mechanism for this process had not yet been explained. Wiersma et al. (1995a) declared that Lawandy et al. (1994) had prepared only amplified spontaneous emission. For the pump geometry used the amplified spontaneous emission proceeds perpendicular to the propagation direction of the pump pulse (side emission). It can be achieved that the amplification is negligible parallel to this direction. The paper under criticism does not record the side emission. 6 Periodic and Disordered Media Addition of the TiO2 particles brings about scattering or the detection of some amplified spontaneous emission light. Lawandy and Balachandran (1995) have mainly presented data that clearly show that for a fixed pump pulse energy there exists a threshold scatterer concentration for both side and front emissions. For the front emission it can be demonstrated that the threshold pump pulse energy decreases with increasing particle density. Wiersma et al. (1995b) performed coherent backscattering measurements from amplifying random media. They used optically pumped Ti:sapphire powders. They have found that the light intensity as a function of the angular distance from the exact backscattering direction exhibits a top, which sharpens with increasing gain. They have solved the stationary diffusion equation with gain (Davison and Sykes 1958, Letokhov 1968) and have used an approach by Akkermans et al. (1986) to obtain the coherent component of the backscattered intensity. Beenakker et al. (1996) have recognized the notion of a random laser. Letokhov (1968) called it a “laser with incoherent feedback”, but randomness admits the “last coherence effect that survives” (Akkermans et al. 1986). It is appropriate when the model includes an illuminated area, S. The authors associate the number of modes N λS2 with it, where λ is a wavelength of the incident light. The authors were in fact motivated by a paper of Pradhan and Kumar (1994) on the case N = 1 and have generalized it. The reflection of a monochromatic plane wave (frequency ω, wavelength λ) by a slab (thickness L, transverse dimension W ) is considered, which represents a disordered medium (mean free path l). This medium either amplifies or absorbs the radiation. For a statistical description an ensemble of slabs with different configurations of scatterers is considered. The authors let σ mean the amplification per unit length. A negative value of σ indicates absorption. The parameter γ = σ l is the amplification per mean free path. The treatment is limited to scalar waves. It is assumed that the slab is embedded in an optically passive waveguide without disorder. We let N be the number of modes which can propagate in the waveguide at frequency ω. The modal functions are normalized such that each mode carries unit power. The N × N reflection matrix r has the elements rmn , rmn means the amplitude of a wave reflected into mode m from an incident mode n. The matrix r is symmetric, r T = r, and its singular-value decomposition is a product of U, the diagonal form of the matrix r, and UT . Here U is a unitary matrix, is specific that they assume the which has elements Umn (Mello et al. 1988a). It √ reflection eigenvalues to be nonnegative. They let Rn , n = 1, 2, . . . , N mean the singular values. Then + (∗) Umk Unk Rk . (6.493) rmn = k The difference between the symmetric matrix r and the Hermitian one r is signed by the omission and the use of an asterisk, respectively. The unit power of the mode n is amplified (or reduced) to the value an = |rmn |2 . Propagation in Amplifying Random Media The statistical calculation uses the assumption that W L and U is uniformly distributed in the unitary group. As a consequence, an has a distribution independent of n. Further it is assumed that λ l, λ σ1 . On introducing μn = 1 , Rn − 1 the distribution P(μ1 , μ2 , . . . , μ N , L) obeys the Fokker–Planck equation ⎧ ⎡ ⎤ ⎫ N ⎬ 1 2 ∂ ⎨ ∂P + γ (N + 1)⎦ P = l μi (1 + μi ) ⎣ ⎭ ∂L N + 1 i=1 ∂μi ⎩ μ j − μi j, j=i + N ∂P 2 ∂ μi (1 + μi ) , N + 1 i=1 ∂μi ∂μi J with the initial condition P| L=0 = N i δ(μi + 1). ¯ 2 , it can be derived that On introducing the notation a¯ ≡ an , var a ≡ (an − a) l l d ¯ a¯ = (a¯ − 1)2 + 2γ a, dL d 2 ¯ a¯ − 1)2 . var a = 4(a¯ − 1 + γ ) var a + a( dL N (6.497) (6.498) Equation (6.498) has been derived for large N . Equation (6.497) is in agreement with Selden (1974). The initial conditions for Equations (6.497) and (6.498) are ¯ a(0)=0, var a(0)=0, respectively. In the case of absorption (γ < 0) and in the limit L → ∞ the solution, which we do not quote here, yields + (6.499) a¯ ∞ = 1 − γ − γ 2 − 2γ , var a∞ = 1 a¯ ∞ (1 − a¯ ∞ )2 , 2N 1 − γ − a¯ ∞ which is a stationary solution of equations (6.497) and (6.498) obviously. In the case of amplification (γ > 0), the condition L < L c must be fulfilled, where L c is a critical length, arccos(γ − 1) Lc = l + . 2γ − γ 2 At this length a¯ and var a diverge. It does not imply that a probability distribution of an independent of n, which has the density 0 1" ∗ 1 Unk Unk P(a, L) = δ a − 1 − , (6.502) μk k does not exist for L ≥ L c . It can be characterized by a modal value amax . 6 Periodic and Disordered Media The stationary solution of equation (6.496) is the Laguerre ensemble (Bronk 1965) P(μ1 , μ2 , . . . , μ N , ∞) = C exp[−γ (N + 1)μi ] |μ j − μi |, (6.503) i i< j where C is a normalization constant. The density " ρ(μ) = δ(μ − μi ) is introduced, which in the large-N limit yields N 2γ 2 ρ(μ) = − γ 2, 0 < μ < . π μ γ The average in (6.502) consists in the average of Unk ’s over the unitary group followed by the average of the μk ’s over the Laguerre ensemble (6.503). The pertinent result is known (Dyson 1962), even though only for large N , and it is an inverse Laplace transform. It has been found that the modal value of the distribution of an independent of n is amax =1 + 0.8γ N . Predictions of random-matrix theory have been compared with numerical simulations of the analogous electronic Anderson model with a complex scattering potential. John and Pang (1996) have determined the emission intensity properties of a model dye system, which is immersed in a multiply scattering medium with transport mean free path l ∗ . Since they considered a rhodamine 640 dye solution based on the literature, they respected the emission at 620 and 650 nm. They assumed that the dye solution with scattering titanium particles fills the whole sample region between the two planes z = 0 and z = L. They have used a generic dye laser scheme (Svelto and Hanna 1977, 1989) that explains bichromatic emission from the singlet states and the triplet states. The description comprises laser rate equations for the singlet states and those for the triplet states. The intensity of the pumping / beam is ), where Iinc is the intensity at z = 0 and l z = l ∗ l3a , with of the form Iinc exp( −z lz la the absorption length. Those equations are completed with a diffusion equation for the photon flux. A nonlinear diffusion equation for a dimensionless intensity is presented, from which populations of dye molecules in singlet and triplet states are eliminated. The emission spectra at nine different pump intensities agree with experiments (Sha et al. 1994). The linewidth and emission intensity at 620 nm as a function of pump intensity for different values of transport mean free path l ∗ are consistent with observations (Balachandran and Lawandy 1995, Lawandy et al. 1994). Wiersma and Lagendijk (1996) have confirmed that they set high standards upon random lasers. In their paper they have presented calculations on light diffusion with amplification and have reported on experiments. In a random medium light is multiply scattered. The relevant length scales that describe the scattering process are the scattering mean free path ls defined as the Propagation in Amplifying Random Media average distance between two scattering events and transport mean free path l defined as the average distance a wave travels before its propagation direction randomizes. In an amplifying random medium it is necessary to define two more length scales: the gain length lg and amplification length lamp . The gain length is defined as the path length over which the intensity is multiplied by a factor e = exp(1). The amplification length is defined as the (rms)/distance between the beginning ll g . A sample of an amplifying and ending points for paths of length lg , lamp = 3 random medium in the form of a slab has been studied. Light and the amplifying medium become unstable if the thickness L is above the critical thickness L cr , L cr = πlamp . Wiersma and Lagendijk (1996) have assumed the laser material to be a four-level system. They have considered an incident pump and probe pulse and spontaneous emission. They have described their system with coupled differential equations. The set is formed by three diffusion equations and the rate equation for the concentration N1 (r, t) of laser particles in a metastable state. The first two diffusion equations are linear relative to the energy densities of the pump light and the probe light. External fields in those equations are the intensities of the incoming pump and probe pulses, respectively. The boundary condition is the vanishing of the energy densities outside the slab at the distance z 0 ≈0.7l from the surfaces of the slab. The authors have been able to calculate the outgoing flux at either the front or rear surface of the slab, which is determined by the gradient of the energy density at the sample surface. Experiments have demonstrated that it is possible to realize an amplifying random medium. Wiersma and Lagendijk (1996) have distinguished three regimes depending on the amount of 1. Weak scattering and gain. If l is of the order of the sample size, one says that the scattering is very weak. Addition of scatterers lifts a directionality in the output of the system, cf. (Wiersma et al. 1995a). It is assumed that l continues to be of the order of the sample size. 2. Modest scattering and gain. If λ l L, one says that the scattering is temperate. The calculations have shown that modest scattering with gain can lead to a pulsed output, cf. (Gouedard et al. 1993). 3. Strong scattering and gain. If l is smaller than or equal to the wavelength λ, one says that the scattering is strong. In this regime the Anderson localization of light is expected to occur. Around 1996 there was no experimental evidence for the photonic Anderson localization. Paasschens et al. (1996) have studied the propagation of radiation through a disordered waveguide with a complex dielectric constant . They have called the systems dual, which differ only in the sign of the imaginary part of . In the case of the scattering matrix S= r t t r 6 Periodic and Disordered Media they have introduced transmittances T , T and reflectances R, R 1 1 Tr{tt† }, R = Tr{rr† }, N N 1 1 † † Tr{t t }, R = Tr{r r }. T = N N T = They have let Tn mean the eigenvalues of the matrix tt† (all of them depend on L) and have introduced their localization lengths ξn with the properties 1 1 = − lim ln[Tn (L)]. L→∞ L ξn it holds that The decay length ξ is defined as ξ = max(ξ1 , ξ2 , . . . , ξ N ). For L → ∞√ ξ (σ ) = ξ (−σ ), where a dependence on σ is indicated, σ = −2k Im 1 + i Im , with k the free-space wave number of the radiation. In the case N = 1, the authors have presented also the Fokker–Planck equation for the joint probability distribution P(R, T, L) of the reflectance R and transmittance T . They have included the familiar relation for P(R, L = ∞) and have introduced γ = σ l. Since T diverges for γ > 0 at the lasing threshold L c ≈ lc|γ(γ| ) , where c(γ ) = C + (ln 2)γ − e2γ Ei(−2γ ), Dx t where C is Euler’s constant and Ei(x) = −∞ et dt is the exponential integral, ln T (N ≥ 1) and var ln T (N = 1) are studied. The sum ln T + ξL0 , where ξ0 = (N + 1) 2l is the localization length for σ = 0, is mostly negative in consequence of the approximate equality ln T + Lξ ≈ 0 and the observation that ξ ≤ ξ0 . Beenakker (1998) has generalized results concerning amplified spontaneous emission from a random medium for them to include thermal radiation. He has applied the method of random-matrix theory (Mehta 1991) to quantum optics. He thinks of a linear amplifier as a system in thermal equilibrium at a negative temperature (Jeffers 1993, Matloob et al. 1997). It is assumed that radiation, maybe no photons, comes into a random medium via an N -mode waveguide. The radiation is transformed in the random medium and goes out from it via the same waveguide. Annihilation operators a˜ˆ nin (ω), a˜ˆ nout (ω), b˜ˆ n (ω), c˜ˆ n (ω), n = 1, 2, . . . , N (ω), are introduced. They satisfy the commutation relations ˆ [a˜ˆ n (ω), a˜ˆ m (ω )] = 0, ˆ [a˜ˆ n (ω), a˜ˆ m† (ω )] = δnm δ(ω − ω )1, ˜ˆ c˜ˆ . The operators a˜ˆ in (ω) are related to the incoming modes and for a˜ˆ = a˜ˆ in , a˜ˆ out , b, n out the operators a˜ˆ n (ω) describe the outgoing modes, b˜ˆ n (ω) and c˜ˆ n (ω) are quantum noises for absorption and amplification, respectively. Propagation in Amplifying Random Media On introducing ⎛ out ⎞ ⎞ a˜ˆ 1in (ω) a˜ˆ 1 (ω) ⎜ ⎜ ⎟ ⎟ a˜ˆ in (ω) = ⎝ ... ⎠ , a˜ˆ out (ω) = ⎝ ... ⎠ , a˜ˆ in a˜ˆ out N (ω) N (ω) ⎞ ⎛ ⎛ † ⎞ ˜bˆ (ω) c˜ˆ 1 (ω) 1 ⎟ ⎜ ⎜ . ⎟ . ⎟ ˜† ˜ˆ b(ω) =⎜ ⎝ .. ⎠ , cˆ (ω) = ⎝ .. ⎠ , † c˜ˆ (ω) b˜ˆ N (ω) the input–output relations take the form ˜ˆ a˜ˆ out (ω) = Sa˜ˆ in (ω) + Qb(ω) + Vc˜ˆ † (ω), where S, Q, V are matrices, which satisfy the conditions QQ† − VV† = 1 − SS† , where 1 is the unit matrix. Besides zero mean values, the quantum noises have the properties b˜ˆ n† (ω)b˜ˆ m (ω ) = δnm δ(ω − ω ) f (ω, T ), c˜ˆ n† (ω)c˜ˆ m (ω ) = δnm δ(ω − ω ) f (ω, T ), where T is the temperature and f (ω, T ) = exp 1 ω kB T The negative temperature is obtained according to the relation − [ f (ω, T ) + 1] = f (ω, −T ). The author also mentions the photodetection theory: The probability that n photons are counted in a time t is given by the relation p(n) = 1 ˆ ) : , ˆ n exp(−W : W n! where : : means the normal ordering of the operators inside and t ˆ (t) = aˆ out† (t )ˆaout (t ) dt , W with 1 aˆ out (t) = √ 2π ∞ 0 e−iωt a˜ˆ out (ω) dω. 6 Periodic and Disordered Media The generating function F(ξ ) of factorial cumulants -∞ . n ˆ (t)] : F(ξ ) = ln (1 + ξ ) p(n) = ln : exp[ξ W is introduced. Here factorial means that the connection of usual cumulants with moments of the photon–number distribution is applied to the factorial moments ˆ (t) stands for the integrated intensity. The factorial of the distribution p(n) and W cumulants are , d p F(ξ ) ,, , p = 1, 2, . . . , ∞. (6.521) κp = dξ p ,ξ =0 If ωc t 1, where ωc is the frequency interval within which SS† does not vary significantly, then ∞ dω ln det[1 − (1 − SS† )ξ f (ω, T )] , (6.522) F(ξ ) = −t 2π 0 where det indicates the determinant. If Ωc t 1, where Ωc is the frequency range over which SS† differs appreciably from the unit matrix, then ∞ dω F(ξ ) = − ln det 1 − t (1 − SS† )ξ f (ω, T ) . (6.523) 2π 0 The long-time limit depends only on the set of eigenvalues σ1 , σ2 , . . . , σ N of SS† (Beenakker 1998). These eigenvalues are called scattering strengths. As a frequency-resolved measurement leads to F(ξ ) = − =− ∞ tδω ln [1 − (1 − σn )ξ f (ω, T )] 2π n=1 tδω ln det 1 − (1 − SS† )ξ f (ω, T ) , 2π where δω is a frequency interval, the factorial cumulants are tδω κ p = ( p − 1)! (1 − σn ) p , p = 1, 2, . . . , ∞. [ f (ω, T )] p 2π n=1 N Particularly, n¯ = κ1 = var n = κ2 = N tδω (1 − σn ), f (ω, T ) 2π n=1 N tδω (1 − σn )2 . [ f (ω, T )]2 2π n=1 Propagation in Amplifying Random Media But the blackbody radiation has n¯ = ν f, var n = ν f (ω, T )[1 + f (ω, T )], where ν=N tδω . 2π It has the property 1 var n = n¯ + n¯ 2 , ν whence it is advantageous to introduce n¯ 2 , var n − n¯ an effective number of degrees of freedom. Hence, C N 2 n=1 (1 − σn ) . νeff = ν C CN N 1 N n=1 (1 − σ ) (1 − σn )2 − f (ω,T n n=1 ) νeff = As f (ω, T ) → ∞ for T → ∞, relation C N νeff = ν 2 n=1 (1 − σn ) CN 2 n=1 (1 − σn ) is obtained for T → ∞. Turning to applications, Beenakker (1998) concentrates on the long-time regime with N 1. For random media, a scattering-strength density ρ(σ ) is considered. Relations (6.525) and (6.532) are written using N M(σn ) → ρ(σ )M(σ ) dσ, (6.533) n=1 where M(σn )=(1 − σn ) p . Beenakker (1998) presents further results for a semi-infinite random medium. Let τs denote the inverse scattering rate and τa the inverse absorption or amplification τs . In the regime γ N 2 1, rate. Let us introduce γ = 16 3 τa 1 1 γ 1 N√ γ , (6.534) − 1 − , for 0 < σ < ρ(σ ) = π (1 − σ )2 σ 4 1 + γ2 ρ(σ ) = 0 elsewhere. From this, 1 n¯ = ν f (ω, T )γ 2 1 4 1+ −1 . γ Beenakker (1998) has arrived at the following conclusions. For strong absorption, γ 1, νeff = ν is obtained as for the blackbody radiation. For weak absorption, 6 Periodic and Disordered Media √ γ 1, it is found that νeff = 2ν γ . It has been recognized that it is Glauber’s result for the Lorentzian spectrum (Glauber 1963). Specific results are added for an optical cavity coupled to a photodetector via an N -mode waveguide. Here N 1 is assumed and the modes overlap. Let us , where τdwell is the mean dwell time of a photon in the cavity introduce γ = τdwell τa without absorption. In the limit of weak absorption, γ 1, + 1 N (σ − σ− )(σ+ − σ ) for σ− < σ < σ+ , (6.536) ρ(σ ) = 2 2π (1 − σ ) √ where σ± = 1 − 3γ ± 2γ 2. In the limit of strong absorption, γ 1, relation (6.534) holds. From this, γ n¯ = ν f . (6.537) 1+γ It holds that νeff = ν for γ 1. For γ 1, we find that νeff = ν2 , which is finite (nonvanishing) for γ → 0. The general formulae can also be applied to amplified spontaneous emission. Beenakker (1998) investigates only the random laser below the laser threshold. The semi-infinite medium is above the laser threshold for an arbitrarily small amplification, but the cavity is below the threshold, as long as γ < 1. Cao et al. (1999) observed random laser action with coherent feedback in semiconductor powder. They found that the scattering mean free path is less than the emission wavelength. A comparison with the random laser theory (Wiersma and Lagendijk 1996, Zhang 1995) was realized. The laser emission from the powder could be observed in all directions. Their work is very different from the work on powder laser (Markushev et al. 1986, Ter-Gabri˙elyan et al. 1991). When the particle size was much larger than the wavelength, a single particle could serve as a resonator. When the particle size was less than this wavelength, a single particle was too small to serve as a laser resonator. Laser resonators were formed by recurrent light scattering. Patra and Beenakker (1999) have continued the study of an amplifying disordered cavity (Beenakker 1998). Besides the cavity they have investigated an amplifying disordered waveguide. The disordered medium is illuminated by monochromatic radiation of a single propagating mode in a coherent state. First they consider an amplifying disordered medium embedded in a waveguide that supports N (ω) propagating modes at frequency ω. The incoming radiation in mode n is described by an annihilation operator a˜ˆ nin (ω), where n = 1, 2, . . . , N for a mode on the left-hand side of the medium and n = N + 1, N + 2, . . . , 2N for a mode on the right-hand side. The outgoing radiation in mode n is described by an annihilation operator a˜ˆ nout (ω), where again n = 1, 2, . . . , N for a mode on the left-hand side of the medium and n = N + 1, N + 2, . . . , 2N for a mode on the right-hand side. These two sets of operators are connected by the input–output relations aˆ˜ out (ω) = S(ω)a˜ˆ in (ω) + V(ω)c˜ˆ † (ω). Propagation in Amplifying Random Media Here S(ω) is a 2N × 2N scattering matrix, V(ω) is a 2N × 2N matrix, and c˜ˆ † (ω) is † † † a vector of 2N creation operators c˜ˆ 1 (ω), c˜ˆ 2 (ω),. . . , c˜ˆ 2N (ω). The scattering matrix S has the form r t , (6.539) S(ω) = t r where r ≡ r(ω) and r ≡ r (ω) are N × N reflection matrices and t ≡ t(ω) and t ≡ t (ω) are N × N transmission matrices. In the case under consideration t = tT , r = rT , and r = r T . † It is assumed that the operators c˜ˆ n (ω) and c˜ˆ n (ω) commute with the operators † a˜ˆ m (ω) and a˜ˆ m (ω). This implies that V(ω)V† (ω) = S(ω)S† (ω) − 1. The state of the incoming radiation is |ψ = 2N 8 |vac m ⊗ |ψ(ω) m 0 , m=1 m=m 0 where |vac m is a vacuum state of mode m of the incoming radiation and |ψ(ω) m is a coherent state related to an unnormalized wave function ψ(ω), |ψ(ω)|2 → I0 δ(ω− ω0 ). The coherent state is defined in terms of the usual unnormalized single-photon states |ω n with the property n ω|ω m = δnm δ(ω − ω ). The counting of photons is studied using the integrated intensity τ ˆ (τ ) = Iˆ (t) dt, W where Iˆ (t) = ηˆaout† (t)Pˆaout (t), with η the efficiency of the photodetector, 00 P= , 01 being a 2N × 2N matrix divided into four N × N matrices, and 1 aˆ nout (t) = √ 2π e−iωt a˜ˆ nout (ω) dω. The generating function is ˆ ) : . F(ξ ) = ln : exp(ξ W 6 Periodic and Disordered Media The result for the detection in the long-time regime ωc τ 1 is relatively simple. It is found that ∞ τ F(ξ ) = Fex (ξ ) − ln det[1 − ηξ f (ω, T )(1 − rr† − tt† )] dω, (6.548) 2π 0 where −1 † † † t0 Fex (ξ ) = ηξ τ t0 1 − ηξ f (ω0 , T )(1 − r0 r0 − t0 t0 ) with {. . .}m 0 m 0 denoting the matrix element located in m 0 th row and m 0 th column. In relation (6.549) t0 ≡ t(ω0 ), r0 ≡ r(ω0 ). The general description may be applied also to an optical cavity filled with an amplifying random medium. It holds that t = 0 because there is no transmission. Patra and Beenakker (1999) have studied how the noise figure F increases on approaching the laser threshold. Near the laser threshold the noise figure has a divergent ensemble average. Its modal value is of the order of the number N of propagating modes in the medium. The noise power is increased by † † † . (6.550) Pex = 2η2 f I0 t0 (1 − r0 r0 − t0 t0 )t0 m0m0 It has been found that Pex increases monotonically with increasing amplification rate, but it has a maximum as a function of absorption rate for certain geometries. Mishchenko and Beenakker (1999) say clearly that they borrow from the field of electronic conduction in disordered metals. Besides, they take into account the absorption and emission of photons. They D let f k (r, t) denote the density of photon number at the position r and such that f k (r, t) d3 r is the total photon number in the mode k. The authors consider the random field f k (r, t) and its mean ¯f k (r). The system of wave vectors k is discrete. This has been adopted for ease of notation. It is appropriate that this notion is independent of the time. But this does not facilitate reading of a Boltzmann equation cs · ∇r ¯f k (r, t) = I˜k (r, t), where s = k |k| and I˜k (r, t) = J˜kk (r, t) − J˜k k (r, t) + I˜k+ (r, t), where J˜kk (r, t) ≡ Jkk ¯f k (r, t), ¯f k (r, t) , J˜k k (r, t) ≡ Jk k ¯f k (r, t), ¯f k (r, t) . On writing it in the form 0 = −cs · ∇r ¯f k (r, t) + I˜k (r, t), Propagation in Amplifying Random Media we see that a continuity equation was first generalized ∂ ¯f k (r, t) = −cs · ∇r ¯f k (r, t) + I˜k (r, t). ∂t Here I˜k (r, t) has to mean gain and loss terms. Let us note the form of the gain term due to amplification I˜k+ (r, t) = wk+ 1 + ¯f k (r, t) , with the amplification rate. The unity enables a zero density to be amplified. The gain term due to scattering from the mode with the wave vector k , J˜kk (r, t) = wkk ¯f k (r, t)[1 + ¯f k (r, t)], is similar. Obviously, it is nonlinear in the described field. This does not mean that Equation (6.551) is not linear. It holds that J˜kk (r, t) − J˜k k (r, t) = wkk [ ¯f k (r, t) − ¯f k (r, t)] = wk k [ ¯f k (r, t) − ¯f k (r, t)]. Mishchenko and Beenakker (1999) continue the results such as those in Kogan (1996). Consistently with (6.551), the Boltzmann–Langevin equation for the random field itself is presented ∂ f k (r, t) = cs · ∇r f k (r, t) + [Jkk (r, t) − Jk k (r, t)] ∂t k + Ik+ (r, t) − Ik− (r, t) + Lk (r, t), where Jkk (r, t) ≡ Jkk f k (r, t), f k (r, t) , Jk k (r, t) ≡ Jk k f k (r, t), f k (r, t) . Here Ik+ (r, t) is a gain term due to amplification, Ik+ (r, t) = wk+ [1 + Ik (r, t)] and Ik− (r, t) is a loss term due to absorption, Ik− (r, t) = wk− f k (r, t), with the absorption rate. In (6.559), Lk (r, t) is a Langevin term, Lk (r, t) = [δ Jkk (r, t) − δ Jk k (r, t)] + δ Ik+ (r, t) − δ Ik− (r, t), which is remarkable for copying the form of the previous terms. The elementary stochastic processes δ Jkk (r, t), δ Ik± (r, t) have zero means, δ Jkk (r, t) = 0, δ Ik+ (r, t) = 0, δ Ik− (r, t) = 0, 6 Periodic and Disordered Media and they have properties δ Jkk (r, t)δ Jqq (r , t ) = Δ(r, t, r , t )δkq δk q Jkk (r, t), δ Ik± (r, t)δ Ik± (r , t ) = Δ(r, t, r , t )δkk Ik± (r, t), where Δ(r, t, r , t ) = δ(r − r )δ(t − t ). They are also characterized by the stochastic independence between δ Jkk (r, t) and δ Iq± (r , t ) and by the same relationship between δ Ik+ (r, t) and δ Iq− (r , t ). The authors make a diffusion approximation, which is related to the expansion of the random field with respect to s. Then they consider the propagation through an absorbing or amplifying disordered waveguide (of length L). The noise power is decomposed into the fluctuations in the transmitted radiation, those in the thermal radiation, and the excess noise, which is characterized in Henry and Kazarinov (1996). The expressions for the thermal fluctuations and the excess noise agree with Beenakker (1999). As a contribution the noise power of the thermal radiation emitted by a sphere is given (per unit surface area). Numerical results both for the waveguide geometry and for the sphere geometry are presented. Beenakker (1999) has presented the statistics of thermal radiation in dependence on the deviation 1 − SS† from the unitarity of the scattering matrix S of the system. He has recovered the familiar results for black-body radiation in the limit S → 0. A simple expression for the mean photocount has been identified as Kirchhoff’s law. A generalization of the Kirchhoff law has been derived. For the extension of the Kirchhoff law to the statistics of quanta, which exists in the case of single-mode detection, reference is made to (Bekenstein and Schiffer 1994). Due to a similarity, the theory has been easily applied to a random amplifying medium (or a “random laser”) below the laser threshold. Zacharakis et al. (2000) measured photon–number distributions of fluorescence of an organic dye. The dye was mixed with poly(methyl methacrylate), which fixed scatterers. They were able to measure the photon–number distributions for different time delays and at different wavelengths. The source of excitations was a frequencydoubled 200-fs pulsed laser emitting at 400 nm. The photon-number distribution from the sample when it was pumped above threshold had different character for different time delays. For a small time delay this distribution was Poisson-shaped with an imperfect vacuum value. When the time delay increased, a Bose–Einstein distribution was appropriate. In contrast to the high-energy case, when the excitation energy is below threshold, the photon number has the Bose–Einstein distribution, which is independent of the time delay. In Patra and Beenakker (2000), a continuation of Patra and Beenakker (1999) and Beenakker (1998) is contained. The treatment includes a noise characteristic averaged over an ensemble of random media with different positions of the scatterers. The authors assume a waveguide with N (ω) propagating modes at frequency ω. Modes 1, 2, . . . , N are on the left-hand side of the medium and modes Propagation in Amplifying Random Media N + 1, N + 2, . . . , 2N are on the right-hand side. The outgoing radiation in mode n is described by an annihilation operator a˜ˆ nout (ω). A vector a˜ˆ out (ω) is defined. Similar notation is introduced for incoming radiation. The operators, which belong to the same stage, fulfil the commutation relations between the annihilation and creation ones, ˆ [a˜ˆ n (ω), a˜ˆ m (ω )] = 0, ˆ [a˜ˆ n (ω), a˜ˆ m† (ω )] = δnm δ(ω − ω )1, ˜ in ˜ out ˜ˆ aˆ , aˆ . where a= In the input–output relations, also the vectors bˆ and cˆ occur, each of which has 2N annihilation operators for its elements. Their correlation functions are determined dependent on the temperature of the medium. The usual quantization of discrete frequencies is obtained by considering those in the relations for a frequency step Δ, the frequencies ω p = pΔ, and subscripts as those in the relations ω p+1 1 out a˜ˆ nout (ω) dω, a˜ˆ np = √ Δ ωp Snp,n p = Snn (ω p )δ pp . Patra and Beenakker (2000) have considered a useful modification 1 out out a˜ˆ np = √ a˜ˆ np . Δ We have used the prime to distinguish this modification from the usual annihilation operator. A characteristic function is 1 † : . (6.570) χ (β, Δ) = : exp Δ 2 a˜ˆ β − β † a˜ˆ The vector β has the elements βnp = βn (ω p ). The statistical properties of the bath are χabs (β, Δ) = exp(−β † fβ) for an absorbing medium and χamp (β, Δ) = exp(β † fβ) for an amplifying medium. In these relations f means a matrix with the elements f np,n p = δnn δ pp f (ω p , T ), where f (ω p , T ) is given in (6.515). The characteristic function of the outgoing state is (6.573) χout (β, Δ) = exp −β † (1 − SS† )fβ χin (S† β, Δ). The photocount distribution is the probability P(n, τ ) that n photons are absorbed by a photodetector within a time τ . The appropriate generating function is denoted ˆ (τ ) is defined by the relation by F(ξ, τ ). The integrated intensity W 6 Periodic and Disordered Media 2N τ ˆ (τ ) = W 0 ηn aˆ nout† (t)aˆ nout (t) dt. Here ηn ∈ [0, 1] is the detection efficiency of the nth mode. We let η denote a 2N × 2N diagonal matrix containing the detection efficiencies ηn on the diagonal (ηnm = ηn δnm ). The discretization of frequencies will lead to the integrated ˆ (τ, Δ). It can be written using the matrix η˜ = η ⊗ 1frequencies , with the intensity W elements η˜ np,n p =ηnn δ pp . Here 1frequencies is the unit matrix with the elements δ pp . The respective generating function is denoted by F(ξ, τ, Δ). The generating function F(ξ, τ, Δ) may be determined from exp [F(ξ, τ, Δ)], which in turn is a linear integral transform of the characteristic function χout (β, Δ). On respecting the input–output relation (6.573), we obtain the relation exp [F(ξ, τ, Δ)] = 1 det(−ξ π η) ˜ 1 † −1 † † † × χin (S β, Δ) exp β (η) ˜ β − β (1 − SS )fβ dβ, ξ (6.575) is chosen. The thermal fluctuations can be separated and we obtain where Δ = 2π τ the relations F(ξ, τ, Δ) = Fth (ξ, τ, Δ) 1 † −1 χin (β, Δ) exp(−β M β) dβ , + ln det(π M) Fth (ξ, τ, Δ) = − ln{det[1 − ξ η(1 ˜ − SS† )f]}, (6.576) (6.577) and ˜ − SS† )f]−1 ηS. ˜ M = −ξ S† [1 − ξ η(1 Returning to the continuous frequency, relation (6.577) can be written as ∞ τ ln det 1 − ξ η[1 − S(ω)S† (ω)] f (ω, T ) dω, Fth (ξ, τ ) = − 2π 0 where f (ω, T ) is given in (6.515). It is stated that all factorial cumulants depend linearly on the detection time in the long-time limit. The notion of detection efficiency may be utilized also to treatment of particular cases, such as the detection at one side of the waveguide (Beenakker 1999). Patra and Beenakker (2000) assume that the incident radiation is in the ideal ˆ squeezed state |ζ, α = Cˆ S|0 , where 1 in in † in ∗ Sˆ = exp Δ(ˆa in T ζ † a˜ˆ − a˜ˆ ζ a˜ˆ ) , 2 Propagation in Amplifying Random Media is the squeezing operator and 1 in † in Cˆ = exp Δ 2 (a˜ˆ α − α † a˜ˆ ) is the displacement operator. In relation (6.580), T means the transposition, ζ is the diagonal matrix with the elements ζnp,n p = ζn (ω p )δnn δ pp , and α is the vector with the elements αnp = αn (ω p ). Useful are the real parameters ρn (ω p ), φn (ω p ) such that ζn (ω) = in in † ρn (ω) exp (iφn (ω)). Here a˜ˆ np is the column with the elements a˜ˆ np . The characteristic function of the incident radiation in the case where this radiation is in the ideal squeezed state is 1 χin (β, Δ) = exp α † β − β † α − β T [e−iφ sinh(2ρ)]β 4 1 − β † [eiφ sinh(2ρ)]β ∗ − β † (sinh ρ)2 β . (6.582) 4 On substitution into relation (6.573), we obtain that the characteristic function of the outgoing radiation is 1 χout (β, Δ) = exp α † S† β − β † Sα − β T S∗ [e−iφ sinh(2ρ)]S† β 4 1 − β † S[eiφ sinh (2ρ)]ST β ∗ 4 − β † [f − S f − (sinh ρ)2 S† ]β . (6.583) Using an integral transformation, we obtain the generating function F(ξ, τ, Δ), 1 1 α ∗ T −1 Mα X , (6.584) F(ξ, τ, Δ) = Fth (ξ, τ, Δ) − ln (det X) − M∗ α ∗ 2 2 α where the matrix X is defined in terms of the matrix M, sinh ρ 0 M sinh ρ −Meiφ cosh ρ . X=1+ 0 sinh ρ −M∗ e−iφ cosh ρ M∗ sinh ρ Further Patra and Beenakker (2000) consider the case, where only the mode m 0 is squeezed. The Fano factor is the ratio F= P , I¯ where P = τ1 (κ2 + κ1 ) is the noise power and I¯ = τ 1κ1 is the mean current. For simplicity this characteristic is considered in the limit of τ → ∞. The detection efficiency does not depend on the mode. The thermal contributions may be neglected in the considered case, since they are spread out over a wide range of frequencies. 6 Periodic and Disordered Media In the exposition, η ceases to be a matrix and it means the common value of the detector efficiencies ηn . For (6.582), one has Fin = 1 + |α cosh ρ − α ∗ eiφ sinh ρ|2 − |α|2 + 12 (sinh ρ)2 cosh(2ρ) . |α|2 + (sinh ρ)2 Patra and Beenakker (2000) consider the Fano factor both for the direct detection (Fdirect ) and for the homodyne detection (Fhomo ). In the case of direct detection, the authors find that † Fdirect − 1 = η(t0 t0 )m 0 m 0 (Fin − 1) † + 2η f (ω0 , T ) [t0 (1 − r0 r0 − t0 t0 )t0 ]m 0 m 0 † (t0 t0 )m 0 m 0 The Fano factors for the direct and homodyne detections depend on the reflection and transmission matrices of the waveguide. These matrices depend on the positions of the scatterers inside the waveguide. The distribution of these matrices can be chosen by random-matrix theory (Beenakker 1997). The details may be found in Brouwer (1998). It is examined how the distribution √ depends on the mean free path l and the amplification (absorption) length ξa = Dτa , where D = cl3 is the diffusion constant and τ1a is the amplification (absorption) rate. On neglect of the correlation between numerator and denominator for an absorbing disordered waveguide it is obtained that 4lη (Fin − 1) 3ξa sinh s 2s + coth s s η s coth s − 1 + + f (ω0 , T ) 3 − − . 2 sinh s (sinh s)2 (sinh s)3 Fdirect = 1 + Here s ≡ ξLa . In the limit of strong absorption, the Fano factor Fdirect = 1 + 3 f (ω0 , T ). The Fano factor Fin may be given by equation (6.587), but Equation 2η (6.589) is valid even for any state of the incident radiation. For an amplifying disordered waveguide it is found that 4lη (Fin − 1) 3ξa sin s 2s − cot s s η s cot s − 1 − + f (ω0 , T ) 3 − + . 2 sin s (sin s)2 (sin s)3 Fdirect = 1 + The laser threshold is s = π. In Patra and Beenakker (2000) also the average Fano factors for the homodyne detection are presented. The effect of absorption on quasimodes of a random waveguide has been studied in Sebbah et al. (2007). Cao et al. (2000) have mentioned the concept of the Anderson localization. Optical absorption counteracts photon localization. Also optical gain reduces photon localization length in a one-dimensional random medium. After the previous experiment on coherent feedback for lasing, the authors have arrived at enhancement of Propagation in Amplifying Random Media the scattering strength and at spatial confinement of laser light in the disordered medium. They opine that the optical gain enhances the photon localization at least in a three-dimensional medium. The coherent amplification of the scattered light enhances the interference effect and helps the spatial confinement. ZnO particles of the average size about 50 nm and a ZnO powder film of thickness about 30 μm were prepared. The scattering mean free path l in the ZnO powder has been estimated l ∼ 0.5λ. The ZnO powder film is photoluminescent, when it is pumped at 266 nm. The pump beam falls in the normal direction on an about 20 μm spot of the film. The spectrum of emission from the powder film is measured. At the same time, the spatial distribution of the emitted light intensity is imaged in the ultraviolet. Figure 1, which we do not reproduce here, presents the measured spectra and spatial distribution of emission in a ZnO powder film at two different pump powers. For the lower pump level, the spectrum consists of a single broad spontaneous emission peak. The spatial distribution of the spontaneous emission intensity is almost uniform across the excitation area. It depends on the pump intensity distribution. For the higher pump level, when the pump intensity exceeds a threshold, sharp peaks emerge in the emission spectrum. Bright tiny spots appear in the spatial distribution of the emission. When the pump intensity increases further, additional sharp peaks emerge in the emission spectrum. Also more bright spots appear in the emitted light pattern. Above the threshold, the total emission intensity begins to approach the pump power. The fact that the bright spots in the emission pattern and the lasing modes in the emission spectrum always occur simultaneously could mean that the bright spots are efficient scatterers. Then the bright spots should scatter the spontaneously emitted light below the lasing threshold. As the bright spots do not exist below the lasing threshold, the laser light intensity is high at their locations. The authors have measured the short scattering mean free path, i.e. very strong light scattering on average. They expect small regions of higher disorder and stronger scattering. In other words, they assume many resonant cavities formed by multiple scattering and interference. Every cavity has its lasing threshold. The lasing peaks in the emission spectrum illustrate cavity resonant frequencies and the bright spots in the spatial light pattern give positions and shapes of the cavities. To verify this hypothesis, the authors have reduced the size of the random medium to a cluster of ZnO nanoparticles. The existence of the lasing threshold is related to nonradiative and radiative recombination of the excited carriers. The authors have calculated the electromagnetic-field distribution in a random medium using the finite-difference time-domain method according to the book (Taflove 1995). In the calculations the assumption that the ZnO particles are surrounded by air has been used. Optical gain has been introduced to the Maxwell equations by the negative conductance σ . The simulation has shown that, when the optical gain is just above the lasing threshold, the emission spectrum consists of a single peak. When the optical gain increases further, additional lasing modes appear. The authors have concluded that optical gain helps spatial confinement of light in a random medium. 6 Periodic and Disordered Media The authors have paid more attention to the Anderson localization. In spite of the fact that there is no criterion for the Anderson localization in an active random medium, they have determined the Thouless number δ = 0.75 < 1 in favour of the Anderson localization in the lasing mode. Thareja and Mitra (2000) have reported on an experiment on optically pumped ZnO powder. In this medium a random laser has been demonstrated. The theoretic explanation of this effect has been based on the paper (Cao et al. 1999). Jiang and Soukoulis (2000) emphasize that, in contrast to the paper (Lawandy et al. 1994), where discrete lasing peaks were not observed, a new interesting property was reported, e.g. in Cao et al. (1999). The authors provide references, e.g. in (Zyuzin 1995, John and Pang 1996), but they have pointed out a limitation of the diffusion approach. They see a limitation, though mild, also in the approach as in (Paasschens et al. 1996, Jiang and Soukoulis 1999, Jiang et al. 1999). Essentially, they return to the semiclassical laser theory, e.g., (Siegman 1986). The authors have combined the equations for electron densities with Maxwell’s equations and have used the finite-difference time-domain method according to the book (Taflove 1995). After a long relaxation time stationary solutions can be obtained. The time dependence of the electric field inside the system and in its vicinity is examined. The emission spectra and the modes inside the system can be obtained after the Fourier transformation. The system is a one-dimensional simplification of the reported experiments. It consists of many dielectric layers of fixed thickness bestowed between two surfaces, with the space among the dielectric layers filled with a gain medium. The distance between the neighbouring dielectric layers is assumed to be a random variable. The total length of the system is L. The numerical simulations have shown the following: (i) In periodic and short (L < ξ ) random systems an extended mode dominates in the field and the spectrum. (ii) For either strong disorder or a long (L ξ ) system a low threshold value for lasing is obtained. By increasing the length or the gain more peaks appear in the spectrum. The peaks are coming from localized modes. (iii) The saturation can be observed. The number of the peaks is proportional to the length of the system. (iv) The emission spectra are not the same for various output directions. In the three-dimensional case lasing peaks need not be so sharp as in the onedimensional case. The alternating layers are made of dielectric materials with dielectric constants 1 = 0 and 2 = 40 . The thickness of the first layer, which simulates the gain medium, is a random variable an = a0 (1 + W γ ), where a0 = 300 nm, W is the strength of randomness, and γ is a random value in the range [−0.5, 0.5]. The thickness of the second layer, which simulates the scatterers, is a constant b = 180 nm. In the layers, which represent the gain medium, a four-level electronic material Propagation in Amplifying Random Media is admixed. The electron densities at a ground, first, second, and third levels are N0 (x, t),. . . ,N3 (x, t), respectively. An external mechanism pumps electrons from the ground level to the third one at a certain pumping rate Pr . Nonradiative transitions occur from the higher level to the lower one with the lifetimes of the upper levels τ32 , τ21 , τ10 . The radiative transition from the second level to the first one, or back has the centre frequency ωa . According to the monograph (Siegman 1986), the polarization density P(x, t) depends nonlinearly on the population inversion ΔN (x, t) = N1 (x, t) − N2 (x, t) and on the electric field E(x, t). An equation for the polarization density P(x, t) can be written. Equations for electron densities at every level can be utilized. One must introduce sources into the system. The distance between the two sources L s must be smaller than the localization length ξ . The sources simulate the spontaneous emission. They have a Lorentzian spectrum centred around ωa and their amplitudes depend on N2 . Two leads are assumed at the left-hand and right-hand sides of the system. A numerical method for solving the mentioned equations with an absorbing-boundary condition is described. Jiang and Soukoulis (2000) have performed the numerical simulations for periodic and random systems. They associate a lasing threshold with each of the systems. With the increase of the randomness, the threshold intensity decreases. It has been found that, in the case of a periodic system and a short (L < ξ ) random system, one mode dominates, even if the gain increases far above the threshold. In the case of a long (L ξ ) random system, the stationary behaviour is marked with beats. There are more than one localized mode and each one has its specific frequency. In a figure, which is not reproduced here, it is shown how, for the pumping rates Pr = 104 , 106 , 1010 s−1 , one lasing mode appears and then more lasing modes. So more than one mode can exist together and each mode seems to repel others to reserve itself some space. There exists a saturated number of lasing modes Nm , which is proportional to the length of the system L. There exists an average mode length L m = NLm , which is proportional to the localization length. The emission spectra at the right-hand and left-hand sides of the system are different. In the real three-dimensional experiments, Jiang and Soukoulis (2000) assume that every localized mode has its direction, strength, and position. Cao et al. (2001) have measured the photon statistics of random lasers with resonant feedback. They have found that, when the pump intensity increases, the photon–number distribution changes continuously from the Bose–Einstein distribution at the threshold to the Poisson distribution well above the threshold. The decreases correspondingly from normalized second factorial moment G 2 = n(n−1) n 2 2 to 1. By comparing the photon statistics of a random laser with resonant feedback and this statistics of a random laser with nonresonant feedback, the authors have formed the idea about two lasing mechanisms. For a random laser with nonresonant feedback, the fluctuation of the total number of photons in all modes of laser emission is smaller than the fluctuation of this number in blackbody radiation with the same number of modes (Zacharakis et al. 2000). However, the photon–number distribution in a single mode remains the Bose– Einstein distribution even well above the threshold. The quasimodes correspond to 6 Periodic and Disordered Media eigenfrequencies, whose imaginary parts represent the decay rates, in fact they are pseudomodes. When kl > 1 (k is a wave number and l is the transport mean free path), the quasimodes overlap spectrally and the emission spectrum is continuous. In other words, in the case of weak scattering, the quasimodes decay fast and they are strongly coupled. So the loss of quasimodes is much lower than the loss of a single quasimode. In an active random medium, when the optical gain for interacting quasimodes reaches the loss of these quasimodes, lasing with nonresonant feedback emerges. A significant spectral narrowing is observed. Well above the threshold, the total photon–number fluctuation decreases due to gain saturation. However, strong coupling of quasimodes excludes stabilization of the lasing in a single quasimode. When the amount of optical scattering increases, the decay rates of the quasimodes decrease and the mixing of the quasimodes weakens. When the optical gain increases, lasing with nonresonant feedback occurs first. As the optical gain increases further, it exceeds the loss of a quasimode that has a long lifetime. This resembles a traditional laser. A further increase of optical gain leads to lasing in more low-loss quasimodes. Laser emission manifests itself by discrete peaks. This process is lasing with resonant feedback. When the scattering strength increases further, the lasing threshold in individual low-loss quasimodes drops below the threshold for lasing in coupled quasimodes. So the mentioned stage of lasing with nonresonant feedback is absent. Well above the threshold, the fluctuations of individual photon numbers decrease due to gain saturation. Vanneste and Sebbah (2001) also perceived that the studies of the dependence of laser action on the strength of the disorder (the randomness) and, for highly scattering media, of the Anderson localization on laser gain were not finished. The Anderson localization of electronic waves has later been extended to electromagnetic waves (John 1984). The average localization length ξ characterizes an exponential decrease of the envelope of a localized mode. Also properties of the electronic or photonic transport depend on this parameter. The localized eigenmodes are microcavities in fact and they can serve as the feedback cavities of the laser. The reports of experiments and theoretic interpretations admit only nonresonant feedback of spontaneous emission amplified along open scattering paths (Vanneste and Sebbah 2001). Only recently laser action in a random medium with resonant feedback has been reported, e.g. (Cao et al. 1999). In the experiment, a semiconductor powder was used, which played simultaneously the roles of the random and active media. Possible connection with the Anderson localization was merely mentioned (Cao et al. 2000). In theory it is assumed that the localized modes of the passive random system are preserved in the presence of gain, but one may ask, how much they are modified by the gain. The doubt whether localization is enhanced or inhibited by the gain ended in a low quotation index in one of the papers (Zhang 1995, Paasschens et al. 1996, Jiang 1999). Vanneste and Sebbah (2001) examine the role of strong localization in the lasing action process. The numerical model describes the full dynamics of the field and the Propagation in Amplifying Random Media levels’ populations in a two-dimensional active random medium. First they choose a window of modes strongly localized in the spectrum of the passive medium and examine the spatial and spectral characteristics of these modes. Next the gain is activated and the passive modes are compared with the laser modes. It results that the active medium is described by the modes of the passive system. The amplifying medium has only a small effect on the frequencies. The two-dimensional spatial profile of the localized wave functions is reproduced without distortion. They consider two-dimensional disordered medium of size L 2 made of circular scatterers with radius r , refractive index n 2 , and surface filling fraction φ, imbedded in a matrix of index n 1 . This system is equivalent to an array of dielectric cylinders parallel to the z-axis. The matrix also plays the role of an active medium. They utilize the rate equations of a four-level atomic system and Maxwell’s equations with a polarization term including atomic population inversion. It is a generalization of the paper (Jiang and Soukoulis 2000). A TM field defined by the components E z , Hx , and Hy is considered. The modes of the passive system have been studied as follows. The time response to a short pulse is recorded and Fourier transformed. It is damped. The first and second half of the time record can be Fourier transformed. It leads to a conclusion that the leaky modes (quasimodes) with shorter lifetimes have not survived in the second half of the time record. The modes with longer lifetimes are examined by a monochromatic source on their spatial pattern and time evolution. It can be concluded that the regime of the Anderson localization has been attained. The investigation has continued with introducing gain by uniform pumping of the atoms in the whole system. Above threshold a stationary regime is attained after a transient exponential growth of the field amplitude. The structure of mode has been preserved. At higher pump levels, the laser emission is multimode. After a transient regime the field becomes stationary in beats between several excited modes. The choice of individual localized modes is possible by pumping locally. So it is meaningful to consider local thresholds for lasing. Vanneste and Sebbah (2001) have concluded with a question, if the introduction of gain contributes somehow to the discrimination between the diffusive and localized regimes in actual experiments. Burin et al. (2001) have based their model for a random laser on a planar system of resonant scatterers pumped by an external laser. At the beginning they expound also the following classification: Random lasing with nonresonant feedback appears as the remarkable narrowing of the luminescence spectrum to a single peak of width about several nanometres. The coherent feedback lasing is identified as the series of high and narrow peaks having the width decreasing with the increase of pump power to at least the tenth nanometre scale. They distinguish two different theoretical approaches to the description of random lasing: (i) The diffusion model with coherent backscattering corrections, e.g. (Wiersma and Lagendijk 1996) appropriate for describing the regime of nonresonant feedback, but which fails to predict the lasing threshold behaviour for the laser operation. The criticism is directed to the disability of prediction of the formation 6 Periodic and Disordered Media of high-quality random cavities. It seems to comprise even the rejection of the Anderson localization (considered in the paper (Jiang and Soukoulis 1999)). (ii) In another approach, the criticism is contrasted with the intended comparison of the model with the random matrix approach (Frahm et al. 2000, Misirpashaev and Beenakker 1998). The authors have analysed an experiment on ZnO disk-shaped powder samples using a classical microscopic model. The medium is represented by a set of random scatterers. The number of these particles is denoted by N . Each of them is considered as an electric dipole oscillator. The resonant frequency of the kth particle is denoted by ωk and the transition dipole moment of the kth particle is denoted by dk , its length is |dk |=dk . The position vector of a particle k relative to a centre j is denoted by Rk j . It is assumed that the polarization component pk , |pk |= pk , is parallel to the transition dipole moment, pk = pk dk . dk We suppose that ˜ k eizt , Ek j = E ˜ k j eizt . pk = p˜ k eizt , Ek = E If it may be put = 1, then the equations for the collective eigenfrequencies z and the collective eigenvectors {p˜ k } of the system have the form − z 2 p˜ k = −(ωk − ig)2 p˜ k + 2ωk dk (dk · E˜ k ), where g is a gain rate and ˜k = E j j=k ˜ k j + i 2 q 3 p˜ k E 3 are the electric fields of other particles and the damping term, with p˜ j − 3n(n · p˜ j ) E˜ k j = eiq Rk j 1 − iq Rk j 3 Rk j ˜ j) ˜ p j − n(n · p , +q 2 eiq Rk j Rk j where n= Rk j z , q= . Rk j c The equations are linear, but z enters in a relatively complicated manner. It is a complex number, the imaginary part of which is the decay rate. The iteration method they have used calculates the lasing threshold. They could not treat a very large system with the number of particles exceeding N = 1000. Propagation in Amplifying Random Media They have restricted the analysis to a two-dimensional system of scatterers. They have ignored the difference of dielectric constants of substrate and air. They report all N particles were that they have studied the case ωk = ω0 . The positions of √ N ωηc0 . Three differgenerated randomly within the circle of the radius R = ent values of the parameter η, η = 0.3, 1, 3, have been probed. In comparison with the random matrix approach, they have seen that the high-quality collective modes, in fact, occupy a few scatterers (e. g., from 5 to 10 for N = 100). Ling et al. (2001) have presented a detailed experimental study of random lasers with resonant feedback. One of two materials was a poly(methyl methacrylate) film that contained dye and titanium dioxide particles. The other was zinc oxide polycrystalline film on a sapphire substrate. The dependences of the incident pump-pulse energy at the lasing threshold and the number of lasing modes for a fixed pump intensity on the transport mean free path have been measured. The effects of the pump area and the sample size have been determined too. The idea published in Cao et al. (2001) has been developed to an analytical model and the theoretical predictions have agreed with the experimental results. Burin et al. (2002) have announced an analytical approach to random lasing in a one-dimensional medium. They have dealt with the lasing threshold. They have discussed application to the regime of strong three-dimensional localization of light. They have derived that the lasing threshold has strong fluctuations from sample to sample. The original approach (Letokhov 1968) is based on a diffusion formalism. It 2 predicts the lasing instability, when the length of the diffusion path Llt , with lt being the mean-free path length of light, attains the gain length lg . Not even John (1984) modifies this criterion. But experimental and numerical studies have shown a different value of gain rate at which lasing sets in. The authors consider a different physical mechanism from diffusive motion. They study a one-dimensional medium as studied by (Jiang and Soukoulis 1999). They identify relevant channels responsible for lasing with the quasimodes. The results may be applied to higher dimension in the strong localization regime. This phenomenon has been reported in the studies (Chabanov et al. 2000, Wiersma et al. 1997). It is assumed that a one-dimensional scattering medium is situated between the planes x = 0 and x = L. A gain medium is assumed. The description is based on an imaginary correction of the frequency ω → ω + ig2 , where g is the gain rate. The lasing threshold is associated with the singularity in the transmission through the sample ( 00 ) as in papers (Jiang and Soukoulis 1999, Beenakker 1998). The authors relate the threshold with the intensity of the field near the source point. They show the equivalence convincingly. In the strong localization regime the lasing threshold is very small. We will consider a source in the middle of the structure. We let rl , rr denote the reflection coefficients from the left-hand and right-hand halves, respectively. Provided that lt L, |rl | ≈ 1, |rr | ≈ 1, it is essential that the reflected waves interfere constructively and the sum of reflection phases (r (ω) = |r (ω)|eiΦ(ω) ) is 6 Periodic and Disordered Media a multiple of 2π . The authors call this resonance. The resonances approach eigenmodes of the whole system closed between mirrors. But in the passive medium the equality is not reached, |rl | < 1, | rr | < 1. The equation is rewritten as rl (ω)rr (ω) = 1 g g + Φr ω + i = 1, |rlrr | exp i Φl ω + i 2 2 g dΦl dΦr |rlrr | exp − + = 1. 2 dω dω Letting gc denote the solution of equation (6.599), we can express it, approximately, as gc ≈ − |tl |2 + |tr |2 dΦ1 dω dΦr dω We differ in the sign. The validity or invalidity of the sign could be ultimately determined according to examples of Φl (ω), Φr (ω). The exposition comprises other minor errors. For example, the Green function ik(x−x ) s cr e + rr e−ik(x−xs ) , x ≥ xs , (6.601) G(x, xs ) = cl e−ik(x−xs ) + rl eik(x−xs ) , x ≤ xs , is written without brackets, but also without cr , cl , which would have been obtained on a possible removal of the brackets. Here k = ωc and xs is the source point. The Green function should be continuous at xs , but ∂∂x G(x, xs ) should have a
{"url":"https://epdf.tips/quantum-aspects-of-light-propagationb0bb73730be1c87ef3747e15f9c433a478583.html","timestamp":"2024-11-13T21:07:12Z","content_type":"text/html","content_length":"1049657","record_id":"<urn:uuid:373f0079-bcf9-4863-be12-dafa74889382>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00653.warc.gz"}
Convert kernel model for binary classification to incremental learner Since R2022a IncrementalMdl = incrementalLearner(Mdl) returns a binary Gaussian kernel classification model for incremental learning, IncrementalMdl, using the traditionally trained kernel model object or kernel model template object in Mdl. If you specify a traditionally trained model, then its property values reflect the knowledge gained from Mdl (parameters and hyperparameters of the model). Therefore, IncrementalMdl can predict labels given new observations, and it is warm, meaning that its predictive performance is tracked. IncrementalMdl = incrementalLearner(Mdl,Name=Value) uses additional options specified by one or more name-value arguments. Some options require you to train IncrementalMdl before its predictive performance is tracked. For example, MetricsWarmupPeriod=50,MetricsWindowSize=100 specifies a preliminary incremental training period of 50 observations before performance metrics are tracked, and specifies processing 100 observations before updating the window performance metrics. Convert Traditionally Trained Model to Incremental Learner Train a kernel classification model for binary learning by using fitckernel, and then convert it to an incremental learner. Load and Preprocess Data Load the human activity data set. For details on the data set, enter Description at the command line. Responses can be one of five classes: Sitting, Standing, Walking, Running, or Dancing. Dichotomize the response by identifying whether the subject is moving (actid > 2). Train Kernel Classification Model Fit a kernel classification model to the entire data set. Mdl = ResponseName: 'Y' ClassNames: [0 1] Learner: 'svm' NumExpansionDimensions: 2048 KernelScale: 1 Lambda: 4.1537e-05 BoxConstraint: 1 Mdl is a ClassificationKernel model object representing a traditionally trained kernel classification model. Convert Trained Model Convert the traditionally trained kernel classification model to a model for incremental learning. IncrementalMdl = incrementalLearner(Mdl,Solver="sgd",LearnRate=1) IncrementalMdl = IsWarm: 1 Metrics: [1x2 table] ClassNames: [0 1] ScoreTransform: 'none' NumExpansionDimensions: 2048 KernelScale: 1 IncrementalMdl is an incrementalClassificationKernel model object prepared for incremental learning. • The incrementalLearner function initializes the incremental learner by passing model parameters to it, along with other information Mdl extracted from the training data. • IncrementalMdl is warm (IsWarm is 1), which means that incremental learning functions can start tracking performance metrics. • incrementalClassificationKernel trains the model using the adaptive scale-invariant solver, whereas fitckernel trained Mdl using the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) Predict Responses An incremental learner created from converting a traditionally trained model can generate predictions without further processing. Predict classification scores for all observations using both models. [~,ttscores] = predict(Mdl,feat); [~,ilscores] = predict(IncrementalMdl,feat); compareScores = norm(ttscores(:,1) - ilscores(:,1)) The difference between the scores generated by the models is 0. Configure Performance Metric Options Use a trained kernel classification model to initialize an incremental learner. Prepare the incremental learner by specifying a metrics warm-up period and a metrics window size. Load the human activity data set. For details on the data set, enter Description at the command line. Responses can be one of five classes: Sitting, Standing, Walking, Running, and Dancing. Dichotomize the response by identifying whether the subject is moving (actid > 2). Because the data set is grouped by activity, shuffle it for simplicity. Then, randomly split the data in half: the first half for training a model traditionally, and the second half for incremental n = numel(Y); rng(1) % For reproducibility cvp = cvpartition(n,Holdout=0.5); idxtt = training(cvp); idxil = test(cvp); shuffidx = randperm(n); X = feat(shuffidx,:); Y = Y(shuffidx); % First half of data Xtt = X(idxtt,:); Ytt = Y(idxtt); % Second half of data Xil = X(idxil,:); Yil = Y(idxil); Fit a kernel classification model to the first half of the data. Mdl = fitckernel(Xtt,Ytt); Convert the traditionally trained kernel classification model to a model for incremental learning. Specify the following: • A performance metrics warm-up period of 2000 observations • A metrics window size of 500 observations • Use of classification error and hinge loss to measure the performance of the model IncrementalMdl = incrementalLearner(Mdl, ... MetricsWarmupPeriod=2000,MetricsWindowSize=500, ... Fit the incremental model to the second half of the data by using the updateMetricsAndFit function. At each iteration: • Simulate a data stream by processing 20 observations at a time. • Overwrite the previous incremental model with a new one fitted to the incoming observations. • Store the cumulative metrics, window metrics, and number of training observations to see how they evolve during incremental learning. % Preallocation nil = numel(Yil); numObsPerChunk = 20; nchunk = ceil(nil/numObsPerChunk); ce = array2table(zeros(nchunk,2),VariableNames=["Cumulative","Window"]); hinge = array2table(zeros(nchunk,2),VariableNames=["Cumulative","Window"]); numtrainobs = [zeros(nchunk,1)]; % Incremental fitting for j = 1:nchunk ibegin = min(nil,numObsPerChunk*(j-1) + 1); iend = min(nil,numObsPerChunk*j); idx = ibegin:iend; IncrementalMdl = updateMetricsAndFit(IncrementalMdl,Xil(idx,:),Yil(idx)); ce{j,:} = IncrementalMdl.Metrics{"ClassificationError",:}; hinge{j,:} = IncrementalMdl.Metrics{"HingeLoss",:}; numtrainobs(j) = IncrementalMdl.NumTrainingObservations; IncrementalMdl is an incrementalClassificationKernel model object trained on all the data in the stream. During incremental learning and after the model is warmed up, updateMetricsAndFit checks the performance of the model on the incoming observations, and then fits the model to those observations. Plot a trace plot of the number of training observations and the performance metrics on separate tiles. t = tiledlayout(3,1); xlim([0 nchunk]) ylabel(["Number of","Training Observations"]) xlim([0 nchunk]) ylabel("Classification Error") xlim([0 nchunk]) ylabel("Hinge Loss") The plot suggests that updateMetricsAndFit does the following: • Fit the model during all incremental learning iterations. • Compute the performance metrics after the metrics warm-up period only. • Compute the cumulative metrics during each iteration. • Compute the window metrics after processing 500 observations (25 iterations). Specify SGD Solver The default solver for incrementalClassificationKernel is the adaptive scale-invariant solver, which does not require hyperparameter tuning before you fit a model. However, if you specify either the standard stochastic gradient descent (SGD) or average SGD (ASGD) solver instead, you can also specify an estimation period, during which the incremental fitting functions tune the learning rate. Load the human activity data set. For details on the data set, enter Description at the command line. Responses can be one of five classes: Sitting, Standing, Walking, Running, and Dancing. Dichotomize the response by identifying whether the subject is moving (actid > 2). Randomly split the data in half: the first half for training a model traditionally, and the second half for incremental learning. n = numel(Y); rng(1) % For reproducibility cvp = cvpartition(n,Holdout=0.5); idxtt = training(cvp); idxil = test(cvp); % First half of data Xtt = feat(idxtt,:); Ytt = Y(idxtt); % Second half of data Xil = feat(idxil,:); Yil = Y(idxil); Fit a kernel classification model to the first half of the data. TTMdl = fitckernel(Xtt,Ytt); Convert the traditionally trained kernel classification model to a model for incremental learning. Specify the standard SGD solver and an estimation period of 2000 observations (the default is 1000 when a learning rate is required). IncrementalMdl = incrementalLearner(TTMdl,Solver="sgd",EstimationPeriod=2000); IncrementalMdl is an incrementalClassificationKernel model object configured for incremental learning. Fit the incremental model to the second half of the data by using the fit function. At each iteration: • Simulate a data stream by processing 10 observations at a time. • Overwrite the previous incremental model with a new one fitted to the incoming observations. • Store the initial learning rate and number of training observations to see how they evolve during training. % Preallocation nil = numel(Yil); numObsPerChunk = 10; nchunk = floor(nil/numObsPerChunk); learnrate = [zeros(nchunk,1)]; numtrainobs = [zeros(nchunk,1)]; % Incremental fitting for j = 1:nchunk ibegin = min(nil,numObsPerChunk*(j-1) + 1); iend = min(nil,numObsPerChunk*j); idx = ibegin:iend; IncrementalMdl = fit(IncrementalMdl,Xil(idx,:),Yil(idx)); learnrate(j) = IncrementalMdl.SolverOptions.LearnRate; numtrainobs(j) = IncrementalMdl.NumTrainingObservations; IncrementalMdl is an incrementalClassificationKernel model object trained on all the data in the stream. Plot a trace plot of the number of training observations and the initial learning rate on separate tiles. t = tiledlayout(2,1); xlim([0 nchunk]) ylabel("Number of Training Observations") xlim([0 nchunk]) ylabel("Initial Learning Rate") The plot suggests that the fit function does not fit the model to the streaming data during the estimation period. The initial learning rate jumps from 0.7 to its autotuned value after the estimation period. During training, the software uses a learning rate that gradually decays from the initial value specified in the LearnRateSchedule property of IncrementalMdl. Input Arguments Mdl — Traditionally trained model or model template ClassificationKernel model object | kernel model template Traditionally trained Gaussian kernel model or kernel model template, specified as a ClassificationKernel model object returned by fitckernel or a template object returned by templateKernel, If Mdl is a kernel model template object, incrementalLearner determines whether to standardize the predictor variables based on the Standardize property of the model template object. For more information, see Standardize Data Incremental learning functions support only numeric input predictor data. If Mdl was trained on categorical data, you must prepare an encoded version of the categorical data to use incremental learning functions. Use dummyvar to convert each categorical variable to a numeric matrix of dummy variables. Then, concatenate all dummy variable matrices and any other numeric predictors, in the same way that the training function encodes categorical data. For more details, see Dummy Variables. Name-Value Arguments Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter. Example: Solver="sgd",MetricsWindowSize=100 specifies the stochastic gradient descent solver for objective optimization, and specifies processing 100 observations before updating the window performance metrics. General Options SGD and ASGD Solver Options Adaptive Scale-Invariant Solver Options Performance Metrics Options Output Arguments IncrementalMdl — Binary Gaussian kernel classification model for incremental learning incrementalClassificationKernel model object Binary Gaussian kernel classification model for incremental learning, returned as an incrementalClassificationKernel model object. IncrementalMdl is also configured to generate predictions given new data (see predict). The incrementalLearner function initializes IncrementalMdl for incremental learning using the model information in Mdl. The following table shows the Mdl properties that incrementalLearner passes to corresponding properties of IncrementalMdl. The function also passes other model information required to initialize IncrementalMdl, such as learned model coefficients, regularization term strength, and the random number stream. Input Object Mdl Type Property Description KernelScale Kernel scale parameter, a positive scalar ClassificationKernel model object or kernel model template object Learner Linear classification model type, a character vector NumExpansionDimensions Number of dimensions of expanded space, a positive integer ClassNames Class labels for binary classification, a two-element list Mu Predictor variable means, a numeric vector ClassificationKernel model object NumPredictors Number of predictors, a positive integer Prior Prior class label distribution, a numeric vector ScoreTransform Score transformation function, a function name or function handle Sigma Predictor variable standard deviations, a numeric vector Note that incrementalLearner does not use the Cost property of the traditionally trained model in Mdl because incrementalClassificationKernel does not support this property. More About Incremental Learning Adaptive Scale-Invariant Solver for Incremental Learning Random Feature Expansion Estimation Period Standardize Data If incremental learning functions are configured to standardize predictor variables, they do so using the means and standard deviations stored in the Mu and Sigma properties, respectively, of the incremental learning model IncrementalMdl. • If you standardize the predictor data when you train the input model Mdl by using fitckernel, the following conditions apply: □ incrementalLearner passes the means in Mdl.Mu and standard deviations in Mdl.Sigma to the corresponding incremental learning model properties. □ Incremental learning functions always standardize the predictor data. • When you set Standardize=true by using the Standardize name-value argument of templateKernel, and the Mdl.Mu and Mdl.Sigma properties are empty, the following conditions apply: □ If the estimation period is positive (see the EstimationPeriod property of IncrementalMdl), incremental fitting functions estimate the means and standard deviations using the estimation period observations. □ If the estimation period is 0, incrementalLearner forces the estimation period to 1000. Consequently, incremental fitting functions estimate new predictor variable means and standard deviations during the forced estimation period. • When incremental fitting functions estimate predictor means and standard deviations, the functions compute weighted means and weighted standard deviations using the estimation period observations. Specifically, the functions standardize predictor j (x[j]) using ${x}_{j}^{\ast }=\frac{{x}_{j}-{\mu }_{j}^{\ast }}{{\sigma }_{j}^{\ast }}.$ □ x[j] is predictor j, and x[jk] is observation k of predictor j in the estimation period. □ ${\mu }_{j}^{\ast }=\frac{1}{\sum _{k}{w}_{k}^{\ast }}\sum _{k}{w}_{k}^{\ast }{x}_{jk}.$ □ ${\left({\sigma }_{j}^{\ast }\right)}^{2}=\frac{1}{\sum _{k}{w}_{k}^{\ast }}\sum _{k}{w}_{k}^{\ast }{\left({x}_{jk}-{\mu }_{j}^{\ast }\right)}^{2}.$ □ ${w}_{j}^{\ast }=\frac{{w}_{j}}{\sum _{\forall j\in \text{Class}k}{w}_{j}}{p}_{k},$ ☆ p[k] is the prior probability of class k (Prior property of the incremental model). ☆ w[j] is observation weight j. Performance Metrics Version History Introduced in R2022a
{"url":"https://ch.mathworks.com/help/stats/classificationkernel.incrementallearner.html","timestamp":"2024-11-08T12:28:57Z","content_type":"text/html","content_length":"189732","record_id":"<urn:uuid:c3f65d79-206b-4a2c-9b12-986c0c040d24>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00282.warc.gz"}
Inherently balanced ATUs Hams are taken by fashion and pseudo technical discussion more than objective circuit analysis, experiment, and measurement. Nowhere is this more evident that the current fashion for “True Balanced LB Cebik in 2005 in his article “10 Frequency (sic) Asked Questions about the All-Band Doublet” wrote In recent years, interest in antennas that require parallel transmission lines has surged, spurring the development of new inherently balanced tuners. Open wire lines require current balance to minimise radiation and pick up, the balance objective is current balance at all points on the line. Cebik goes on to give examples of his “inherently balanced tuners”. Above, Cebik’s “inherently balanced tuners” all have a common mode choke at the input, and some type of adjustable network to the output terminals. Cebik was not the originator of the idea, many others had written of the virtue of the configuration, but I cannot recall seeing meaningful measurement to support the claims. An experiment Lets take the last circuit, and simulate it using A low Insertion VSWR high Zcm Guanella 1:1 balun for HF followed by an MFJ-949E T match ATU. The MFJ949E stands on its insulating feet on a large conductive sheet to serve as the ground, the balun in connected to the ATU input jack and the input jack of the balun is grounded to the aluminium sheet. A banana jack adapter is connected to the ATU Coax 1 output jack, and resistors of 50Ω and 100Ω connected from those terminals to provide a slightly asymmetric load. The voltage between ground and each of the output terminals was measured with a scope, and currents calculated. Above are the measured output voltage waveforms at 14MHz. The meanings of currents used here is given at Differential and common mode components of current in a two wire transmission line. Lets work out the current amplitudes. Above, V1 (yellow) is 4.8divpp, V2 (cyan) is 7.4divpp. I1=V1/50=4.8*0.2/50=19.2mApp. I2=V2/100=7.4*0.2/100=14.8mApp. Expanding the timebase allows better measurement of the phase difference. V2 lags by a half cycle and 7.5µs, so V2 phase is -180-7.5e-9*14e6*360=-180-38=-218°. Lets calculate the common mode and differential component of current in each load resistor. We will use Python as it handles complex numbers. >>> i1=0.0192 >>> i2=0.0148*(math.cos(-218/180*math.pi)+1j*math.sin(-218/180*math.pi)) >>> ic=(i1+i2)/2 >>> abs(2*ic) >>> id=(i1-i2)/2 >>> abs(id) >>> abs(2*ic)/abs(id) >>> 20*math.log(abs(2*ic)/abs(id))/math.log(10) So, the differential component of current is 16.1mApp, and the total common mode current is 11.8mApp, the total common mode current is more than a two thirds the differential current or 2.7dB less than differential current. By any standard, this is appalling balance. The measurements reported here are for a specific scenario (components, frequency and load), and should not be simply extrapolated to other scenarios. The calculated imbalance if you like applies to the specific test circuit, and cannot really be extended to use of this balun in an antenna system scenario. Inherent balance The problem starts with that it is near impossible to build such an ATU with perfect symmetry, meaning the distributed inductances and capacitances to ground are symmetric so that with a symmetric load, the entire system was symmetric and there was very low common mode load current. Achieving that symmetry does not guarantee symmetric currents in an asymmetric load. Fig 4 and the associated text at Balanced ATUs and common mode current deal with this problem. “Inherent balance” is a belief of the very naive… and snake oil salesmen who would relieve them of their money whilst selling the satisfaction that they have something rather special! Superlatives like “true balanced tuner”, “fully balanced tuner”, “superb current balance” are bait for naive hams who do not test the claims, hams who do not measure the balance objective… common mode current. Continued at Inherently balanced ATUs – part 2 .
{"url":"https://owenduffy.net/blog/?p=14477","timestamp":"2024-11-09T17:40:59Z","content_type":"text/html","content_length":"61247","record_id":"<urn:uuid:29377826-d726-4d2c-b8eb-24c6f8ff42bb>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00441.warc.gz"}
Printable Blank Multiplication Table 1 10 Charts Worksheet In PDF | Multiplication Chart Printable Printable Blank Multiplication Table 1 10 Charts Worksheet In PDF Printable Blank Multiplication Table 1 10 Charts Worksheet In PDF Printable Blank Multiplication Table 1 10 Charts Worksheet In PDF – A Multiplication Chart is an useful tool for kids to find out just how to increase, divide, and locate the tiniest number. There are many usages for a Multiplication Chart. These handy tools assist children recognize the process behind multiplication by using colored paths and also filling out the missing out on items. These charts are free to download and print. What is Multiplication Chart Printable? A multiplication chart can be utilized to aid youngsters learn their multiplication facts. Multiplication charts been available in several types, from complete web page times tables to single web page ones. While specific tables serve for presenting chunks of information, a complete web page chart makes it less complicated to review facts that have actually already been grasped. The multiplication chart will commonly include a left column as well as a top row. When you want to locate the product of 2 numbers, pick the first number from the left column as well as the second number from the leading row. Multiplication charts are handy knowing tools for both youngsters as well as grownups. Multiplication Table Printable 1-10 are offered on the Internet and also can be published out as well as laminated for resilience. Why Do We Use a Multiplication Chart? A multiplication chart is a diagram that reveals exactly how to increase two numbers. You pick the initial number in the left column, relocate it down the column, and also then select the 2nd number from the leading row. Multiplication charts are handy for lots of reasons, consisting of aiding children discover how to split and streamline fractions. Multiplication charts can additionally be practical as workdesk resources due to the fact that they serve as a constant pointer of the trainee’s progression. Multiplication charts are also beneficial for aiding pupils memorize their times tables. As with any skill, remembering multiplication tables takes time and practice. Multiplication Table Printable 1-10 Free Printable Multiplication Table 1 10 Chart Template PDF Best Multiplication Table Printable 1-10 You’ve come to the appropriate area if you’re looking for Multiplication Table Printable 1-10. Multiplication charts are offered in different styles, consisting of full dimension, half dimension, as well as a range of charming designs. Some are vertical, while others feature a straight layout. You can additionally locate worksheet printables that include multiplication formulas and also math Multiplication charts as well as tables are crucial tools for kids’s education and learning. These charts are fantastic for usage in homeschool math binders or as classroom posters. A Multiplication Table Printable 1-10 is a valuable tool to enhance mathematics facts as well as can help a youngster learn multiplication rapidly. It’s likewise an excellent tool for avoid counting and also finding out the times tables. Related For Multiplication Table Printable 1-10
{"url":"https://multiplicationchart-printable.com/multiplication-table-printable-1-10/printable-blank-multiplication-table-1-10-charts-worksheet-in-pdf-4/","timestamp":"2024-11-11T18:07:28Z","content_type":"text/html","content_length":"27600","record_id":"<urn:uuid:498279f7-5e71-4c3c-a308-ef8a697cf205>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00496.warc.gz"}
Understanding Alternate Interior Angles: Definition and Properties - Chimpvine Understanding Alternate Interior Angles: Definition and Properties Alternate Interior Angles In the realm of geometry, the concept of alternate interior angles holds significant importance. Let’s delve into the definition and properties of alternate interior angles to gain a comprehensive understanding of their role in geometric configurations. What are Alternate Interior Angles? Alternate interior angles are a pair of angles that are formed when a transversal intersects two lines. They are located on opposite sides of the transversal and inside the two lines, creating a distinct geometric relationship. Let’s understand the concept of alternate interior angles in a simple manner. Alternate interior angles are like secret buddies hiding within parallel lines that get crossed by another line (usually called a transversal). Imagine two parallel lines with a third line crossing through them. Where the third line cuts through the first two, it creates four angles. Alternate interior angles are the ones on the inside, but opposite sides of the transversal. They’re like a pair of cousins sitting across from each other at a dinner table, but in geometry! These angles have a special relationship: they’re always equal. So, if you measure one of them, you’ll find that the other one has the same measurement. It’s like they’re twins – whatever one does, the other follows suit. This concept is super handy because it helps us solve all sorts of geometry problems, from figuring out angles in shapes to understanding how lines intersect. Identifying Alternate Interior Angles To identify alternate interior angles, one must recognize the pattern of their positioning when a transversal intersects two lines. Understanding the properties and congruence of these angles is essential in various geometric applications. Here’s a simple step-by-step guide to spot alternate interior angles: Find Parallel Lines: First, look for two lines that are parallel to each other. They should be like train tracks – never crossing each other, but running side by side. Locate the Transversal: Next, find another line that crosses (or intersects) those parallel lines. This line is called the transversal. Identify Interior Angles: Now, focus on the angles formed on the inside of the parallel lines, but on opposite sides of the transversal. These are your alternate interior angles. Check for Equality: Remember, alternate interior angles are always equal. So if you measure one angle and find its value, you’ll see that the other alternate interior angle has the same measurement. Keep in mind this neat trick: if you have parallel lines and a transversal, and you spot equal interior angles on one side of the transversal, you can bet there are equal alternate interior angles hiding on the other side! Example: Find the value of X and Y in the given figure We know that, The alternate interior angle of ∠X is 90°, so ∠X = 90°. The alternate interior angle of ∠Y is 90°, so ∠X = 90°. Summary: This example illustrates the process of identifying and understanding alternate interior angles in the context of parallel lines intersected by a transversal. By recognizing the congruence and properties of alternate interior angles, one can effectively analyze geometric configurations and make accurate deductions. 1. Angle Measurement Tip: When lines are parallel, alternate interior angles are congruent, allowing for the calculation of angle measures based on known values. 2. The Architect’s Blueprint Tip: Utilize the properties of alternate interior angles to ensure the symmetry and precision of architectural structures. 3. Engineering Precision Tip: Leverage the congruence of alternate interior angles to optimize the design and functionality of engineering projects. Scenario: Architectural Design Architects use the concept of alternate interior angles to ensure the symmetry and balance of architectural structures, creating visually appealing and structurally sound designs. Scenario: Engineering Precision Engineers apply the properties of alternate interior angles to optimize the functionality and efficiency of mechanical and structural systems, ensuring precision and accuracy in their designs. Alternate interior angles are a pair of angles that are formed when a transversal intersects two lines. They are located on opposite sides of the transversal and inside the two lines. Understanding alternate interior angles is crucial in geometry as it helps in proving theorems, solving geometric problems, and determining the congruence of angles and lines. Alternate interior angles are related in such a way that they are always congruent, meaning they have the same measure or size. Yes, the concept of alternate interior angles can be applied in various real-life scenarios such as architecture, engineering, and design to ensure the accuracy and symmetry of structures. The properties of alternate interior angles include being congruent, forming a ‘Z’ shape when lines are parallel, and playing a key role in the proof of theorems related to parallel lines and Like? Share it with your friends
{"url":"https://site.chimpvine.com/article/alternate-interior-angles/","timestamp":"2024-11-12T22:42:02Z","content_type":"text/html","content_length":"223860","record_id":"<urn:uuid:adb19952-784c-4a00-b4af-c1536384a26d>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00852.warc.gz"}
Forecasting epidemic trajectories: Time Forecasting epidemic trajectories: Time Series Growth Curves package tsgc Michael Ashby, Andrew Harvey, Paul Kattuman, Craig Thamotheram This paper documents the Time Series Growth Curves ( ) package for , which is designed for forecasting epidemics, including the detection of new waves and turning points. The package implements time series growth curve methods founded on a dynamic Gompertz model and can be estimated using techniques based on state space models and the Kalman filter. The model is suitable for predicting future values of any variable which, when cumulated, is subject to some unknown saturation level. In the context of epidemics, the model can adjust to changes in social behavior and policy. It is also relevant for many other domains, such as the diffusion of new products. The package is demonstrated using data on COVID-19 confirmed cases. Outbreaks of infectious diseases with epidemic potential require real-time responses by public health authorities. Accurate real-time forecasting of the trajectory of the epidemic over the near future is of great value in this regard. The R package, tsgc, is intended for use in monitoring and forecasting the progress of an epidemic, including the detection of new waves and turning points. It develops and implements time series growth curve methods first reported in Harvey and Kattuman (2020) (hereinafter referred to as HK). HK develop a class of time series models for predicting future values of a variable which, when cumulated, is subject to an unknown saturation level. In a single wave of an epidemic, as more and more people get infected, the pool of susceptible individuals dwindles. This results in the decline of new infections, and the cumulative number of infections approaches its saturation level. The model can take account of deviations relative to this canonical trajectory due to changes in social behavior and policy. Models in this family are relevant for many other disciplines, such as marketing (when estimating the demand for new products). While attention here is focused on the spread of epidemics and the applications used for illustration relate to coronavirus, this package is designed with a view to wider applicability. Given the number of different modeling approaches for epidemics, there are many notable packages that can be used for monitoring epidemics. For the most part, these seek to model explicitly the mechanism by which the disease spreads through the population. For example, EpiModel (Jenness, Goodreau, and Morris 2018), EpiEstim (Cori et al. 2013), epinowcast (Abbott and Monticone 2021) to name a few, can be categorized as belonging to the class of mechanistic models in the language of philosophy of science, in that they require structural knowledge of the disease spread mechanism in order to obtain predictions. In contrast, the empirical approach implemented in tsgc falls into the class of models described as phenomenological. Although it is motivated by the archetypal pattern in the dynamics of disease spread, it does not rely on structural assumptions derived from epidemiological theory. There are advantages to not requiring assumptions about values of parameters relating to, inter alia, disease infectiousness, disease severity, or contact patterns, which are difficult to pin down with sufficient precision, especially in real-time during an epidemic. Our approach makes minimal assumptions and merely requires past observations of the epidemic variable of interest, to which we apply time-series methods to provide predictions over short future time horizons. The model can be estimated quickly and straightforwardly, and subjected to standard diagnostic tests. A statistical model of this type is a useful complement to mechanistic models that attempt to describe the epidemic in terms of underlying processes. Section 2 sets out the state space formulation of the dynamic Gompertz growth curve and the way nowcasts and forecasts are obtained from predictive recursions. It is then shown how these numbers translate into estimates of the instantaneous reproduction number \(R_{t}.\) Section 3 explains how multiple waves can be accommodated by reinitializing the series at the start of new waves. The start of a new wave is not obvious in real time but a rule for triggering reinitialization that works well in practice is presented. Section 4 describes the functionality of tsgc. Section 5 sets out a full working example of the use of the package to forecast COVID infection in Gauteng province in South Africa. Section 6 concludes. Gompertz curve Our model is based on the sigmoidal growth curve pattern that characterizes epidemics. We start by assuming that the cumulative number of cases follows a Gompertz curve, which is a parsimonious model for the canonical sigmoid shape of cumulative case numbers in a one-wave epidemic. Over the course of a wave, the number of new infected cases increases up to a peak before declining to zero as the pool of susceptible individuals declines. Specifically, if the cumulative number of cases at time \(t\), \(\mu(t)\), follows a Gompertz curve, we can write \[\mu(t) = \bar{\mu}\exp\{\gamma_0 e^{\ gamma t}\},\] where \(\bar{\mu}\) is the unknown saturation level for the cumulative number of cases, \(\gamma_0 < 0\) is a parameter related to \(\mu(0)\) and \(\gamma < 0\) is the growth rate parameter. Defining \(\dot{\mu}(t) \equiv d\mu(t)/dt\) and \(g(t) \equiv {\dot\mu(t)}/{\mu(t)}\), it is straightforward to show that \[ \ln g(t) &= \delta + \gamma t, & (1) \]where \(\delta = \ln \gamma_0 \gamma\). The observational model needs to be specified in discrete, rather than continuous, time. This is straightforward. Let \(Y_t\) be the observed cumulative number of cases on day \(t\) and \(y_t = Y_t-Y_{t-1}\) be the number of daily new cases. We can then define the growth rate of \(Y_t\) as \(g_t = y_t/Y_{t-1}\) and replace \(\ln g(t)\) with \(\ln y_{t}-\ln Y_{t-1}\). Dynamic Gompertz model The deterministic trend implied by (1) is too inflexible for practical time-series modeling of an epidemic. Replacing it with a stochastic trend allows the model to adapt to changes in dynamics during the course of the epidemic. We call this stochastic-trend counterpart of (1) the dynamic Gompertz model. It is a local linear trend model specified as \[ \ln g_{t}&=\delta_{t}+\varepsilon_{t}, \;\;\;\;\;\varepsilon_{t}\sim NID(0,\sigma_{\varepsilon }^{2}),\;t=2,...,T, & (2) \] where \(\ln g_{t}=\ln y_{t}-\ln Y_{t-1}\) and \[\begin{eqnarray} \delta_ {t} &=&\delta_{t-1}+\gamma_{t-1}, & (3)\\ \gamma_{t} &=&\gamma_{t-1}+\zeta_{t},\;\;\; \zeta_{t}\sim NID(0,\sigma_{\zeta }^{2}), & (4) \end{eqnarray}\] where the disturbances \(\varepsilon_{t}\) and \ (\zeta_{t}\) are mutually independent, and \(NID(0,\sigma^2)\) denotes normally and independently distributed with mean zero and variance \(\sigma^2\). Note that the larger the signal-to-noise ratio, \(q_{\zeta }=\sigma_{\zeta }^{2}/\sigma_{\varepsilon }^{2}\), the faster the estimate of the slope parameter, \(\gamma_t\), which can be interpreted as the growth rate of the growth rate of cumulative cases, changes in response to new observations. Conversely, a lower signal-to-noise ratio induces more smoothness to the estimates. When \(\sigma_{\zeta }^{2}=0\), the trend is deterministic as in (1). State space form and estimation It is convenient to write the dynamic Gompertz model in general state space form: \[\begin{align*} \ln g_t &= Z\alpha_t + \varepsilon_t & \varepsilon_{t} \sim NID(0,\sigma^2_{\varepsilon})\\ \alpha_ {t+1} &= T\alpha_{t} + R \eta_t & \eta_{t} \sim NID(0,Q) \end{align*}\] with \[\begin{equation*} \alpha_t = \left(\delta_t, \gamma_t\right)', \; Z = \left(1, 0\right), \; \eta_t = \left(0, \zeta_t\ right)', \; %\end{equation*} %\begin{equation*} T = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \; R = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \; Q = \begin{pmatrix} 0 & 0 \\ 0 & \sigma^2_{\ zeta} \end{pmatrix}. \end{equation*}\] This model can be estimated using techniques based on the Kalman filter once a prior is specified. The prior is \[\begin{equation*} (\delta_1, \gamma_1)' \sim N(a_1,P_1), \end{equation*}\] where \(a_1\) is a \(2\times 1\) vector of prior means and \(P_1\) a \(2\times 2\) prior variance matrix. We use a diffuse prior due to the absence of prior information about the epidemic when the model is first estimated: i.e., we set \(a_1 = (0,0)'\), \(P_1=\kappa I\), and let \(\kappa \to \infty.\) Model estimation, including implementation of the diffuse prior, is carried out using the KFAS package Helske (2017). The Kalman filter outputs estimates of the state vector \((\delta_{t},\gamma_{t})^{\prime }.\) The estimates at time \(t\) conditional on information up to and including time \(t\) are denoted \((\ hat\delta_{t \mid t},\hat\gamma_{t \mid t})^{\prime }\) and given by the contemporaneous filter; the predictive filter estimates the state at time \(t+1\) from the same information set, outputting \ ((\hat\delta_{t+1 \mid t},\hat\gamma_{t+1 \mid t})^{\prime }.\) It may be useful to review past movements of the state vector \((\delta_{t},\gamma_{t})^{\prime }.\) This can be done using the smoothed estimates \((\hat\delta_t,\hat\gamma_t)^{\prime}\), which denotes the estimates of the state vector at time \(t\) based on all \(T\) observations in the series. Estimation of the unknown variance parameters (\(\sigma^2_{\varepsilon}\) and \(\sigma^2_{\zeta}\)) is by maximum likelihood (ML) and is carried out using KFAS following the procedure described in Helske (2017). We retain the option of either estimating the signal-to-noise ratio \(q_{\zeta}\), or of fixing it at a plausible value. In practice, for coronavirus applications, we set the value of \(q_{\zeta}\) based on experience and judgment, reducing the number of parameters to be estimated by one. Tests for normality and residual serial correlation are based on the standardized innovations, that is one-step ahead prediction errors, \(v_{t}=\ln g_{t}-\delta_{t \mid t-1},\) \(t=3,...,T.\) Daily effects, which are generally quite pronounced in the coronavirus data, can be included in the model as described in the Appendix. Forecasts and peak prediction Forecasts of future observations are obtained from the predictive recursions \[ \nonumber \widehat{g}_{T+\ell \mid T} &=\exp (\hat\delta_{T\mid T}+\hat\gamma _{T\mid T}\ell ),\;\;\;\;\ell =1,2,...\\ \widehat{\mu }_{T+\ell \mid T} &=\widehat{\mu }_{T+\ell -1\mid T}(1+ \widehat {g}_{T+\ell \mid T}) \] so that \[ \widehat{y}_{T+\ell \mid T}&=\widehat{g}_{T+\ell \mid T}\widehat{\mu } _{T+\ell -1\mid T} = Y_T \exp\hat\delta_{T+\ell|T} \prod_{j=1}^{\ell-1}(1+\exp \hat\delta_ {T+j|T}) & (5) \] and \(\widehat{Y}_{T+\ell \mid T}=\widehat{\mu }_{T+\ell \mid T};\) the initial value is \(\widehat{\mu }_{T\mid T}=Y_{T}.\) We construct forecast intervals for \(y_t\) based on the prediction intervals for \(\delta_t\). The conditional distribution of future values of \(\hat\delta_{t}\) is Gaussian. We replace \(\hat\ delta_{T+j|T}\) in (5) with the upper bound of a prediction interval for \(\hat\delta_{T+j|T}\) to compute the upper bound of our forecast interval for \(y_{T+\ell}\) and likewise for the lower bound. In effect, the forecast intervals are based on inference on the log cumulative growth rate, \(\delta_{t}\). The filtered growth rate \(\hat{g}_{y,t\mid t}\) of new cases \(y_{t}\), can be extracted from the continuous-time incidence curve: \(\mu^{\prime}(t) = g(t) \mu(t)\), where \(\mu(t)\) is the growth curve and \(g(t)\) is its growth rate. Taking logarithms and differentiating we get \[ \hat{g}_{y,t\mid t}&=\hat{g}_{t\mid t}+\hat\gamma_{t\mid t}, & (6) \] where \(\hat{g}_{t\mid t}=\exp \hat\delta_{t\mid t}.\) The sampling variability of \(\hat{g}_{t\mid t}\) is dominated by that of \(\hat\gamma_{t\mid t}\) (see Harvey and Kattuman (2021)). Therefore when constructing confidence intervals for \(\hat{g}_{t\mid t}\) we treat \(\hat{g}_{y,t}\) as if it has a normal distribution centered on \(\hat{g}_{y,t\mid t}\) with variance \(\text{Var}(\hat\ gamma_{t\mid t})\). Even when the nowcast \(\hat{g}_{y,T\mid T}\) is positive and daily cases are growing, there will be a saturation level for the cumulative total, \(Y_{t},\) so long as \(\hat\gamma_{T\mid T}\) is negative. The nowcasts of \(y_{t}\) peak when \(\hat{g}_{y,t\mid t}=0,\) which requires \(\hat\gamma_{t\mid t}\) to be sufficiently negative to outweigh \(\hat{g}_{t\mid t},\) which is, of course, always positive. This can be seen from the expression for the growth rate of daily cases: \[ \hat{g}_{y,T\mid T}&=\exp \hat\delta_{T\mid T}+\hat\gamma_{T\mid T}=\hat{g}_{T\mid T}+\hat\gamma_{T\mid T}. & (7) \] When \(\hat\gamma_{T\mid T}\) is negative, there is a flattening of the curve and a signaling of an upcoming peak in the trend of \(y_{t}.\) As shown in [HK, p10], the peak in the trend is predicted to be \(\ell _ {T}\) days ahead where \[\ell_{T}=\frac{\ln (-\hat\gamma_{T\mid T})-\hat\delta_{T\mid T}}{\hat\gamma_{T\mid T}}=\frac{\ln (-\hat\gamma_{T\mid T}/\hat{g}_{T\mid T})}{\hat\gamma_{T\mid T}},\;\;\;\; -\hat{g}_{T\mid T}<\hat\ gamma_{T\mid T}<0. \] The generation of forecasts is demonstrated in Section 4. Reproduction Number \(R_t\) The path of the epidemic is best tracked by nowcasts and forecasts of \(g_{y,t}\), the growth rate of \(y_t\), which are constructed by HK from the filtered estimates in the state space model, (2), (3) and (4). Wallinga and Lipsitch (2007) describe how the estimates of \(g_{y,t}\) can be translated into estimates of the instantaneous reproduction number \(R_{t}.\) Harvey and Kattuman (2021) propose \[ \widetilde{R}_{t,\tau}&=1+\tau g_{y,t\mid t}\;\;\;\text{or}\;\;\;\widetilde{R}_{\tau,t}^{e} =\exp(\tau g_{y,t\mid t}),\;\;\; & (8) \] where \(\tau\) is the generation interval – the typical number of days between an infected person becoming infected and them transmitting the disease to someone else. We construct credible intervals for \(\widetilde{R}_{t,\tau}\) and \(\widetilde {R}_{\tau,t}^{e}\) by substituting the upper and lower bounds of the confidence intervals for \(g_{y,t}\) into (8) to get the upper and lower bounds of the credible intervals. See Harvey, Kattuman, and Thamotheram (2021) for an application. The estimates of \(R_{t}\) can be used for tracking and forecasting the epidemic. The nowcasts of \(y_{t}\) peak when \(\hat{g}_{y,t\mid t}=0\), corresponding to \(\widetilde{R}_{t,\tau}=\widetilde {R}_{\tau,t}^{e}=1.\) Based on (7), predictions of \(g_{y,t}\) are given by \[ \hat{g}_{y,T+\ell\mid T} & =\exp\hat\delta_{T+\ell\mid T}+\hat\gamma_{T+\ell\mid T}=\exp(\hat\delta_{T\mid T}+\hat\gamma_{T\mid T}\ell)+\hat\gamma_{T\mid T},\;\;\;\;\;\ell=1,2,... & (9) \] We can then obtain predictions of \(R_{t},\) as in (8). If \(\hat\gamma_{T\mid T}\) is zero, the estimated growth of \(y_{t}\) is exponential and it is helpful to characterize it by the doubling time, \(\ln2/\hat{g}_{y,T\mid T} =0.693\exp(-\hat\delta_{T\mid T}).\) When \(\exp \hat\delta _{T\mid T}+\hat\gamma _{T\mid T}>0\), the nowcast \(\hat{g}_{y,T\mid T}\) is positive and the estimate of \(R_{t}\) given by (8) is greater than one. So long as \(\hat\gamma _ {T\mid T}\) is negative, then as \(T\rightarrow \infty ,\) \(\widetilde{R}_{\tau ,T+\ell \mid T}^{e}\rightarrow \exp (\tau \hat\gamma _{T\mid T})<1,\) and a saturation level for \(Y\) appears on the We now turn to case where \(\gamma _{t}\) potentially turns positive in a typically short-lived phase, as a new wave emerges. The coronavirus pandemic was characterized by multiple waves punctuated by plateaus. At the beginning of a new wave the growth rate of daily cases, \(g_{y,{t}}\), turns positive. The initial surge may be explosive to the point where the growth is super exponential. In this case, \(\gamma _{t}\), the growth rate of \(g_t\) (which is the growth rate of cumulative cases) can also turn positive, with no peak in prospect for \(y_{t}.\) Such a phase can be expected to be transient, with \(\gamma _{t}\) dropping back to zero (exponential growth in infection, accompanied by an upcoming peak in \ (y_{t}\)), and then falling below zero (sub-exponential growth in infection). From the point-of-view of forecasting an epidemic, a peak must be in prospect even if it can only be expected some way into the future. There is thus a need for a solution to the problem of the estimated \(\hat\gamma _{t\mid t}\) rising to positive values as it adapts to the upward surge in \(y_{t}\), and remaining positive for any protracted period. This upward shift in \(\hat\gamma _{t\ mid t}\) can be averted by reinitializing the \(\ln g_{t}\) series at the start of a new wave. This involves setting the cumulative total of cases \(Y_t\) back to zero at, or around, the start of a new wave and setting \(\gamma_t\) to zero so as to impose exponential growth. From the point-of-view of the relationship \(g_{y_{t}} = g_{t} + \gamma _{t},\) the re-initialization effectively shifts surplus \(\gamma_t\) emanating from super-exponential growth, into \(\delta_t\) and therefore into \(g_t\) (since \(g_t = \exp \delta_t\)). Note that, on the date of the re-initialization, \(g_{y,t} = g_t\), since \(\gamma_t=0\), and both \(g_{y,t}\) and \(g_t\) will be high because a new wave is taking off. Reinitalizing the data series Let \(t=r\) denote the re-initialization date and let \(r_0\) denote the date at which the cumulative series is set to 0. Then: \[ \ln g_t &= \ln y_t - \ln Y_{t-1} & t=1, \ldots, r \notag \\ \ln g_t^r &= \ln y_t - \ln Y_{t-1}^r & t=r+1, \ldots, T && (10)\\ Y_{t}^{r}&=Y_{t-1}^{r}+y_{t} & t=r,\ldots,T && (11) \] where \(Y_{t}^{r}\) denotes the cumulative cases after re-initialization. We set \(Y_{r-1}^{r}=0\), so that the growth rate of cumulative cases is available from \(t=r+1\) onwards. Note that \(Y_{t}^ r = Y_{t} - Y_{r_0}\). The gap between the two series becomes apparent by writing \[$$\ln g_t^r = \ln g_t + \ln \frac{Y_{t-1}}{Y_{t-1}^r} = \ln g_t + \ln \frac{Y_{t-1}}{Y_{t-1} - Y_{r_0}} \;\;\;\; t=r+1, \ldots, T \;\;\;\;\;\;\;\; (12)$$\] In the next section, where we illustrate the working of the program, it can be seen that in contrast to the original \(\ln g_t\) series, which continues to increase, the reinitialized \(\ln g_t\) series begins to decrease from the reinitialization date. The reinitialization enforces the canonical Gompertz curve with the log of growth rate of cumulative cases sloping down. Reinitalizing the model We reinitialize the model by specifying the appropriate prior distribution for the initial states: \(\alpha^r_1 \sim N(a^r_1,P^r_1)\) with \(a_1^r = (a_{\delta,1}^r,a_{\gamma,1}^r)'\) and \(P_1^r\) defined as follows. Let \(\mathcal{F}_t = \ln g_t, \ln g_{t-1}, \ldots, \ln g_1\) and define \(a_t = E(\alpha_t|\mathcal{F}_{t-1})\), \(a_{t|t} = E(\alpha_t|\mathcal{F}_t)\), \(P_t = Var(\alpha_t|\ mathcal{F}_{t-1})\), \(P_{t|t} = Var(\alpha_t|\mathcal{F}_t).\) Then, \[\begin{align*} a^r_{\delta,1} &= a_{\delta,r+1} + \ln(Y_r/y_r) \\ a^r_{\gamma,1} &= 0\\ P^r_1 &= P_{r+1}, \end{align*}\] where \(a_{\delta,r+1}\) and \(P_{r+1}\) are obtained from the non-reinitialized model estimated over \(t=1, \ldots, r\) via the usual Kalman filter recursions. Adding \(\ln(Y_r/y_r)\) to \(a_{\ delta,r+1}\) corrects for the shift down in the log cumulative cases caused by reinitializing the cumulative case series. Setting \(a^r_{\gamma,1} = 0\) ensures the model starts off with exponential, rather than super-exponential, growth. We reinitialize the model through the priors in this way rather than simply re-estimating the model from scratch for two reasons. First, it allows us to impose a proper (rather than diffuse) prior centered on zero for \(\gamma\), so that the starting point is exponential growth. Second, it enables us to make use of data from before the reinitialization date. One needs a reasonable sample size for the estimated model and forecasts to be reliable, but if a new wave is taking off, forecasts need to be generated quickly. This was particularly true with the emergence of the Omicron variant, which caused an explosive increase in infection over a short period of time. We do not re-estimate the \(\sigma^2_{\varepsilon}\) or \(\sigma^2_{\zeta}\) parameter in the reinitialized model. Rather, we use the values estimated in the original model over \(t=1, \ldots, r\). The one-step-ahead prediction error at \(t=r\) is the same in both the initialized and reinitialized models, but after \(t=r\), the prediction errors diverge. The reinitialization procedure is very similar in the case where we have seasonal terms. If we let \(\alpha_{s,t}\) be the vector of seasonal states and maintain an analogous notation to that above, the prior mean of the seasonal components in the reinitialized model is \[a^r_{s,1} = a_{s,r+1}.\] The prior variance of \(\alpha^r_1\) remains \(P_{r+1}\) where \(P_{r+1}\) is appropriately re-defined to include the seasonal term, as described in the Appendix. Functionality of tsgc The two main classes in tsgc are SSModelDynamicGompertz and SSModelDynGompertzReinit. These implement the models described in (2)-(4), with and without reinitialization, respectively. They both inherit from a common base class SSModelBase, which acts as a wrapper around KFAS to set up the state space model and define consistent update and estimation methods for it. The unknown parameters are estimated with the \$estimate method in both classes. This estimation returns an object of the FilterResults class, which is a wrapper around the KFAS KFS class with a date index and additional methods for prediction attached. The SSModelDynamicGompertz needs only a cumulative series \(Y\) as an input. In our application, this is the cumulative number of new coronavirus cases. There is an option to specify the signal-to-noise ratio \(q_{\zeta}\), rather than estimate it, and an option to specify the model to have a seasonal component, using the sea.type option. The period of a seasonal component is specified through the sea.period option. The SSModelDynGompertzReinit class allows the model to be estimated for a new wave without losing information from prior waves. It will accept the reintialization date specified by reinit.date or a FilterResults object from which it can extract the initial values. If the user wishes to reinitialize the model without using prior information (i.e. treat the new wave as an entirely separate epidemic), a reinitialization date can be specified through the renit.date option and use.presample.info can be set to FALSE. The FilterResults class contains prediction methods which can be applied to estimated dynamic Gompertz curve models (both reinitialized and non-reinitialized). get_growth_y will return filtered or smoothed estimates of the growth rate of new cases (\(g_t\)), while get_gy_ci will return the same with confidence intervals. Forecasts of the incidence variable (new cases, \(y_t\)) can be obtained with the predict_level call, and forecasts of all the states can be obtained with the predict_all call. Several functions are available to generate plots of smoothed and filtered estimates and forecasts. plot_forecast will plot actual and realised values of \(\ln(g_t)\). plot_gy and plot_gy_ci can be used to plot the smoothed or filtered growth rate, its components, and confidence intervals, respectively. Forecasts of the incidence variable (\(y_t\)) and forecast intervals can be plotted using plot_new_cases, while plot_holdout adds plots of prediction intervals and of realized outcomes over a holdout period to help evaluate forecast accuracy. Finally, the reinitialise_dataframe function can be used to reinitialise a dataframe at a given reinit.date. More details on how to use the methods and functions described are presented in the following section. Illustration of the tsgc package In this section we provide a full working example of the tsgc package in R which implements the modeling framework for time series growth curves-based epidemic forecasting. tsgc comes with two example data sets relating to COVID-19: one for Gauteng province in South Africa (sourced from South Africa’s official coronavirus online news and information portal) and another for England (sourced from the official UK government dashboard for data and insights on coronavirus). In the example that follows, we use the data on confirmed cases in Gauteng. The data series is in cumulative form and is loaded as an xts object with a date index, as follows. New COVID-19 cases reported for Gauteng province and their centered 7-day moving average presented in Figure 1 show a sequence of four waves over the period between 10 March 2020 and 5 January 2022. Setting up the forecasting exercise We begin by specifying a number of options for the forecasting exercise, as defined below. • Y is the data, in the form of time series of cumulative confirmed cases. In this example the object holding this series is called gauteng. • estimation.date.start is the date of the first observation in the sample to be used for estimating the model. By default, it is the first date in the xts object Y. • estimation.date.end is the date of the last observation in the sample to be used for estimating the model. By default, it is the last date in the xts object Y. • n.forecasts is the number of days or periods for which forecasts are to be made. E.g., if n.forecasts = 14, forecasts will be generated for up to 14 days following estimation.date.end. • q is the signal-to-noise ratio, which controls the smoothness of the estimated trend. A lower value will lead to more smoothness. By default, we use q = 0.005, which in our experience ensures a good balance between the smoothness of the trend and the speed with which changes in estimates respond to new observations. Alongside, q can be estimated and compared with the default value. • confidence.level sets the coverage of the confidence intervals for \(\ln(g_t)\) which is then used to generate the prediction intervals for forecasts. Here, we use 0.68, corresponding to the probability that the forecast lies within one standard deviation of the point forecast. • plt.length sets a truncation date to enhance the clarity of plots, e.g. showing only the last 30 days of the estimation sample. The date range for plotting can be set as plt.length days upto In this example the data is the cumulative confirmed cases time series for Gauteng. The start and end dates (estimation.date.start and estimation.date.end) that define the sample used for estimation are chosen as appropriate for the exercise. In this example, we begin with the sample period set from 1 February to 19 April 2021. This marks the beginning of the third wave in Gauteng as can be seen in Figure 1. The options are specified as below. We begin by selecting the data series (Y) for the defined sample period. idx.est = (zoo::index(Y) >= estimation.date.start) & (zoo::index(Y) <= estimation.date.end) y = Y[idx.est] We then estimate the model using a diffuse prior distribution for the initial state vector. The signal-to-noise ratio can be left as a free parameter to be estimated, as in the code below. In the rest of this example we estimate the model setting the signal-to-noise ratio at \(0.005.\) As mentioned, in our experience this value strikes a useful balance between the smoothness of the estimate of the slope parameter \(\gamma\), and the speed with which it adapts to new observations. We can now plot the forecast of \(\ln g_t\) - the log of the growth rate of \(Y\), the cumulative cases – which is the transformation of the data series that is taken to the model, and we can compare these forecasts to the actual \(\ln g_t\) series. We do this by passing the output (res) of the estimation step along with an evaluation sample to a plotting function. We specify the evaluation sample by converting the cumulative cases series to the log of the growth rate of the cumulative cases. First, we create the evaluation sample. tsgc::plot_forecast then creates and plots forecasts of \(\ln(g_t)\). y.eval = y.eval, n.ahead = n.forecasts, plt.start.date = tail(res$index, 1) - plt.length From these results we can recover the forecasts of new cases from 20 April 2021, with their prediction intervals. res, Y=y, plt.start.date = tail(res$index, 1) - plt.length To assess accuracy, we plot these forecasts against the actual new cases, that have been held back from the estimation sample, using the plot_holdout function. The model forecasts are compared with the the first differences of Y.eval, the cumulative series for the forecast window. tsgc::plot_holdout(res, Y=y, Y.eval = Y[(tail(res$index,1)+0:n.forecasts)], confidence.level = 0.68, date_format = "%Y-%m-%d") Figure 4 shows that the forecasts were accurate over the first seven days, with a mean absolute percentage error (MAPE) of 13.9%. Note that reported cases were unusually low on 27 April due to the fact that it is Freedom Day and a public holiday in South Africa. That day aside, over the six days from 28 April the MAPE was 12.8%. Over the full 14 days of the forecasts, the MAPE was 27%. As discussed earlier, the reproduction numbers \(R_t\) and their 68% credible intervals can be calculated. The plot in Figure 5 reveals that \(R_t\) remains above one through the period, indicating that the (third) wave in Gauteng had launched by this time. ## # A tibble: 7 × 5 ## Date Rt lower upper name ## <date> <dbl> <dbl> <dbl> <chr> ## 1 2021-04-13 1.18 1.05 1.34 Gauteng ## 2 2021-04-14 1.14 1.01 1.29 Gauteng ## 3 2021-04-15 1.13 1.01 1.28 Gauteng ## 4 2021-04-16 1.12 0.996 1.27 Gauteng ## 5 2021-04-17 1.06 0.941 1.20 Gauteng ## 6 2021-04-18 1.02 0.906 1.15 Gauteng ## 7 2021-04-19 1.06 0.944 1.20 Gauteng A CSV file (named ) is written to the directory specified. The forecast options specified earlier are retained. In all countries the coronavirus pandemic was characterized by a series of recurring waves due to a combination of biological, behavioral, and environmental reasons. In an epidemic, the onset of a new wave is signalled when the slope parameter \(\gamma\), which measures of the growth rate of the growth rate of new cases, rises above zero for a sustained period. Such a super-exponential phase of the epidemic in which the growth rate of new cases is itself increasing over time is typically short. This section illustrates the reinitialization procedure which allows us to apply the model to the new wave as it begins, without jettisoning information from the wave that has just ended. We extend the estimation window to 25 June 2021, by which date the third wave is well on course with its peak within sight (see Figure 1). All other options remain the same. Triggering reinitialization It is not obvious, a priori, when precisely to reinitialize the model. Based on experiments a reasonable option is to reinitialize when the estimate of the slope parameter, \(\gamma_t\), breaches a threshold of two standard errors above zero, and at that point backdate reinitialization to when the estimate of \(\gamma_t\) first turned positive. In applying the above heuristic there is a choice between the filtered slope estimate and the smoothed slope estimate. Experiments suggests that the greater noisiness of the filtered estimate of \(\gamma_t\) often triggers reinitialization too early. The smoothed estimate is more reliable. Figure 6 shows that for the third wave in Gauteng, the smoothed slope estimate exceeded twice its standard error on 1 May 2021, having risen above zero on 21 April 2021. The date for reinitialization is set accordingly. Estimating the reinitialized model SSModelDynGompertzReinit takes the same arguments as SSModelDynGompertz, with the addition of the reinit.dates argument. model <- SSModelDynGompertzReinit$new( Y = y, q = q, reinit.date = as.Date(reinit.dates, format = date.format) res.reinit <- model$estimate() We generate the reinitialized data frame by setting cumulative cases to \(0\) at the appropriate point, as discussed in the Theory section and extract the evaluation sample from the reinitialized data frame as below. y.eval.reinit <- Y %>% reinitialise_dataframe(., reinit.dates) %>% df2ldl() %>% subset(index(.) > tail(res.reinit$index,1)) Estimating the model with the reinitialized series, the actual and forecast \(\ln(g_t)\) can be plotted as in Figure 7, which is analogous to Figure 2 for the non-reinitialized series. y.eval = y.eval.reinit, n.ahead = n.forecasts, plt.start.date = tail(res.reinit$index, 1) - plt.length, title='Forecast of $\ln(g_t)$ after reinitialization.' The plot of the forecasts of new cases, and that of these forecasts against the actual number of new cases can be produced as before. See Figures 8 and 9. The trend has begun to turn down in model forecasts with with the reinitialized series. res.reinit, Y=Y.reinit, n.ahead=n.forecasts, plt.start.date = tail(res.reinit$index, 1) - plt.length tsgc::plot_holdout(res = res.reinit, Y=Y.reinit[index(y)], Y.eval = Y[(tail(res.reinit$index,1)+0:n.forecasts)], confidence.level = 0.68, date_format = "%Y-%m-%d") Comparing forecasts, without reinitialization the 14-day forecast MAPE is 41.9% (see Figure 10). With reinitialization, it falls to 20.2%. The corresponding MAPE figures for 7-day forecasts is 15.2% without reinitialization and 9.5% with reinitialization. Just as for the standard (non-reinitialized) model, the returned estimation results contained in res.reinit are a FilterResults object, and can be written to CSV using the write_results method. The tsgc package is based on a dynamic Gompertz curve model for the log of the growth rate of cumulative cases in an epidemic, with seasonal terms that capture day-of-the-week effects. The estimation is carried out using the KFAS, a package for state-space modeling in R. The Kalman filter is used to estimate the state vector at each time point. The filter is initialized using a diffuse prior for the initial state vector. We allow the option for the signal-to-noise ratio to either be estimated or fixed at some value based on experience and judgment. Future observations are forecast using predictive recursions. Epidemics are often characterized by multiple waves. A natural problem in this context is that there is very little data pertinent to the new wave available in its initial stages. The package employs a reinitialization method using priors in a way that allows data from before the beginning of the new wave to be used in estimation. The package also includes several functions for generating plots and forecasts. The package is demonstrated using COVID-19 data from South Africa, but it can be used to model and forecast any time series variable where a growth curve-like trajectory is expected. Examples might include sales of a new product, innovation adoption or website traffic. We thank The Cambridge Centre for Health Leadership & Enterprise, University of Cambridge Judge Business School, and Public Health England/UK Health Security Agency for generous support. We are indebted to Thilo Klein and Stefan Scholtes for constructive comments. Andrew Harvey’s work was carried out as part of the University of Cambridge Keynes Fund project `Forecasting and Policy in Environmental Econometrics’. Abbott, Sam, and Pietro Monticone. 2021. “Epiforecasts/Epinowcast: Evaluation in Germany Initial Release,” Cori, Anne, Neil M. Ferguson, Christophe Fraser, and Simon Cauchemez. 2013. “A New Framework and Software to Estimate Time-Varying Reproduction Numbers During Epidemics.” American Journal of Epidemiology 178 (9): 1505–12. Durbin, James, and Siem Jan Koopman. 2012. Time Series Analysis by State Space Methods. Oxford University Press. Harvey, Andrew, and Paul Kattuman. 2020. “Time Series Models Based on Growth Curves with Applications to Forecasting Coronavirus.” Harvard Data Science Review. ———. 2021. “A Farewell to R: Time-Series Models for Tracking and Forecasting Epidemics.” Journal of the Royal Society Interface 18 (182): 20210179. Harvey, Andrew, Paul Kattuman, and Craig Thamotheram. 2021. “Tracking the Mutant: Forecasting and Nowcasting COVID-19 in the UK in 2021.” National Institute Economic Review 256 (1): 110–26. Helske, Jouni. 2017. “KFAS: Exponential Family State Space Models in R.” Journal of Statistical Software 78 (i10). Jenness, Samuel M, Steven M Goodreau, and Martina Morris. 2018. “EpiModel: An R Package for Mathematical Modeling of Infectious Disease over Networks.” Journal of Statistical Software 84. Proietti, Tommaso. 2000. “Comparing Seasonal Components for Structural Time Series Models.” International Journal of Forecasting 16 (2): 247–60. Wallinga, J., and M. Lipsitch. 2007. “How Generation Intervals Shape the Relationship Between Growth Rates and Reproductive Numbers.” Proceedings of the Royal Society B 274: 599–604. Appendix: Incorporating seasonal terms into the state space model When we add a seasonal term to the model, the observation equation in the dynamic Gompertz curve (2) becomes \[\ln g_{t}=\delta_{t}+\nu_t+\varepsilon_{t},\;\;\;\;\;\varepsilon_{t}\sim NID(0,\sigma_{\ varepsilon }^{2}),\;\;\;\;\;t=s,...,T,\] where \(\nu_t\) is the seasonal component, \(\delta_t\) remains defined by (3), and \(\varepsilon_t\) remains the iid Normal disturbance. There are two options for specifying the evolution of the seasonal component. We can either use a trigonometric seasonal or a dummy variable seasonal. In our application, we use a trigonometric seasonal, although the two specifications are closely related. Indeed, Proietti (2000) shows that, under certain conditions, the two approaches are equivalent. In the trigonometric seasonal approach, the seasonal pattern is captured by a set of trigonometric terms at the seasonal frequencies \(\lambda_j = \frac{2 \pi j}{2}\) for \(j=1, \ldots s^*\), where \ (s^*=\frac{s}{2}\) if \(s\), the periodicity of the seasonal effect, is even, and \(s^*=\frac{s-1}{2}\) if \(s\) is odd Durbin and Koopman (2012). Our applications use daily data and we set \(s=7\) to capture day-of-the-week effects. Letting \(\nu_{j,t}\) be the effect of season \(j\) at time \(t\), the seasonal terms evolve according to \[ \nu_{t} &= \sum_{j=1}^{s^{*}} \nu_{j,t}, & (13) \] where \[ \nu_{j,t} &= \nu_{j,t-1} \cos \lambda_j + \nu^*_{j,t-1} \sin \lambda_j + \omega_{j,t} & (14)\\ \nu^*_{j,t} &= -\nu_{j,t-1} \sin \lambda_j + \nu^*_{j,t-1} \cos \lambda_j + \omega^*_{j,t}, \; \;\;\;\; j = 1, \ldots, s^*, & (15) \] and \(\omega_{j,t}\) and \(\omega^*_{j,t}\) are mutually independent, iid \(N(0,\sigma^2_{\omega})\) variables. When reinitializing the model with seasonal terms, \(P^r_1\) is a block-diagonal matrix based on \(P_{r+1}\) which sets the covariances between \((\delta_t,\gamma_t)'\) and \((\nu_{1,t},\nu_{2,t},\ ldots,\nu_{s^*,t})'\) to be zero. The covariance between \(\delta_t\) and \(\gamma_t\), as well as the covariances between \(\nu_{1,t},\nu_{2,t},\ldots,\nu_{s^*,t}\), are permitted to be non-zero and come directly from \(P_{r+1}\).
{"url":"https://cran.uvigo.es/web/packages/tsgc/vignettes/tsgc_vignette.html","timestamp":"2024-11-12T20:41:15Z","content_type":"text/html","content_length":"874431","record_id":"<urn:uuid:d6ce8382-73b1-4ada-9755-7f728588af4d>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00223.warc.gz"}
Capture-avoiding substitution in PLT Redex, Part 2 Following up on yesterday's post , there's another way to specify capture-avoiding substitution that has a convenient representation in Scheme. In the last decade, Pitts and Gabbay have built a research program on reasoning about binding using an algebra of names with as their fundamental operation. The notation (a b) ⋅ x means roughly "swap occurrences of names in the term ". This is very easy to code in a general way using S-expressions: (define-metafunction swap [(x_1 x_2 x_1) x_2] [(x_1 x_2 x_2) x_1] [(x_1 x_2 (any_1 ...)) ((swap (x_1 x_2 any_1)) ...)] [(x_1 x_2 any_1) any_1]) The new definition of is very similar to the one I posted yesterday, except instead of using it uses (define-metafunction subst [(x_1 e_1 (lambda (x_1) e_2)) (lambda (x_1) e_2)] [(x_1 e_1 (lambda (x_2) e_2)) ,(term-let ([x_new (variable-not-in (term e_1) (term x_2))]) (lambda (x_new) (subst (x_1 e_1 (swap (x_2 x_new e_2)))))))] [(x_1 e_1 x_1) e_1] [(x_1 e_1 x_2) x_2] [(x_1 e_1 (e_2 e_3)) ((subst (x_1 e_1 e_2)) (subst (x_1 e_1 e_3)))]) This corresponds to Pitts and Gabbay's definition of capture-avoiding substitution. The cool thing about is that its definition doesn't have to change as you add new expression forms to your language; it's completely oblivious to the binding structure of the expression, and in a sense to any of the structure of the expression. All it needs is the ability to visit every node in the tree. So S-expressions as a term representation and as a variable-freshening operation fit together very nicely to form a convenient implementation of capture-avoiding substitution in PLT Redex. No comments:
{"url":"http://calculist.blogspot.com/2007/05/capture-avoiding-substitution-in-plt_20.html","timestamp":"2024-11-07T19:24:53Z","content_type":"application/xhtml+xml","content_length":"49723","record_id":"<urn:uuid:fb5546ff-bc20-4e30-88e1-9d01e0157b8a>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00526.warc.gz"}
Quadratic Function - Standard Form, Formula, Examples A day full of math games & activities. Find one near you. A day full of math games & activities. Find one near you. A day full of math games & activities. Find one near you. A day full of math games & activities. Find one near you. Quadratic Function Quadratic functions are used in different fields of engineering and science to obtain values of different parameters. Graphically, they are represented by a parabola. Depending on the coefficient of the highest degree, the direction of the curve is decided. The word "Quadratic" is derived from the word "Quad" which means square. In other words, a quadratic function is a “polynomial function of degree 2.” There are many scenarios where quadratic functions are used. Did you know that when a rocket is launched, its path is described by quadratic function? In this article, we will explore the world of quadratic functions in math. You will get to learn about the graphs of quadratic functions, quadratic functions formulas, and other interesting facts about the topic. We will also solve examples based on the concept for a better understanding. 1. What is Quadratic Function? 2. Standard Form of a Quadratic Function 3. Quadratic Functions Formula 4. Different Forms of Quadratic Function 5. Domain and Range of Quadratic Function 6. Graphing Quadratic Function 7. Maxima and Minima of Quadratic Function 8. FAQs on Quadratic Function What is Quadratic Function? A quadratic function is a polynomial function with one or more variables in which the highest exponent of the variable is two. Since the highest degree term in a quadratic function is of the second degree, therefore it is also called the polynomial of degree 2. A quadratic function has a minimum of one term which is of the second degree. It is an algebraic function. The parent quadratic function is of the form f(x) = x^2 and it connects the points whose coordinates are of the form (number, number^2). Transformations can be applied on this function on which it typically looks of the form f(x) = a (x - h)^2 + k and further it can be converted into the form f(x) = ax^2 + bx + c. Let us study each of these in detail in the upcoming sections. Standard Form of a Quadratic Function The standard form of a quadratic function is of the form f(x) = ax^2 + bx + c, where a, b, and c are real numbers with a ≠ 0. Quadratic Function Examples The quadratic function equation is f(x) = ax^2 + bx + c, where a ≠ 0. Let us see a few examples of quadratic functions: • f(x) = 2x^2 + 4x - 5; Here a = 2, b = 4, c = -5 • f(x) = 3x^2 - 9; Here a = 3, b = 0, c = -9 • f(x) = x^2 - x; Here a = 1, b = -1, c = 0 Now, consider f(x) = 4x-11; Here a = 0, therefore f(x) is NOT a quadratic function. Vertex of Quadratic Function The vertex of a quadratic function (which is in U shape) is where the function has a maximum value or a minimum value. The axis of symmetry of the quadratic function intersects the function (parabola) at the vertex. Quadratic Functions Formula A quadratic function can always be factorized, but the factorization process may be difficult if the zeroes of the expression are non-integer real numbers or non-real numbers. In such cases, we can use the quadratic formula to determine the zeroes of the expression. The general form of a quadratic function is given as: f(x) = ax^2 + bx + c, where a, b, and c are real numbers with a ≠ 0. The roots of the quadratic function f(x) can be calculated using the formula of the quadratic function which is: • x = [ -b ± √(b^2 - 4ac) ] / 2a Different Forms of Quadratic Function A quadratic function can be in different forms: standard form, vertex form, and intercept form. Here are the general forms of each of them: • Standard form: f(x) = ax^2 + bx + c, where a ≠ 0. • Vertex form: f(x) = a(x - h)^2 + k, where a ≠ 0 and (h, k) is the vertex of the parabola representing the quadratic function. • Intercept form: f(x) = a(x - p)(x - q), where a ≠ 0 and (p, 0) and (q, 0) are the x-intercepts of the parabola representing the quadratic function. The parabola opens upwards or downwards as per the value of 'a' varies: • If a > 0, then the parabola opens upward. • If a < 0, then the parabola opens downward. We can always convert one form to the other form. We can easily convert vertex form or intercept form into standard form by just simplifying the algebraic expressions. Let us see how to convert the standard form into each vertex form and intercept form. Converting Standard Form of Quadratic Function Into Vertex Form A quadratic function f(x) = ax^2 + bx + c can be easily converted into the vertex form f(x) = a (x - h)^2 + k by using the values h = -b/2a and k = f(-b/2a). Here is an example. Example: Convert the quadratic function f(x) = 2x^2 - 8x + 3 into the vertex form. • Step - 1: By comparing the given function with f(x) = ax^2 + bx + c, we get a = 2, b = -8, and c = 3. • Step - 2: Find 'h' using the formula: h = -b/2a = -(-8)/2(2) = 2. • Step - 3: Find 'k' using the formula: k = f(-b/2a) = f(2) = 2(2)^2 - 8(2) + 3 = 8 - 16 + 3 = -5. • Step - 4: Substitute the values into the vertex form: f(x) = 2 (x - 2)^2 - 5. Converting Standard Form of Quadratic Function Into Intercept Form A quadratic function f(x) = ax^2 + bx + c can be easily converted into the vertex form f(x) = a (x - p)(x - q) by using the values of p and q (x-intercepts) by solving the quadratic equation ax^2 + bx + c = 0. Example: Convert the quadratic function f(x) = x^2 - 5x + 6 into the intercept form. • Step - 1: By comparing the given function with f(x) = ax^2 + bx + c, we get a = 1. • Step - 2: Solve the quadratic equation: x^2 - 5x + 6 = 0 By factoring the left side part, we get (x - 3) (x - 2) = 0 x = 3, x = 2 • Step - 3: Substitute the values into the intercept form: f(x) = 1 (x - 3)(x - 2). Domain and Range of Quadratic Function The domain of a quadratic function is the set of all x-values that makes the function defined and the range of a quadratic function is the set of all y-values that the function results in by substituting different x-values. Domain of Quadratic Function A quadratic function is a polynomial function that is defined for all real values of x. So, the domain of a quadratic function is the set of real numbers, that is, R. In interval notation, the domain of any quadratic function is (-∞, ∞). Range of Quadratic Function The range of the quadratic function depends on the graph's opening side and vertex. So, look for the lowermost and uppermost f(x) values on the graph of the function to determine the range of the quadratic function. The range of any quadratic function with vertex (h, k) and the equation f(x) = a(x - h)^2 + k is: • y ≥ k (or) [k, ∞) when a > 0 (as the parabola opens up when a > 0). • y ≤ k (or) (-∞, k] when a < 0 (as the parabola opens down when a < 0). Graphing Quadratic Function The graph of a quadratic function is a parabola. i.e., it opens up or down in the U-shape. Here are the steps for graphing a quadratic function. • Step - 1: Find the vertex. • Step - 2: Compute a quadratic function table with two columns x and y with 5 rows (we can take more rows as well) with vertex to be one of the points and take two random values on either side of • Step - 3: Find the corresponding values of y by substituting each x value in the given quadratic function. • Step - 4: Now, we have two points on either side of the vertex so that by plotting them on the coordinate plane and joining them by a curve, we can get the perfect shape. Also, extend the graph on both sides. Here is the quadratic function graph. Example: Graph the quadratic function f(x) = 2x^2 - 8x + 3. By comparing this with f(x) = ax^2 + bx + c, we get a = 2, b = -8, and c = 3. • Step - 1: Let us find the vertex. x-ccordinate of vertex = -b/2a = 8/4 = 2 y-coordinate of vertex = f(-b/2a) = 2(2)^2 - 8(2) + 3 = 8 - 16 + 3 = -5. Therefore, vertex = (2, -5). • Step - 2: Frame a table with vertex written in the middle row. • Step - 3: Fill the first column with two random numbers on either side of 2. • Step - 4: Find y by substituting each x-value in the given quadratic function. For example, when x = 0, y = 2(0)^2 - 8(0) + 3 = 3. │x │ y │ │0 │3 │ │1 │-3 │ │2 │-5 │ │3 │-3 │ │4 │3 │ • Step - 5: Just plot the above points and join them by a smooth curve. Note: We can plot the x-intercepts and y-intercept of the quadratic function as well to get a neater shape of the graph. The graph of quadratic functions can also be obtained using the quadratic functions calculator. Maxima and Minima of Quadratic Function Maxima or minima of quadratic functions occur at its vertex. It can also be found by using differentiation. To understand the concept better, let us consider an example and solve it. Let's take an example of quadratic function f(x) = 3x^2 + 4x + 7. Differentiating the function, ⇒f'(x) = 6x + 4 Equating it to zero, ⇒6x + 4 = 0 ⇒ x = -2/3 Double differentiating the function, ⇒f''(x) = 6 > 0 Since the double derivative of the function is greater than zero, we will have minima at x = -2/3 (by second derivative test), and the parabola is upwards. Similarly, if the double derivative at the stationary point is less than zero, then the function would have maxima. Hence, by using differentiation, we can find the minimum or maximum of a quadratic ☛Related Articles Important Notes on Quadratic Function: • The standard form of the quadratic function is f(x) = ax^2+bx+c where a ≠ 0. • The graph of the quadratic function is in the form of a parabola. • The quadratic formula is used to solve a quadratic equation ax^2 + bx + c = 0 and is given by x = [ -b ± √(b^2 - 4ac) ] / 2a. • The discriminant of a quadratic equation ax^2 + bx + c = 0 is given by b^2-4ac. This is used to determine the nature of the zeroes of a quadratic function. Examples of Quadratic Function 1. Example 1: Determine the vertex of the quadratic function f(x) = 2(x+3)^2 - 2. Solution: We have f(x) = 2(x+3)^2 - 2 which can be written as f(x) = 2(x-(-3))^2 + (-2) Comparing the given quadratic function with the vertex form of quadratic function f(x) = a(x-h)^2 + k, where (h,k) is the vertex of the parabola, we have h = -3, k = -2 Hence, the vertex of f(x) is (-3,-2) Answer: Vertex = (-3,-2) 2. Example 2: Find the zeros of the quadratic function f(x) = x^2 + 3x - 4 using the quadratic functions formula. Solution: The quadratic function f(x) = x^2 + 3x - 4. On comparing f(x) with the general form ax^2 + bx + c, we get a = 1, b = 3, c = -4 The zeros of quadratic function are obtained by solving f(x) = 0. For this, we use the quadratic formula: x = [ -b ± √(b^2 - 4ac) ] / 2a x = [ -3 ± √{3^2 - 4(1)(-4)}] / 2(1) = [ -3 ± √(9 + 16) ] / 2 = [ -3 ± √25 ] / 2, x = [ -3 + 5 ] / 2, [ -3 - 5 ] / 2 = 1, -4 Answer: Roots of f(x) = x^2 + 3x - 4 are 1 and -4 3. Example 3: Write the quadratic function f(x) = (x-12)(x+3) in the general form ax^2 + bx + c. Solution: We have the quadratic function f(x) = (x-12)(x+3). We will just expand (multiply the binomials) it to write it in the general form. f(x) = (x-12)(x+3) = x(x+3) - 12(x+3) = x^2 + 3x - 12x - 36 = x^2 - 9x - 36 Answer: x^2 - 9x - 36 View Answer > Breakdown tough concepts through simple visuals. Math will no longer be a tough subject, especially when you understand the concepts through visualizations. Practice Questions on Quadratic Function Check Answer > FAQs on Quadratic Function What is Quadratic Function in Math? A quadratic function is a polynomial function with one or more variables in which the highest exponent of the variable is two. In other words, a quadratic function is a “polynomial function of degree Why is the Name Quadratic Function? The meaning of "quad" is "square". Hence, a polynomial function of degree 2 is called a quadratic function. What is Quadratic Function Equation? A quadratic function is a polynomial of degree 2 and so the equation of quadratic function is of the form f(x) = ax^2 + bx + c, where 'a' is a non-zero number; and a, b, and c are real numbers. What is the Vertex of Quadratic Function? Vertex of a quadratic function is a point where the parabola changes direction and crosses the axis of symmetry. It is a point where the parabola changes from increasing to decreasing or from decreasing to increasing. At this point, the derivative of the quadratic function is 0. What Are the Zeros of a Quadratic Function? The zeroes of a quadratic function are points where the graph of the function intersects the x-axis. At the zeros of the function, the y-coordinate is 0 and the x-coordinate represents the zeros of the quadratic polynomial function. The zeros of a quadratic function are also called the roots of the function. What is a Quadratic Functions Table? A quadratic functions table is a table where we determine the values of y-coordinates corresponding to each x-coordinates and vice-versa. The table consists of the coordinates of the graph of the quadratic functions. We usually write the vertex of the quadratic functions in the quadratic functions in one of the rows of the table. How to Draw Quadratic Graph? The graph of a quadratic function is a parabola. It can be drawn by plotting the coordinates on the graph. We plug in the values of x and obtain the corresponding values of y, hence obtaining the coordinates of the graph. After plotting the coordinates on the graph, we connect the dots using a free hand to obtain the graph of the quadratic functions. Finding the vertex helps in drawing a quadratic graph. How to Find the x-intercept of a Quadratic Function? The X-intercept of a quadratic function can be found considering the quadratic function f(x) = 0 and then determining the value of x. In other words, the x-intercept is nothing but zero of a quadratic equation. Is Parabola is a Quadratic Function? A parabola is a graph of a quadratic function. A quadratic function is of the form f(x) = ax^2 + bx + c with a not equal to 0. Parabola is a U-shaped or inverted U-shaped graph of a quadratic How to Find the Inverse of a Quadratic Function? The inverse of a quadratic function f(x) can be found by replacing f(x) by y. Then, we switch the roles of x and y, that is, we replace x with y and y with x. After this, we solve y for x and then replace y by f^-1(x) to obtain the inverse of the quadratic function f(x). What are the Forms of Quadratic Function? A quadratic function can be in different forms: standard form, vertex form, and intercept form. Here are the general forms of each of them: • Standard form: f(x) = ax^2 + bx + c, where a ≠ 0. • Vertex form: f(x) = a(x - h)^2 + k, where a ≠ 0 and (h, k) is the vertex of the parabola representing the quadratic function. • Intercept form: f(x) = a(x - p)(x - q), where a ≠ 0 and (p, 0) and (q, 0) are the x-intercepts of the parabola representing the quadratic function. We can convert one of these forms into the other forms. For more information, click here. What is the Difference Between Quadratic Function and Quadratic Equation? A quadratic function is of the form f(x) = ax^2 + bx + c, where a ≠ 0. Each point on its graph is of the form (x, ax^2 + bx + c). This is for the graphing purpose. On the other hand, a quadratic equation is of the form ax^2 + bx + c = 0, where a ≠ 0. This is for finding the solution and it gives definite values of x as solution. Download FREE Study Materials Download Functions Worksheets Math worksheets and visual curriculum
{"url":"https://www.cuemath.com/calculus/quadratic-functions/","timestamp":"2024-11-14T10:24:16Z","content_type":"text/html","content_length":"311175","record_id":"<urn:uuid:55ace866-f2ed-45bc-b543-6a44f712a240>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00274.warc.gz"}
How is calendar month calculated? We multiply the weekly rent by the number of weeks in a year. This gives us the annual rent. We divide the annual rent into 12 months which gives us the calendar monthly amount. Remember your rent is always due in advance so should you wish to pay monthly then your rent must be paid monthly in advance. What is per calendar month? The rent is often displayed as being PCM – an abbreviation for Per Calendar Month. This means the rent due would be taken from you on the same date every month. PCM is just one of the many terms used in the rental market. Most types of rentals such as houses, apartments and even rooms are often rented PCM. How do you calculate a calendar? Divide the year by 4 and ignore any remainder. Then add this to the original year. Find the remainder when dividing by 7. Just use the date itself (so for the 18th August, use 18) But to simplify calculation later, it is better to find the remainder when dividing by 7.So for 18, calculate that 18/7 has remainder +4. How do you calculate effort in person months? The estimated human effort by work package in Part B can be calculated as follows (indicative method): if 1 year = 220 (working) days, then 1 month = 220/12 = 18.33 (working) days. So 24 full working days for one person would be 24/18.33 = 1.31 person‑months. What is calendar month with example? 1 : one of the months as named in the calendar. 2 : the period from a day of one month to the corresponding day of the next month if such exists or if not to the last day of the next month (as from January 3 to February 3 or from January 31 to February 29) How many days is a calendar month? 30.436875 days The mean month-length in the Gregorian calendar is 30.436875 days. How do you solve day and date problems? 4:597:18Problems on Calendar, Basics and Methods, Shortcuts, Time and DateYouTube What is the formula of effort? The effort distance (also sometimes called the “effort arm”) is shorter than the resistance distance. Mechanical advantage = |Fr/Fe | where | means “absolute value.” Mechanical advantage is always How do I convert months to calendar hours? To convert a month measurement to an hour measurement, multiply the time by the conversion ratio. The time in hours is equal to the months multiplied by 730.485. Is a calendar month 30 days? The definition of a calendar month is one of the 12 divisions of the year. The period of duration from the same date of one month to the same date of the next month, and thus can be 28, 29 during a leap year, 30 or 31 days long. What is the end of a calendar month? Related Definitions Each Calendar Month shall end on the day immediately preceding the beginning of the next succeeding Calendar Month. Calendar Month means any of the twelve (12) months of the Calendar Year. What calendar do we use? the Gregorian calendar Today, the vast majority of the world uses what is known as the Gregorian calendar, Named after Pope Gregory XIII, who introduced it in 1582. The Gregorian calendar replaced the Julian calendar, which had been the most used calendar in Europe until this point. How do you solve calendar problems easily? Calendar Aptitude Tricks100 years give us 5 odd days as calculated above.200 years give us 5 x 2 = 10 – 7 (one week) 3 odd days.300 years give us 5 x 3 = 15 – 14 (two weeks) 1 odd day.400 years give us {5 x 4 + 1 (leap century)} – 21} (three weeks) 0 odd days.Month of January gives us 31 – 28 = 3 odd days.More items Which two months in a year have the same calendar? Hence, April and July months will have the same calendar. What are 1st 2nd and 3rd class levers? - First class levers have the fulcrum in the middle. - Second class levers have the load in the middle. - This means a large load can be moved with relatively low effort. - Third class levers have the effort in the middle. What is load formula? According to Sir Isaac Newton, the force of an entity equals its mass, multiplied by acceleration. This basic principle is what is used to calculate load force, which is the force that opposes that entity. Apply Sir Isaac Newtons formula: force = mass x acceleration. What percent is 1 month of a year? Hence, 8.22% is 1 month of a year. How many hours are in current month? Month to Hour Conversion TableMonthHour [h]1 month730 h2 month1460 h3 month2190 h5 month3650 h7 more rows How many days are in a calendar month? 30.436875 days Numerical relations. The mean month-length in the Gregorian calendar is 30.436875 days.
{"url":"http://fuegoondemand.com/kynirowoz25375.html","timestamp":"2024-11-07T06:22:10Z","content_type":"text/html","content_length":"29522","record_id":"<urn:uuid:6778d37a-1f8b-4809-b72e-bad4a179c7f9>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00422.warc.gz"}
Specified Complexity Made Simple Historical Backdrop Specified complexity is the legitimate offspring of the mathematical theory of information. You wouldn’t know that, however, from reading what’s said about it on the internet. Take the opening sentence of the Wikipedia article on specified complexity: “Specified complexity is a creationist argument introduced by William Dembski, used by advocates to promote the pseudoscience of intelligent design.” It’s hard to imagine any other single sentence better crafted to discredit specified complexity. What further need is there to understand or engage this concept if that’s what it is? In fact, specified complexity is a bona fide information measure. Yes, specified complexity is applicable to intelligent design, but its definition and properties make sense independently of its applications. In this article I want to define what specified complexity is, establish that it belongs squarely to standard information theory, review some intuitively clear applications of it, and show why it is an important concept even apart from its applications to intelligent design. First off, despite my widely acknowledged association with the term specified complexity, let’s be clear that I didn’t invent it. Critics of intelligent design treat specified complexity as a con man pretending to be a real scientist. Thus they liken it to debunked scientific concepts such as phlogiston or pseudosciences such as phrenology. But if you think of specified complexity as a job seeker with a resumé, the people put down as references (to vouch for the concept) are actually quite impressive. Indeed, some very respectable scientists were, for a time, happy to associate their names and reputations with the concept. Given its pedigree, there is no way to justify treating specified complexity as a bastard child of real Biologist Leslie Orgel, a colleague of Francis Crick, introduced the term in 1973 in a book on the origin of life. Crick himself had toyed with a less developed version of the concept as far back as 1958—see his paper “On Protein Synthesis” where he writes, “By information I mean the specification of the amino acid sequence of the protein.” In the years immediately following specified complexity’s coinage by Orgel, the underlying idea was widely accepted even if incompletely understood. With the term specified complexity, Orgel was trying to understand three distinct types of order: Repetitive Order: BOATBOATBOATBOATBOATBOATBOATBOATBOATBOAT. Example in nature: A salt crystal. Random Order: SBIPDYAQBUKHQFLYRTXHBIWGJNSCPVMZDLEGKYAC. Example in nature: Mixture of random polymers. Complex Specified Order: THISSEQUENCEOFLETTERSISACARRIEROFMEANING. Example in nature: A DNA sequence coding for a protein. In these examples, each sequence comprises 40 capital Roman letters. In the first, the word BOAT is repeated 10 times. Because BOAT is a known word, it may be regarded as specified in our language. Yet because the word BOAT is short and keeps being repeated, the entire sequence may also be regarded as simple. This, then, is an example of specified simplicity, which is therefore distinct from specified complexity. Note that we might have used the more random looking QOZK in place of BOAT and then generated the 40-letter sequence QOZKQOZKQOZKQOZKQOZKQOZKQOZKQOZKQOZKQOZK. In that case, QOZK might be regarded as unspecified, and yet because of the repetition, the entire sequence would still be regarded as simple. Accordingly, this sequence would be an example of unspecified simplicity, which is therefore also distinct from specified complexity. The example of random order, on the other hand, is neither specified nor simple. It is unspecified because there is no straightforward way to describe that sequence other than simply listing it in its entirety. Its generation, for instance, cannot be understood in terms of any easy formula. Compared to a four-letter word like BOAT, it is also relatively long, and therefore complex. It is an example of unspecified complexity, which is therefore distinct from specified complexity. And finally, there’s the last example in which a sequence of 40 capital Roman letters spells a meaningful English sentence. It is specified in virtue of its use of English words arranged in grammatically correct syntax to make a meaningful statement. And yet there is no simple way to generate or account for it. It is an example of specified complexity. I’m playing fast and loose with the terms specified and complexity in these examples, having yet to carefully define them. But these examples make clear the basic intuition underlying the concept. Moreover, these are exactly the sorts of examples Orgel had in mind when he introduced the concept. And indeed, for about thirty years after Orgel introduced the concept, specified complexity garnered wide respect in the scientific community. As an example of the respect shown to specified complexity in the first three decades after Orgel introduced the term, consider well-known physicist and science writer Paul Davies, who in his 1999 book on the origin of life (The Fifth Miracle) wrote: “Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity.” Unwilling as they might be to attribute specified complexity to intelligence, scientists during that period at least understood that the emergence of specified complexity posed a challenge that needed to be explained. So what happened to change the fortunes of specified complexity in the mainstream scientific community? The intelligent design movement happened. Design theorists Charles Thaxton, Walter Bradley, and Roger Olsen got the ball rolling in the 1980s in their book The Mystery of Life’s Origin, noting that specified complexity raised a significant problem for the origin of life. They then went further by suggesting that intelligence should be regarded as a viable explanation in accounting for it. Separately, in the first edition of my book The Design Inference (Cambridge, 1998), I argued that specification and small probability combined to produce a reliable criterion for inferring intelligent agency. Although the term specified complexity did not occur in that book, it was there implicitly. The “specified” of “specified complexity” was there in the form of specification. And the small probability corresponded precisely to increased complexity as an information-theoretic notion. More on this shortly. But the gamechanger in revising specified complexity’s scientific fortunes occurred in 2002. That year I published the sequel to The Design Inference. Reviewed in Nature and titled No Free Lunch, it was subtitled Why Specified Complexity Cannot Be Purchased Without Intelligence. All cards were now on the table. Scientists who rejected intelligent design and yet had previously seen merit in specified complexity now withdrew their support of the concept. Before proceeding, I want to underscore that in formalizing specified complexity as a precise information-theoretic concept, I attempted to preserve Orgel’s original intent. Critics of intelligent design, such as mathematician Jason Rosenhouse, suggest that I misappropriated Orgel, but I didn’t. Orgel appealed to information theory in formulating specified complexity, but in doing so he made some mistakes. At the end of this article, I’ll compare Orgel’s pre-theoretic version of specified complexity with the theoretic version described here. Readers can then see clearly what he was trying to do and how specified complexity, as developed here, improves on his work. Intuitive Specified Complexity Even though this article is titled “Specified Complexity Made Simple,” there’s a limit to how much the concept of specified complexity may be simplified before it can no longer be adequately defined or explained. Accordingly, specified complexity, even when made simple, will still require the introduction of some basic mathematics, such as exponents and logarithms, as well as an informal discussion of information theory, especially Shannon and Kolmogorov information. I’ll get to that in the subsequent sections. At this early stage in the discussion, however, it seems wise to lay out specified complexity in a convenient non-technical way. That way readers lacking mathematical and technical facility will still be able to grasp the gist of specified complexity. In this section, I’ll present an intuitively accessible account of specified complexity. Just as all English speakers are familiar with the concept of prose even if they’ve never thought about how it differs from poetry, so too we are all familiar with specified complexity even if we haven’t carefully defined it or provided a precise formal mathematical account of it. In this section I’ll present a user-friendly account of specified complexity by means of intuitively compelling examples. Even though non-technical readers may be inclined to skip the rest of this article, I would nonetheless encourage all readers to dip into the subsequent sections, if only to persuade themselves that specified complexity has a sound rigorous basis to back up its underlying To get the ball rolling, let’s consider an example by internet personality David Farina, known popularly as “Prof. Dave.” In arguing against the use of small probability arguments to challenge Darwinian evolutionary theory, Farina offers the following example: Let’s say 10 people are having a get-together, and they are curious as to what everyone’s birthday is. They go down the line. One person says June 13^th, another says November 21^st, and so forth. Each of them have a 1 in 365 chance of having that particular birthday. So, what is the probability that those 10 people in that room would have those 10 birthdays? Well, it’s 1 in 365 to the 10th power, or 1 in 4.2 times 10 to the 25, which is 42 trillion trillion. The odds are unthinkable, and yet there they are sitting in that room. So how can this be? Well, everyone has to have a birthday. Farina’s use of the term “unthinkable” brings to mind Vizzini in The Princess Bride. Vizzini keeps uttering the word “inconceivable” in reaction to a man in black (Westley) steadily gaining ground on him and his henchmen. Finally, his fellow henchman Inigo Montoya remarks, “You keep using that word — I do not think it means what you think it means.” Similarly, in contrast to Farina, an improbability of 1 in 42 trillion trillion is in fact quite thinkable. Right now you can do even better than this level of improbability. Get out a fair coin and toss it 100 times. That’ll take you a few minutes. You’ll witness an event unique in the history of coin tossing and one having a probability of 1 in 10 to the 30, or 1 in a million trillion The reason Farina’s improbability is quite thinkable is that the event to which it is tied is unspecified. As he puts it, “One person says June 13^th, another says November 21^st, and so forth.” The “and so forth” here is a giveaway that the event is unspecified. But now consider a variant of Farina’s example: Imagine that each of his ten people confirmed that their birthday was January 1. The probability would in this case again be 1 in 42 trillion trillion. But what’s different now is that the event is specified. How is it specified? It is specified in virtue of having a very short description, namely, “everyone here was born New Year’s Day.” The complexity in specified complexity refers to probability: the greater the complexity, the smaller the probability. There is a precise information-theoretic basis for this connection between probability and complexity that we’ll examine in the next section. Accordingly, because the joint probability of any ten birthdays is quite low, their complexity will be quite high. Nothing surprising here. For things to get interesting with birthdays, complexity needs to be combined with specification. A specification is a salient pattern that we should not expect a highly complex event to match simply by chance. Clearly, a large group of people that all share the same birthday did not come together by chance. But what exactly is it that makes a pattern salient so that, in the presence of complexity, it becomes an instance of specified complexity and thereby defeats chance? That’s the whole point of specified complexity. Sheer complexity, as Farina’s example shows, cannot defeat chance. So too, the absence of complexity cannot defeat chance. For instance, if we learn that a single individual has a birthday on January 1, we wouldn’t regard anything as amiss or afoul. That event is simple, not complex, in the sense of probability. Leaving aside leap years and seasonal effects on birth rates, 1 out 365 people will on average have a birthday on January 1. With a worldwide population of 8 billion people, many people will have that birthday. But a group of exactly 10 people all in the same room all having a birthday of January 1 is a different matter. We would not ascribe such a coincidence to chance. But why? Because the event is not just complex but also specified. And what makes a complex event also specified—or conforming to a specification—is that it has a short description. In fact, we define specifications as patterns with short descriptions. Such a definition may seem counterintuitive, but it actually makes good sense of how we eliminate chance in practice. The fact is, any event (and by extension any object or structure produced by an event) is describable if we allow ourselves a long enough description. Any event, however improbable, can therefore be described. But most improbable events can’t be described simply. Improbable events with simple descriptions draw our attention and prod us to look for explanations other than chance. Take Mount Rushmore. It could be described in detail as follows: for each cubic micrometer in a large cube that encloses the entire monument, register whether it contains rock or is empty of rock (treating partially filled cubic micrometers, let us stipulate, as empty). Mount Rushmore can be enclosed in a cube of under 50,000 cubic meters. Moreover, each cubic meter contains a million trillion cubic micrometers. Accordingly, 50 billion trillion filled-or-empty cells could describe Mount Rushmore in detail. Thinking of each filled-or-empty cell as a bit then yields 50 billion trillion bits of information. That’s more information than contained in the entire world wide web (there are currently 2 billion websites globally). But of course, nobody attempts to describe Mount Rushmore that way. Instead, we describe it succinctly as “a giant rock formation that depicts the US presidents George Washington, Thomas Jefferson, Abraham Lincoln, and Teddy Roosevelt.” That’s a short description. At the same time, any rock formation the size of Mount Rushmore will be highly improbable or complex. Mount Rushmore is therefore both complex and specified. That’s why, even if we knew nothing about the history of Mount Rushmore’s construction, we would refuse to attribute it to the forces of chance (such as wind and erosion) and instead attribute it to design. Consider a few more examples in this vein. Take the game of poker. There are 2,598,960 distinct possible poker hands, and so the probability of any poker hand is 1/2,598,960. Consider now two short descriptions, namely, “royal flush” and “single pair.” These descriptions have roughly the same description length. Yet there are only 4 ways of getting a royal flush and 1,098,240 ways of getting a single pair. This means the probability of getting a royal flush is 4/2,598,960 = .00000154 but the probability of getting a single pair is 1,098,240/2,598,960 = .423. A royal flush is therefore much more improbable than a single pair. Suppose now that you are playing a game of poker and you come across these two hands, namely, a royal flush and a single pair. Which are you more apt to attribute to chance? Which are you more apt to attribute to cheating, and therefore to design? Clearly, a single pair would, by itself, not cause you to question chance. It is specified in virtue of its short description. But because it is highly probable, and therefore not complex, it would not count as an instance of specified complexity. Witnessing a royal flush, however, would elicit suspicion, if not an outright accusation of cheating (and therefore of design). Of course, given the sheer amount of poker played throughout the world, royal flushes will now and then appear by chance. But what raises suspicion that a given instance of a royal flush may not be the result of chance is its short description (a property it shares with “single pair”) combined with its complexity/improbability (a property it does not share with “single pair”). Let’s consider one further example, which seems to have become a favorite among readers of the recently released second edition of The Design Inference. In the chapter on specification, my co-author Winston Ewert and I consider a famous scene in the film The Empire Strikes Back, which we then contrast with a similar scene from another film that parodies it. Quoting from the chapter: Darth Vader tells Luke Skywalker, “No, I am your father,” revealing himself to be Luke’s father. This is a short description of their relationship, and the relationship is surprising, at least in part because the relationship can be so briefly described. In contrast, consider the following line uttered by Dark Helmet to Lone Starr in Spaceballs, the Mel Brooks parody of Star Wars: “I am your father’s brother’s nephew’s cousin’s former room­mate.” The point of the joke is that the relationship is so compli­cated and contrived, and requires such a long description, that it evokes no suspicion and calls for no special explanation. With everybody on the planet connected by no more than “six degrees of separation,” some long description like this is bound to identify anyone In a universe of countless people, Darth Vader meeting Luke Skywalker is highly improbable or complex. Moreover, their relation of father to son, by being briefly described, is also specified. Their meeting therefore exhibits specified complexity and cannot be ascribed to chance. Dark Helmet meeting Lone Starr may likewise be highly improbable or complex. But given the convoluted description of their past relationship, their meeting represents an instance of unspecified complexity. If their meeting is due to design, it is for reasons other than their past relationship. Before we move to a more formal treatment of specified complexity, we are well to ask how short is short enough for a description to count as a specification. How short should a description be so that combined with complexity it produces specified complexity? As it is, in the formal treatment of specified complexity, complexity and description length are both converted to bits, and then specified complexity can be defined as the difference of bits (the bits denoting complexity minus the bits denoting specification). When specified complexity is applied informally, however, we may calculate a probability (or associated complexity) but we usually don’t calculate a description length. Rather, as with the Star Wars/ Spaceballs example, we make an intuitive judgment that one description is short and natural, the other long and contrived. Such intuitive judgments have, as we will see, a formal underpinning, but in practice we let ourselves be guided by intuitive specified complexity, treating it as a convincing way to distinguish merely improbable events from those that require further scrutiny. Shannon and Kolmogorov Information The first edition of The Design Inference as well as its sequel, No Free Lunch, set the stage for defining a precise information-theoretic measure of specified complexity. There was, however, still more work to be done to clarify the concept. In both these books, specified complexity was treated as a combination of improbability or complexity on the one hand and specification on the other. As presented back then, it was an oil-and-vinegar combination, with complexity and specification treated as two different types of things exhibiting no clear commonality. Neither book therefore formulated specified complexity as a unified information measure. Still, the key ideas for such a measure were in those earlier books. In this section, I review those key information-theoretic ideas. In the next section, I’ll join them into a unified whole. Let’s start with complexity. As noted earlier, there’s a deep connection between probability and complexity. This connection is made clear in Shannon’s theory of information. In this theory, probabilities are converted to bits. To see how this works, consider the tossing a coin 100 times, which yields an event of probability 1 in 2^100 (the caret symbol here denotes exponentiation). But that number also corresponds to 100 bits of information since it takes 100 bits to characterize any sequence of 100 coin tosses (think of 1 standing for heads and 0 for tails). In general, any probability p corresponds to –log(p) bits of information, where the logarithm here and elsewhere in this article is to the base 2 (as needed to convert probabilities to bits). Think of a logarithm as an exponent: it’s the exponent to which you need to raise the base (here always 2) in order to get the number to which the logarithmic function is applied. Thus, for instance, a probability of p = 1/10 corresponds to an information measure of –log(1/10) ≈ 3.322 bits (or equivalently, 2^(–3.322) ≈ 1/10). Such fractional bits allow for a precise correspondence between probability and information measures. The complexity in specified complexity is therefore Shannon information. Claude Shannon (1916–2001) introduced this idea of information in the 1940s to understand signal transmissions (mainly of bits, but also for other character sequences) across communication channels. The longer the sequence of bits transmitted, the greater the information and therefore its complexity. Because of noise along any communication channel, the greater the complexity of a signal, the greater the chance of its distortion and thus the greater the need for suitable coding and error correction in transmitting the signal. So the complexity of the bit string being transmitted became an important idea within Shannon’s theory. Shannon’s information measure is readily extended to any event E with a probability P(E). We then define the Shannon information of E as –log(P(E)) = I(E). Note that the minus sign is there to ensure that as the probability of E goes down, the information associated with E goes up. This is as it should be. Information is invariably associated with the narrowing of possibilities. The more those possibilities are narrowed, the more the probabilities associated with those probabilities decrease, but correspondingly the more the information associated with those narrowing possibilities For instance, consider a sequence of ten tosses of a fair coin and consider two events, E and F. Let E denote the event where the first five of these ten tosses all land heads but where we don’t know the remaining tosses. Let F denote the event where all ten tosses land heads. Clearly, F narrows down the range of possibilities for these ten tosses more than E does. Because E is only based on the first five tosses, its probability is P(E) = 2^(–5) = 1/(2^5) = 1/32. On the other hand, because F is based on all ten tosses, its probability is P(F) = 2^(–10) = 1/(2^10) = 1/1,024. In this case, the Shannon information associated with E and F is respectively I(E) = 5 bits and I(F) = 10 bits. Shannon information, however, is not enough to understand specified complexity. For that, we also need Kolmogorov information, or what is also called Kolmogorov complexity. Andrei Kolmogorov (1903–1987) was the greatest probabilist of the 20th century. In the 1960s he tried to make sense of what it means for a sequence of numbers to be random. To keep things simple, and without loss of generality, we’ll focus on sequences of bits (since any numbers or characters can be represented by combinations of bits). Note that we made the same simplifying assumption for Shannon information. The problem Kolmogorov faced was that any sequence of bits treated as the result of tossing a fair coin was equally probable. For instance, any sequence of 100 coin tosses would have probability 1/(2 ^100), or 100 bits of Shannon information. And yet there seemed to Kolmogorov a vast difference between the following two sequences of 100 coin tosses (letting 0 denote tails and 1 denote heads): The first just repeats the same coin toss 100 times. It appears anything but random. The second, on the other hand, exhibits no salient pattern and so appears random (I got it just now from an online random bit generator). But what do we mean by random here? Is it that the one sequence is the sort we should expect to see from coin tossing but the other isn’t? But in that case, probabilities tell us nothing about how to distinguish the two sequences because they both have the same small probability of occurring. Kolmogorov’s brilliant stroke was to understand the randomness of these sequences not probabilistically but computationally. Interestingly, the ideas animating Kolmogorov were in the air at that time in the mid 1960s. Thus, both Ray Solomonoff and Gregory Chaitin (then only a teenager) also came up with the same idea. Perhaps unfairly, Kolmogorov gets the lion’s share of the credit for characterizing randomness computationally. Most information-theory books (see, for instance, Cover and Thomas’s The Elements of Information Theory), in discussing this approach to randomness, will therefore focus on Kolmogorov and put it under what is called Algorithmic Information Theory (AIT). Briefly, Kolmogorov’s approach to randomness is to say that a sequence of bits is random to the degree that it has no short computer program that generates it. Thus, with the first sequence above, it is non-random since it has a very short program that generates it, such as a program that simply says “repeat ‘0’ 100 times.” On the other hand, there is no short program (so far as we can tell) that generates the second sequence. It is a combinatorial fact (i.e., a fact about the mathematics of counting or enumerating possibilities) that the vast majority of bit sequences cannot be characterized by any program shorter than the sequence itself. Obviously, any sequence can be characterized by a program that simply incorporates the entire sequences and then simply regurgitates it. But such a program fails to compress the sequence. The non-random sequences, by having programs shorter than the sequences themselves, are thus those that are compressible. The first of the sequences above is compressible. The second, for all we know, isn’t. Kolmogorov’s information (also known as Kolmogorov complexity) is a computational theory because it focuses on identifying the shortest program that generates a given bit-string. Yet there is an irony here: it is rarely possible to say with certainly that a given bit string is truly random in the sense of having no compressible program. From combinatorics, with its mathematical counting principles, we know that the vast majority of bit sequences must be random in Kolmogorov’s sense. That’s because the number of short programs is very limited and can only generate very few longer sequences. Most longer sequences will require longer programs. But if for an arbitrary bit sequence D we define K(D) as the length of the shortest program that generates D, it turns out that there is no computer program that calculates K(D). Simply put, the function K is non-computable. This fact from theoretical computer science matches up with our common experience that something may seem random for a time, and yet we can never be sure that it is random because we might discover a pattern clearly showing that the thing in fact isn’t random (think of an illusion that looks like a “random” inkblot only to reveal a human face on closer Yet even though K is non-computable, in practice it is a useful measure, especially for understanding non-randomness. Because of its non-computability, K doesn’t help us to identify particular non-compressible sequences, these being the random sequences. Even with K as a well-defined mathematical function, we can’t in most cases determine precise values for it. Nevertheless, K does help us with the compressible sequences, in which case we may be able to estimate it even if we can’t exactly calculate it. What typically happens in such cases is that we find a salient pattern in a sequence, which then enables us to show that it is compressible. To that end, we need a measure of the length of bit sequences as such. Thus, for any bit sequence D, we define |D| as its length (total number of bits). Because any sequence can be defined in terms of itself, |D| forms an upper bound on Kolmogorov complexity. Suppose now that through insight or ingenuity, we find a program that substantially compresses D. The length of that program, call it n, will then be considerably less than |D| — in other words, n < |D|. Although this program length n will be much shorter than D, it’s typically not possible to show that this program of length n is the very shortest program that generates D. But that’s okay. Given such a program of length n, we know that K(D) cannot be greater than n because K(D) measures the very shortest such program. Thus, by finding some short program of length n, we’ll know that K(D) ≤ n < |D|. In practice, it’s enough to come up with a short program of length n that’s substantially less than |D|. The number n will then form an upper bound for K(D). In practice, we use n as an estimate for K(D). Such an estimate, as we’ll see, ends up in applications being a conservative estimate of Kolmogorov complexity. Specified Complexity as a Unified Information Measure With the publication of the first edition of The Design Inference and its sequel No Free Lunch, elucidating the connection between design inferences and information theory became increasingly urgent. That there was a connection was clear. The first edition of The Design Inference sketched, in the epilogue, how the relation between specifications and small probability (complex) events mirrored the transmission of messages along a communication channel from sender to receiver. Moreover, in No Free Lunch, both Shannon and Kolmogorov information were explicitly cited in connection with specified But even though specified complexity as characterized back then employed informational ideas, it did not constitute a clearly defined information measure. Specified complexity seemed like a kludge of ideas from logic, statistics, and information. Jay Richards, guest-editing a special issue of Philosophia Christi, asked me to clarify the connection between specified complexity and information theory. In response, I wrote an article titled “Specification: The Pattern That Signifies Intelligence,” which appeared in that journal in 2005. In that article, I defined specified complexity as a single measure that combined under one roof all the key elements of the design inference, notably, small probability, specification, probabilistic resources, and universal probability bounds. Essentially, in the measure I articulated there, I attempted to encapsulate the entire design inferential methodology within a single mathematical In retrospect, all the key pieces for what is now the fully developed informational account of specified complexity were there in that article. But my treatment of specified complexity in that article left substantial room for improvement. I used a counting measure to enumerate all the descriptions of a given length or shorter. I then placed this measure under a negative logarithm. This gave the equivalent of Kolmogorov information, suitably generalized to minimal description length. But because my approach was so focused on encapsulating the design-inferential methodology, the roles of Shannon and Kolmogorov information in its definition of specified complexity were muddied. My 2005 specified complexity paper fell stillborn from the press, and justly so given its lack of clarity. Eight years later, Winston Ewert, working with me and Robert Marks at the Evolutionary Informatics Lab, independently formulated specified complexity as a unified measure. It was essentially the same measure as in my 2005 article, but Ewert clearly articulated the place of both Shannon and Kolmogorov information in the definition of specified complexity. Ewert, along with Marks and me as co-authors, published this work under the title “Algorithmic Specified Complexity,” and then published subsequent applications of this work (see the Evolutionary Informatics Lab publications page). With Ewert’s lead, specified complexity, as an information measure, became the difference between Shannon information and Kolmogorov information. In symbols, the specified complexity SC for an event E was thus defined as SC(E) = I(E) – K(E). The term I(E) in this equation is just, as we saw in the last section, Shannon information, namely, I(E) = –log(P(E)), where P(E) is the probability of E with respect to some underlying relevant chance hypothesis. The term K(E) in this equation, in line with the last section, is a slight generalization of Kolmogorov information, in which for an event E, K(E) assigns the length, in bits, of the shortest description that precisely identifies E. Underlying this generalization of Kolmogorov information is a binary, prefix-free, Turing complete language that maps descriptions from the language to the events they identify. There’s a lot packed into this last paragraph, so explicating it all is not going to be helpful in an article titled “Specified Complexity Made Simple.” For the details, see chapter 6 of the second edition of The Design Inference. Still, it’s worth highlighting a few key points to show that SC, so defined, makes good sense as a unified information measure and is not merely a kludge of Shannon and Kolmogorov information. What brings Shannon and Kolmogorov information together as a coherent whole in this definition of specified complexity is event-description duality. Events (and the objects and structures they produce) occur in the world. Descriptions of events occur in language. Thus, corresponding to an event E are descriptions D that identify E. For instance, the event of getting a royal flush in the suit of hearts corresponds to the description “royal flush in the suit of hearts.” Such descriptions are, of course, never unique. The same event can always be described in multiple ways. Thus, this event could also be described as “a five-card poker hand with an ace of hearts, a king of hearts, a queen of hearts, a jack of hearts, and a ten of hearts.” Yet this description is quite a bit longer than the other. Given event-description duality, it follows that: (1) an event E with a probability P(E) has Shannon information I(E), measured in bits; moreover, (2) given a binary language (one expressed in bits—and all languages can be expressed in bits), for any description D that identifies E, the number of bits making up D, which in the last section we defined as |D|, will be no less than the Kolmogorov information of E (which measures in bits the shortest description that identifies E). Thus, because K(E) ≤ |D|, it follows that SC(E) = I(E) – K(E) ≥ I(E) – |D|. The most important take away here is that specified complexity makes Shannon information and Kolmogorov information commensurable. In particular, specified complexity takes the bits associated with an event’s probability and subtracts from it the bits associated with their minimum description length. Moreover, in estimating K(E), we then use I(E) – |D| to form a lower bound for specified complexity. It follows that specified complexity comes in degrees and could take on negative values. In practice, however, we’ll say an event exhibits specified complexity if it is positive and large (with what it means to be large depending on the relevant probabilistic resources). There’s a final fact that makes specified complexity a natural information measure and not just an arbitrary combination of Shannon and Kolmogorov information, and that’s the Kraft inequality. To apply the Kraft inequality of specified complexity here depends on the language that maps descriptions to events being prefix-free. Prefix-free languages help to ensure disambiguation, so that one description is not the start of another description. This is not an onerous condition, and even though it does not hold for natural languages, transforming natural languages into prefix-free languages leads to negligible increases in description length (see chapter 6 of the second edition of The Design Inference). What the Kraft inequality does for the specified complexity of an event E is guarantee that all events having the same or greater specified complexity, when considered jointly as one grand union, nonetheless have probability less than or equal to 2 raised to the negative power of the specified complexity. In other words, the probability of the union of all events F with specified complexity no less than that of E (i.e., SC(F) ≥ SC(E)), will have probability less than or equal to 2^(–SC(E)). This result, so stated, may not seem to belong in an article attempting to make specified complexity simple. But it is a big mathematical result, and it connects specified complexity to a probability bound that’s crucial for drawing design inferences. To illustrate how this all works, let’s turn next to an example of cars driving along a road. A Tale of Ten Malibus The example considered here to illustrate what a given value of specified complexity means is adapted from section 3.6 of the second edition of The Design Inference, from which I quote extensively. Suppose you witness ten brand new Chevy Malibus drive past you on a public road in immediate, uninter­rupted succession. The question that crosses your mind is this: Did this succession of ten brand new Chevy Malibus happen by chance? Your first reaction might be to think that this event is a publicity stunt by a local Chevy dealership. In that case, the succession would be due to design rather than to chance. But you don’t want to jump to that conclusion too quickly. Perhaps it is just a lucky coincidence. But if so, how would you know? Perhaps the coincidence is so improbable that no one should expect to observe it as happening by chance. In that case, it’s not just unlikely that you would observe this coincidence by chance; it’s unlikely that anyone would. How, then, do you determine whether this succession of identical cars could reasonably have resulted by chance? Obviously, you will need to know how many opportunities exist to observe this event. It’s estimated that in 2019 there were 1.4 billion motor vehicles on the road worldwide. That would include trucks, but to keep things simple let’s assume all of them are cars. Although these cars will appear on many different types of roads, some with traffic so sparse that ten cars in immediate succession would almost never happen, to say nothing of ten cars having the same late make and model, let’s give chance every opportunity to succeed by assuming that all these cars are arranged in one giant succession of 1.4 billion cars arranged bumper to bumper. But it’s not enough to look at one static arrangement of all these 1.4 billion cars. Cars are in motion and continually rearranging themselves. Let’s therefore assume that the cars completely reshuffle themselves every minute, and that we might have the opportunity to see the succession of ten Malibus at any time across a hundred years. In that case, there would be no more than 74 quadrillion opportunities for ten brand new Chevy Malibus to line up in immediate, uninterrupted succession. So, how improbable is this event given these 1.4 billion cars and their repeated reshuffling? To answer this question requires knowing how many makes and models of cars are on the road and their relative proportions (let’s leave aside how different makes are distributed geographically, which is also relevant, but introduces needless complications for the purpose of this illustration). If, per impossibile, all cars in the world were brand new Chevy Malibus, there would be no coincidence to explain. In that case, all 1.4 billion cars would be identical, and getting ten of them in a row would be an event of probability 1 regardless of reshuffling. But clearly, nothing like that is the case. Go to Cars.com, and using its car-locater widget you’ll find 30 popular makes and over 60 “other” makes of vehicles. Under the make of Chevrolet, there are over 80 models (not counting variations of models—there are five such variations under the model Malibu). Such numbers help to assess whether the event in question happened by chance. Clearly, the event is specified in that it answers to the short description “ten new Chevy Malibus in a row.” For the sake of argument, let’s assume that achieving that event by chance is going to be highly improbable given all the other cars on the road and given any reasonable assumptions about their chance distribution. But there’s more work to do in this example to eliminate chance. No doubt, it would be remarkable to see ten new Chevy Malibus drive past you in immediate, uninterrupted succession. But what if you saw ten new red Chevy Malibus in a row drive past you? That would be even more striking now that they all also have the same color. Or what about simply ten new Chevies in a row? That would be less striking. But note how the description lengths covary with the probabilities: “ten new red Chevy Malibus in a row” has a longer description length than “ten new Chevy Malibus in a row,” but it corresponds to an event of smaller probability than the latter. Conversely, “ten new Chevies in a row” has shorter description length than “ten new Chevy Malibus in a row,” but it corresponds to an event of larger probability than the latter. What we find in examples like this is a tradeoff between description length and probability of the event described (a tradeoff that specified complexity models). In a chance elimination argument, we want to see short description length combined with small probability (implying a larger value of specified complexity). But typically these play off against each other. “Ten new red Chevy Malibus in a row” corresponds to an event of smaller probability than “ten new Chevy Malibus in a row,” but its description length is slightly longer. Which event seems less readily ascribable to chance (or, we might say, more worthy of a design inference)? A quick intuitive assess­ment suggests that the probability decrease outweighs the increase in description length, and so we’d be more inclined to eliminate chance if we saw ten new red Chevy Malibus in a row as opposed to ten of any color. The lesson here is that probability and description length are in tension, so that as one goes up the other tends to go down, and that to eliminate chance both must be suitably low. We see this tension by contrasting “ten new Chevy Malibus in a row” with “ten new Chevies in a row,” and even more clearly with simply “ten Chevies in a row.” The latter has a shorter description length (lower description length) but also much higher probability. Intuitively, it is less worthy of a design inference because the increase in probability so outweighs the decrease in description length. Indeed, ten Chevies of any make and model in a row by chance doesn’t seem farfetched given the sheer number of Chevies on the road, certainly in the United States. But there’s more: Why focus simply on Chevy Malibus? What if the make and model varied, so that the cars in succession were Honda Accords or Porsche Carreras or whatever? And what if the number of cars in succession varied, so it wasn’t just 10 but also 9 or 20 or whatever? Such questions underscore the different ways of specifying a succession of identical cars. Any such succession would have been salient if you witnessed it. Any such succession would constitute a specification if the description length were short enough. And any such succession could figure into a chance elimination argument if both the description length and the probability were low enough. A full-fledged chance-elimination argument in such circumstances would then factor in all relevant low-probability, low-description-length events, balancing them so that where one is more, the other is less. All of this can, as we by now realize, be recast in information-theoretic terms. Thus, a probability decrease corresponds to a Shannon information increase, and a description length increase corresponds to a Kolmogorov information increase. Specified complexity, as their difference, now has the following property (we assume, as turns out to be reasonable, that some fine-points from theoretical computer science, such as the Kraft inequality, are approximately applicable ): if the specified complexity of an event is greater than or equal to n bits, then the grand event consisting of all events with at least that level of specified complexity, has probability less than or equal to 2^(–n). This is a powerful result and it provides a conceptually clean way to use specified complexity to eliminate chance and infer design. Essentially, what specified complexity does is consider an archer with a number of arrows in his quiver and a number of targets of varying size on a wall, and asks what is the probability that any one of these arrows will by chance land on one of these targets. The arrows in the quiver correspond to complexity, the targets to specifications. Raising the number 2 to the negative of specified complexity as an exponent then becomes the grand probability that any of these arrows will hit any of these targets by chance. Formally, the specified complexity of an event is the difference between its Shannon information and its Kolmogorov information. Informally, the specified complexity of an event is a combination of two properties, namely, that the event has small probability and that it has a description of short length. In the formal approach to specified complexity, we speak of algorithmic specified complexity. In the informal approach, we speak of intuitive specified complexity. But typically it will be clear from context which sense of the term “specified complexity” is intended. In this article, we’ve defined and motivated algorithmic specified complexity. But we have not provided actual calculations of it. For calculations of algorithmic specified complexity as applied to real-world examples, I refer readers to sections 6.8 and 7.6 in the second edition of The Design Inference. Section 6.8 looks at general examples whereas section 7.6 looks at biological examples. In each of these sections, my co-author Winston Ewert and I examine examples where specified complexity is low, not leading to a design inference, and also where it is high, leading to a design For instance, in section 6.8 we take the so-called “Mars face,” a naturally occurring structure on Mars that looks like a face, and contrast it with the faces on Mount Rushmore. We argue that the specified complexity of the Mars face is too small to justify a design inference but that the specified complexity of the faces on Mount Rushmore is indeed large enough to justify a design inference. Similarly, in section 7.6, we take the binding of proteins to ATP, as in the work of Anthony Keefe and Jack Szostak, and contrast it with the formation of protein folds in beta-lactamase, as in the work of Douglas Axe. We argue that the specified complexity of random ATP binding is close to 0. In fact, we calculate a negative value of the specified complexity, namely, –4. On the other hand, for the evolvability of a beta-lactamase fold, we calculate a specified complexity of 215, which corresponds to a probability of 2^(–215), or roughly a probability of 1 in 10^65. With all these numbers, we estimate a Shannon information and a Kolmogorov information and then calculate a difference. The validity of these estimates and the degree to which they can be refined can be disputed. But the underlying formalism of specified complexity is rock solid. The details of that formalism and its applications go beyond an article titled “Specified Complexity Made Simple.” Those details can all be found in the second edition of The Design Inference. Appendix: Orgelian Specified Complexity Leslie Orgel, as noted at the start of this article, introduced the term “specified complexity” in his 1973 book The Origins of Life. Although specified complexity as developed by Winston Ewert, Robert Marks, and me and summarized in this article attempts to get at the same informational reality that Orgel was trying to grasp, our formulations differ in important ways. For a fuller understanding of specified complexity, it will therefore help to review what Orgel originally had in mind and to see where our formulation of the concept improves on his. Strictly speaking, this section is of historical interest. It is therefore cast as an appendix. Because The Origins of Life is out of print and hard to get, I quote from it extensively, offering exegetical commentary. I focus on the three pages of his book where Orgel introduces and then discusses specified complexity (pages 189–191). Orgel introduces the term “specified complexity” in a section titled “Terrestrial Biology.” Elsewhere in his book, Orgel also considers non-terrestrial biology, which is why the title of his book refers to the origins (plural) of life—radically different forms of life might arise in different parts of the universe. To set the stage for introducing specified complexity, Orgel discusses the various commonly cited defining features of life, such as reproduction or metabolism. Thinking these don’t get at the essence of life, he introduces the term that is the focus of this article: It is possible to make a more fundamental distinction between living and nonliving things by examining their molecular structure and molecular behavior. In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple, well-specified structures because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures which are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. (p. 189) So far, so good. Everything Orgel writes here makes good intuitive sense. It matches up with the three types of order discussed at the start of this article: repetitive order, random order, complex specified order. Wanting to put specified complexity on a firmer theoretical basis, Orgel next connects it to information theory: These vague ideas can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure. One can see intuitively that many instructions are needed to specify a complex structure. On the other hand, a simple repeating structure can be specified in rather few instructions. Complex but random structures, by definition. need hardly be specified at all. (p. 190) Orgel’s elaboration here of specified complexity calls for further clarification. His use of the term “information content” is ill-defined. He unpacks it in terms of “minimum number of instructions needed to specify a structure.” This suggests a Kolmogorov information measure. Yet complex specified structures, according to him, require lots of instructions, and so suggest high Kolmogorov information. By contrast, specified complexity as developed in this article requires low Kolmogorov information. At the same time, for Orgel to write that “complex but random structures … need hardly be specified at all” suggests low Kolmogorov complexity for random structures, which is exactly the opposite of how Kolmogorov information characterizes randomness. For Kolmogorov, the random structures are those that are incompressible, and thus, in Orgel’s usage, require many instructions to specify (not “need hardly be specified at all”). Perhaps Orgel had something else in mind—I am trying to read him charitably—but from the vantage of information theory, his options are limited. Shannon and Kolmogorov are, for Orgel, the only games in town. And yet, Shannon information, focused as it is on probability rather than instruction sets, doesn’t clarify Orgel’s last remarks. Fortunately, Orgel elaborates on them with three examples: These differences are made clear by the following example. Suppose a chemist agreed to synthesize anything that could be described accurately to him. How many instructions would he need to make a crystal, a mixture of random DNA-like polymers or the DNA of the bacterium E. coli? (p. 190) This passage seem promising for understanding what Orgel is getting at with specified complexity. Nonetheless, it also suggests that Orgel is understanding information entirely in terms of instruction sets for building chemical systems, which then weds him entirely to a Kolmogorov rather than Shannon view of information. In particular, nothing here suggests that he will bring both views of information together under a coherent umbrella. Here’s is how Orgel elaborates the first example, which is replete with the language of short descriptions (as in the account of specified complexity given in this article): To describe the crystal we had in mind, we would need to specify which substance we wanted and the way in which the molecules were to be packed together in the crystal. The first requirement could be conveyed in a short sentence. The second would be almost as brief, because we could describe how we wanted the first few molecules packed together, and then say “and keep on doing the same.” Structural information has to be given only once because the crystal is regular. (p. 190) This example has very much the feel of our earlier example in which Kolmogorov information was illustrated in a sequence of 100 identical coin tosses (0 for tails) described very simply by “repeat ‘0’ 100 times.” For specified complexity as developed in this article, an example like this one by Orgel yields a low degree of specified complexity. It combines both low Shannon information (the crystal forms reliably and repeatedly with high probability and thus low complexity) and low Kolmogorov information (the crystal requires a short description of instruction set). It exhibits specified non-complexity, or what could be called specified simplicity. Orgel’s next example, focused on randomness, is more revealing, and indicates a fatal difficulty with his approach to specified complexity: It would be almost as easy to tell the chemist how to make a mixture of random DNA-like polymers. We would first specify the proportion of each of the four nucleotides in the mixture. Then, we would say, “Mix the nucleotides in the required proportions, choose nucleotide molecules at random from the mixture, and join them together in the order you find them.” In this way the chemist would be sure to make polymers with the specified composition, but the sequences would be random. (p. 190) Orgel’s account of forming random polymers here betrays information-theoretic confusion. Previously, he was using the terms “specify” and “specified” in the sense of giving a full instruction set to bring about a given structure—in this case, a given nucleotide polymer. But that’s not what he is doing here. Instead, he is giving a recipe for forming random nucleotide polymers in general. Granted, the recipe is short (i.e., bring together the right separate ingredients and mix), suggesting a short description length since it would be “easy” to tell a chemist how to produce it. But the synthetic chemist here is producing not just one random polymer but a whole bunch of them. And even if the chemist produced a single such polymer, it would not be precisely identified. Rather, it would belong to a class of random polymers. To identify and actually build a given random polymer would require a large instructional set, and would thus indicate high, not low Kolmogorov information, contrary to what Orgel is saying here about random polymers. Finally, let’s turn to the example that for Orgel motivates his introduction of the term “specified complexity” in the first place: It is quite impossible to produce a corresponding simple set of instructions that would enable the chemist to synthesize the DNA of E. coli. In this case, the sequence matters: only by specifying the sequence letter-by-letter (about 4,000,000 instructions) could we tell the chemist what we wanted him to make. The synthetic chemist would need a book of instructions rather than a few short sentences. (p. 190) Given this last example, it becomes clear that for Orgel, specified complexity is all about requiring a long instructional set to generate a structure. Orgel’s takeaway, then, is this: It is important to notice that each polymer molecule on a random mixture has a sequence just as definite as that of E. coli DNA. However, in a random mixture the sequences are not specified. Whereas in E. coli, the DNA sequence is crucial. Two random mixtures contain quite different polymer sequences, but the DNA sequences in two E. coli cells are identical because they are specified. The polymer sequences are complex but random: although E. coli DNA is also complex, it is specified In a unique way. (pp. 190–191) This is confused. The reason it’s confused is that Orgel’s account of specified complexity commits a category mistake. He admits that a random sequence requires just as long an instruction set to generate as E. coli DNA because both are, as he puts it, “definite.” Yet with random sequences, he looks at an entire class or range of random sequences whereas with E. coli DNA, he is looking at one particular sequence. Orgel is correct, as far as he goes, that from an instruction set point of view, it’s easy to generate elements from such a class of random sequences. And yet, from an instruction set point of view, it is no easier to generate a particular random sequence than a particular non-random sequence, such as E. coli DNA. That’s the category mistake. Orgel is applying instruction sets in two very different ways, one to a class of sequences, the other to particular sequences. But he fails to note the difference. The approach to specified complexity that Winston Ewert and I take, as characterized in this article, takes a different tack. Repetitive order yields high probability and specification, and therefore combines low Shannon and low Kolmogorov information, yielding, as we’ve seen, what can be called specified simplicity. This is consistent with Orgel. But note, our approach yields a specified complexity value (albeit a low one in this case). Specified complexity, as a difference between Shannon and Kolmogorov complexity, takes continuous values and thus comes in degrees. For repetitive order, specified complexity, as characterized in this article, will thus take on low values. That said, Orgel’s application of specified complexity to distinguish a random nucleotide polymer from E. coli DNA diverges sharply from how specified complexity as outlined in this article applies to these same polymers. A random sequence, within the scheme outlined in this article, will have large Shannon information but also, because it has no short description, will have large Kolmogorov information, so the two will cancel each other, and the specified complexity of such a sequence will be low or indeterminate. On the other hand, for E. coli DNA, within the scheme outlined in this article, there will be work to do in showing that it actually exhibit specified complexity. The problem is that the particular sequence in question will have low probability and thus high Shannon information. At the same time, that particular sequence will be unlikely to have a short exact description. Rather, what will be needed to characterize the E. coli DNA as exhibiting specified complexity within the scheme of this article is a short description to which the sequence answers but which also describes an event of small probability, thus combining high Shannon information with low Kolmogorov information. Specified complexity as characterized in this article and applied to this example will thus mean that the description will include not just the particular sequence in question but a range of sequences that answer to the description. Note that there is no category mistake here as there was with Orgel. The point of specified complexity as developed in this article is always with matching events and descriptions of those events, where any particular event is described provided it answers to the description. For instance, a die rolls exhibiting a 6 answers to the description “an even die roll.” So, is there a simple description of the E. coli DNA that shows this sequence to exhibit specified complexity in the sense outlined in this article? That’s in fact not an easy question to answer. The truth of Darwinian evolution versus intelligent design hinges on the answer. Orgel realized this when he wrote the following immediately after introducing the concept of specified complexity, though his reference to miracles is a red herring (at issue is whether life is the result of intelligence, and there’s no reason to think that intelligence as operating in nature need act miraculously): Since, as scientists, we must not postulate miracles we must suppose that the appearance of “life” is necessarily preceded by a period of evolution. At first, replicating structures are formed that have low but non-zero information content. Natural selection leads to the development of a series of structures of increasing complexity and information content, until one is formed which we are prepared to call “living.” (p. 192) Orgel is here proposing the life evolves to increasing levels of complexity, where at each stage nothing radically improbable is happening. Natural selection is thus seen as a probability amplifier that renders probable what otherwise would be improbable. Is there a simple description to which the E. coli DNA answers and whose corresponding event is highly improbable not just when the isolated nucleotides making up the E. coli DNA are viewed as a purely random mixture but rather by factoring in their evolvability via Darwinian evolution? That’s a tough question to answer precisely because evaluating the probability of forming E. coli DNA with or without natural selection is far from clear. Given Orgel’s account of specified complexity, he would have to say that the E. coli DNA exhibits specified complexity. But within the account of specified complexity given in this article, ascribing specified complexity always requires doing some work, finding a description to which an observed event answers, showing the description to be short, and showing the event precisely identified by the description has small probability, implying high Shannon information and low Kolmogorov information. For intelligent design in biology, the challenge in demonstrating specified complexity is always to find a biological system that can be briefly described (yielding low Kolmogorov complexity) and whose evolvability, even by Darwinian means, has small probability (yielding high Shannon information). Orgel’s understanding of specified complexity is quite different. In my view, it is not only conceptually incoherent but also stacks the deck unduly in favor of Darwinian evolution. To sum up, this appendix has presented Orgel’s account of specified complexity at length so that readers can decide for themselves which account of specified complexity they prefer, Orgel’s or the one presented in this article.
{"url":"https://billdembski.com/intelligent-design/specified-complexity-made-simple/","timestamp":"2024-11-09T11:16:13Z","content_type":"text/html","content_length":"138390","record_id":"<urn:uuid:be0e4112-6f46-4f3e-9a0d-5d9c824dcb41>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00052.warc.gz"}
How do you find the x and y intercept of x+y=2? | HIX Tutor How do you find the x and y intercept of #x+y=2#? Answer 1 Both $x$ and $y$ intercepts are $2$. To find #x#-intercept, we should put #y=0# and to find #y#-intercept, we should put #x=0#. Putting #x=0#, we get #y=2# and putting #y=0#, we get #x=2# Hence both #x# and #y# intercepts are #2#. graph{2-x [-4.396, 5.604, -1.42, 3.58]} Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer from HIX Tutor When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some Not the question you need? HIX Tutor Solve ANY homework problem with a smart AI • 98% accuracy study help • Covers math, physics, chemistry, biology, and more • Step-by-step, in-depth guides • Readily available 24/7
{"url":"https://tutor.hix.ai/question/how-do-you-find-the-x-and-y-intercept-of-x-y-2-8f9af915e8","timestamp":"2024-11-14T10:44:43Z","content_type":"text/html","content_length":"575521","record_id":"<urn:uuid:f2fd043d-b684-4fc4-a31f-0c60eee9832d>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00578.warc.gz"}
Walking route You can change any of the parameters that make up the route, the length, type (circular or straight), starting point and even the series of signs. Distance (km): [2 ] Type: [Straight (start point different from end point)] Select Sign series or tag: [Sign series] The series / Tag: [--- A series/ tag must be selected --- ] The sign used as the starting point: [--- A sign must be selected --- ]
{"url":"https://www.streetsigns.co.il/doCreateRoute.asp?s=473&e=3&d=2&t=1&c=2","timestamp":"2024-11-06T12:13:51Z","content_type":"text/html","content_length":"164169","record_id":"<urn:uuid:73792b0f-5dea-4cd5-b9d4-5dd019bbdca0>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00681.warc.gz"}
TOPAL Working Group 1 Every Thursday at 09:30 2 Schedule 2024 • 26/11/2024 : [Hmat part2] • 07/11/2024 : JCAD, ExaSoft and GuixHPC meetings • 24/11/2024 : [Dimitri] • 02/10/2024 : [Hmat part1] • 26/09/2024 : [Salman] • 12/09/2024 : [Laercio] • 27/06/2024 : [Ana/Abel] • 20/06/2024 : [Alycia/Raphael] • 13/06/2024 : canceled • 06/06/2024 : [Lionel] • 30/06/2024 : [Philippe] • 28/05/2024 : [Erik] • 23/05/2024 : [Hayfa] • 21/05/2024 : [Esragul] • 02/05/2024 : [SOLHARIS/NUMPEX] • 15/04/2024 : [Jean-François] • 11/04/2024 : Journée EDMI • 04/04/2024 : canceled • 28/03/2024 : [Somesh] • 21/03/2024 : [internships] • 14/03/2024 : [C++ part3] • 08/02/2024 : [Alycia and Thomas] • 06/02/2024 : [Mohamed] • 01/02/2024 : [C++ part2] • 18/01/2024 : [C++ part1] • 11/01/2024 : [Hayfa] 2.1 Tuesday 26 November 2024 2.1.1 Speakers: ENSTA ParisTech - Inria POEMS 2.1.2 Title: H-matrix day 2 2.2 Thursday 24 October 2024 2.2.1 Speakers: Dimitri Walther 2.2.2 Title: Hierarchical partitioner for electromagnetic simulation of complex 3D objects 2.2.3 Summary: In the context of numerical simulations of electromagnetism, integral methods are among the most widely used because of their power. These methods lead to the solution of dense linear problems and are therefore very expensive. For this reason, hierarchical compression methods have been developed that drastically reduce the cost associated with these matrices. They are based on a hierarchical partitioning of the matrix, and therefore of the mesh, and the efficiency of the compression depends on this partitioning. In this context, the aim of the thesis is to develop efficient and scalable hierarchical partitioners to optimize the compression of the matrix. These partitioners have to be integrated into the CEA/Cesta simulation code currently in production and into the HPC environment of the CEA DAM supercomputers. 2.3 Wednesday 2 Octobre 2024 2.3.1 Speakers: ENSTA Paris - Inria POEMS 2.3.2 Title: H-matrix day 1 2.4 Thursday 26 September 2024 2.4.1 Speakers: Salman Toor (Uppsala University) 2.4.2 Title: FEDn - A scalable federated machine learning framework for cross-device and cross-silo environments 2.4.3 Summary: Federated machine (FL) learning has opened new avenues for privacy-preserving data analysis. The classical implementation of federated learning has several limitations, including restricted scalability, efficiency concerns, and vulnerabilities to cyberattacks. Recent research has shown that due to the distributed nature of the whole training process in FL, the previously reported impact of attacks on centralized model training cannot be directly mapped to federated model training. The attack landscape needs to be adapted for FL. In this talk, I will cover the architectural details of FEDn, a framework designed to address the scalability, efficiency and security limitations of federated machine learning. I will discuss how FEDn effectively supports both cross-device and cross-silo use cases. I will also cover our ongoing efforts to understand the impact of cyberattacks on federated learning and potential mitigation strategies. 2.5 Thursday 12 September 2024 2.5.1 Speakers: Laercio Lima Pilla 2.5.2 Title: Recent works on Scheduling Algorithms, Libraries and Tools for all (SALT4all) 2.5.3 Summary: In this short talk, I will talk about our recent efforts to improve the execution time and resource utilization of two different kinds of applications in two different environments. I will first explain how, in the context of Diane Orhan’s PhD, we found an optimal algorithm for scheduling a streaming application (Software-Defined Radio) on homogeneous resources, and where we go from there. Then I will discuss how, in the context of Alan Lira Nunes’ PhD, we worked to propose optimal algorithms for assigning work to distributed learning (Federated Learning) agents on heterogeneous resources. We will see how problems that look very different at a first glance may share parts of their solutions. 2.6 Thursday 27 June 2024 2.6.1 Speakers: Ana Hourcau 2.6.2 Title: Exploitation de la multi-précision pour accélérer les calculs 2.6.3 Summary: L'objectif de cette présentation est d'évaluer l'implémentation multi-précision d'opérations sur des matrices, telles que la factorisation de Cholesky, pour en étudier les gains en optimisation selon plusieurs critères. L'utilisation de précisions faibles peut permettre l'accélération des calculs, notamment sur les architectures avec des unités tensorielles qui exploitent largement la demi précision. L'impact numérique doit toutefois pouvoir être maitrisé pour garantir une cohérence du résultat final. Ces travaux ont été intégrés dans l'application ExaGeoStat développée par KAUST, permettant de recueillir des données de climat et météo pour prédire d'éventuelles observations manquantes. Ces prédictions nécessitent des résolutions de matrices de covariances définies positives et symétriques. Une première implémentation avait déjà été réalisée, se basant sur un critère algébrique, en définissant des bandes de régions le long de la diagonale pour appliquer la précision correspondante. En effet, sur la diagonale de la matrice, les variances sont les plus importantes, la précision appliquée doit être la plus forte. Plus on s'éloigne de la diagonale et plus la précision est faible. Les conversions sont gérées en manipulant une version de la donnée pour chaque précision, impliquant un coût mémoire à prendre en compte. Une autre implémentation utilisant le support d'exécution PaRSEC propose de diminuer le surcoût mémoire au prix de plus de conversions, intervenant à chaque appel de noyaux de calcul. On propose d'utiliser le support d'exécution StarPU afin de rendre la gestion plus dynamique et éviter d'avoir plusieurs versions des données. La conversion se fait à l'exécution, en choisissant le noyau correspondant à la précision voulue. De plus, afin de mieux adapter la précision des données, la précision de chaque tuile est basée sur un critère numérique en fonction de la norme de Frobenius de cette tuile par rapport à celle de la matrice. Enfin, lorsque la bibliothèque BLAS le permet, on utilisera le noyau GEMM étendu qui propose d'effectuer les calculs avec des matrices dans des précisions différentes, évitant ainsi le surcoût de la conversion. C'est le cas notamment de la bibliothèque cuBLAS, qui permet d'exploiter la demi précision, qui n'est disponible que sur GPU. 2.6.5 Speakers: Abel Calluaud 2.6.6 Title: Vers un solveur hiérarchique performant parallélisé par les tâches 2.6.7 Summary: La recherche de performance pour la résolution de grands systèmes d'équations linéaires est un enjeu majeur pour de nombreuses applications scientifiques et industrielles. Le paradigme de tâches Sequential Task Flow (STF) facilite l'expression de telles opérations d'algèbres linéaires en soumettant des tâches à un support exécutif généraliste qui les ordonnance efficacement sur les ressources disponibles. L'état de l'art tire parti d'un découpage en tuiles des matrices opérandes qui permet d'extraire des tâches indépendantes exploitant efficacement les unités de calculs et mémoires caches de l'architecture matérielle. La complexité algorithmique cubique de la factorisation directe reste cependant prohibitive pour de grandes tailles de matrice. Les approximations de rang faible permettent de réduire cette complexité en exploitant la structure de rang faible inhérente à certains problèmes comme ceux rencontrés en électromagnétisme ou en géophysique. Ainsi, la structure de rang permet de compresser certaines tuiles sous forme \(B=UV^t\), c'est le format Tile Low Rank (TLR) que nous avons introduit dans la bibliothèque CHAMELEON. Pour ce faire, nous avons intégré les noyaux de calcul nécessaires dans la librairie RAPACK pour manipuler des combinaisons de matrices stockées sous forme compressée ou dense. En réduisant le coût mémoire et calculatoire, les résultats préliminaires ont montré que l'approche améliore le passage à l'échelle pour la factorisation de matrices de grande taille. Notre implémentation repose sur les algorithmes tuilés existants de CHAMELEON et du support exécutif basé sur les tâches StarPU. Le modèle de programmation a récemment été étendu avec le support de tâches récursives, qui permet d'exprimer des tâches qui peuvent dynamiquement soumettre un sous-graphe. C'est un verrou important pour l'extension prévue de notre travail au format Tile Hierarchical Low Rank (THLR) qui gagne un ordre de grandeur supplémentaire par rapport à TLR en appliquant récursivement le découpage en tuiles. La librairie CHAMELEON-HMAT fournit une telle implémentation, mais le grain des tâches est fixé statiquement par la taille de tuiles, limitant sévèrement le parallélisme exploitable. Notre projet CHAMELEON-RAPACK vise à exprimer les opérations via les tâches récursives, rendant possibles de nouvelles optimisations pour adapter l'exécution à des ressources matérielles hétérogènes. 2.7 Thursday 20 June 2024 2.7.1 Speakers: Alycia Lisito 2.7.2 Title: Efficient HPL on top of runtime systems 2.7.3 Summary: De nos jours, les machines sont de plus en plus grosses, complexes et hétérogènes. Cela rend plus difficile d'avoir un code générique qui soit perfomant, indifféremment de l'architecture de la machine. Afin d'atteindre cet objectif, de plus en plus d'algorithmes sont implémentés sur des supports d'exécution à base de tâches. Dans ce papier, nous proposons une implémentation de HPL sur ces supports d'execution à base de taches. Pour cela, nous avons implémenté l'algorithme dans Chameleon, qui est une bibliothèque déjà optimisée et performante, avec le support d'exécution StarPU. Nous expliquons comment arriver à une implementation performance et montrons comment nous arrivons à obtenir 93% en pic de performance par rapport au code de référence sans pivotage. 2.7.5 Speakers: Raphael Bourgouin 2.7.6 Title: Parallelism in mixture of experts decoder models 2.7.7 Summary: Nowadays, the amount of digital data is increasing exponentially, and so is the importance of machine learning, especially natural language processing models. However, training these models requires an increasing quantity of resources, which poses a great challenge. This talk aims to improve the efficiency of parallel training by utilizing the possibilities offered by mixture of experts in decoder models. 2.8 Thursday 06 June 2024 2.8.1 Speakers: Lionel Eyraud-Dubois 2.8.2 Title: Tightening I/O lower bounds through the hourglass dependency pattern 2.8.3 Summary: When designing an algorithm, one cares about arithmetic/computational complexity, but data movement (I/O) complexity plays an increasingly important role that highly impacts performance and energy consumption. For a given algorithm and a given I/O model, scheduling strategies such as loop tiling can reduce the required I/O down to a limit, called the I/O complexity, inherent to the algorithm itself. The objective of I/O complexity analysis is to compute, for a given program, its minimal I/O requirement among all valid schedules. We consider a sequential execution model with two memories, an infinite one, and a small one of size \(S\) on which the computations retrieve and produce data. The I/O is the number of reads and writes between the two memories. We identify a common “hourglass pattern” in the dependency graphs of several common linear algebra kernels. Using the properties of this pattern, we mathematically prove tighter lower bounds on their I/O complexity, which improves the previous state-of-the-art bound by a parametric ratio. This proof was integrated inside the IOLB automatic lower bound derivation tool. 2.9 Thursday 30 May 2024 2.9.1 Speakers: Philippe Swartvagher 2.9.2 Title: Making reproducible and publishable large-scale HPC experiments 2.9.3 Summary: For a long time, scientific publications focused only on experimental results, ignoring how, concretely, the results were obtained, making difficult for readers, but also for the author, to reproduce the experiments. Things are slowly changing: publication of so-called "artifacts" are encouraged by journals and conferences. However, releasing scripts and programs used for experiments can be challenging: how to organize the material? how to clearly document the instructions? how to ensure reproducibility of the experiments? how to ensure long-term availability? Several answers are possible to all these questions. In this talk, I will try to summarize how and why my methodology to build reproducible artifacts evolved over several years in the research area. 2.10 Tuesday 28 May 2024 2.10.1 Speakers: Erik Saule 2.10.2 Title: Optimizing Distributed Dataflow Graph Algorithms 2.10.3 Summary: Dataflow graph algorithms are algorithms where the calculation flow on vertices based on a total order. These dataflow algorithms can be used to compute graph colorings, maximum independent sets, matchings, and other. One of the bottleneck in these algorithms is the longest path induced by the total order. And these total orders are often derived from a random draw. We show statistically that we can derive better orders on some graphs (which exhibit small world properties) by changing the random order generation method at no additional runtime cost. We provide the basis for analyzing formally the length of the longest path of a category of Erdos-Renyi graphs. 2.11 Thursday 23 May 2024 2.11.1 Speakers: Hayfa Tayeb 2.11.2 Title: Dynamic Tasks Scheduling with Multiple Priorities on Heterogeneous Computing Systems 2.11.3 Summary: The efficient utilization of heterogeneous computing systems is crucial for scientists and industrial organizations to execute computationally intensive applications. Task-based programming has emerged as an effective approach for harnessing the processing power of these systems. However, effective scheduling of task-based applications is critical for achieving high performance. Typically, these applications are represented as directed acyclic graphs (DAGs), which can be optimized through careful scheduling to minimize execution time and maximize resource utilization. In this paper, we introduce MultiPrio, a dynamic task scheduler that aims to minimize the overall completion time of parallelized task-based applications. The goal is to find a trade-off between resource affinity, task criticality, and workload balancing on the resources. To this end, we compute scores for each task and manage the available tasks in the system with a data structure based on a set of priority queues. Tasks are assigned to available resources according to these scores, which are dynamically computed by heuristics based on task affinity and criticality. We also consider workload balancing across resources and data locality awareness. To evaluate the scheduler, we study the performance of dense and sparse linear algebra task-based applications and task-based FMM application using the StarPU runtime system on heterogeneous nodes. Our scheduler shows interesting results compared to other state-of-the-art schedulers in StarPU for regular applications, and excels at optimizing irregular workloads, improving performance by up to 31%. 2.12 Tuesday 21 May 2024 2.12.1 Speakers: Esragul Korkmaz 2.12.2 Title: $1.25(1+ε)$-Approximation Algorithm for Scheduling with Rejection 2.12.3 Summary: We address an offline job scheduling problem where jobs can either be processed on a limited supply of energy-efficient machines, or offloaded to energy-inefficient machines (with an unlimited supply), and the goal is to minimize the total energy consumed in processing all tasks. This scheduling problem can be formulated as a problem of scheduling with rejection, where rejecting a job corresponds to process it on an energy-inefficient machine and has a cost directly proportional to the processing time of the job. To solve this scheduling problem, we introduce a novel \(\frac{5}{4} (1+\epsilon)\) approximation algorithm (BEKP) by associating it to a Multiple Subset Sum problem. Our algorithm is an improvement over the existing literature, which provides a (\(\frac{3}{2} - \frac {1}{2m}\)) approximation for scheduling with arbitrary rejection costs. We evaluate and discuss the effectiveness of our approach through a series of experiments, comparing it to existing algorithms. 2.13 Thuesday 02 May 2024 2.13.1 ExaSoft 02/05 • 10h-10h30 : Présentation générale des WP3 (supports d'exécution) et WP4 (bibliothèques numériques) • 10h30-11h : Dynamic Task Graph Adaptation with Recursive Tasks (Thomas Morin) • 11h-11h30 : Integration of asynchronous network communications scheduling and local task scheduling (Alexandre Denis) • 11h30-12h : titre à venir (Bérenger Bramas ou Samuel Thibault) • 14h-14h30 : Towards scalable dense and sparse linear algebra using task-based programming models (Antoine Jego) • 14h30-15h : Efficient implementation of approximate computing algorithms (Alfredo Buttari) • 15h30-16h : Celeste: A task-parallel framework for tensor computations (Oguz Kaya) • 16h-16h30 : Extension of Chameleon to small dimension tensors for large distributed systems with applications to deep neural networks (Brieuc Nicolas) 2.13.2 Solharis 03/05 • 9:00-9:25 : Algorithmes d'allocation statique pour la planification d'applications haute performance (Lionel Eyraud-Dubois) • 9:25-9:50 : Batching small tasks on top of runtime systems for HPL (Alycia Lisito) • 9:50-10:15 : Ordonnancement sous contrainte mémoire dans un modèle de programmation à base de tâches (Loris Marchal) • 10:30-10:55 : De l'interaction entre les supports d'exécution à tâches HPC et les bibliothèques de communications (Philippe Swartvagher) • 10:55-11:20 : Programmation des architectures hétérogènes à l'aide de tâches divisibles (Pierre-André Wacrenier) • 11:20-11:55 : Solveurs rapides pour l'aéroacoustique haute fréquence (Emmanuel Agullo) 2.14 Monday 15 April 2024 2.14.1 Speakers: Jean-François 2.14.2 Title: StarONNX: a Dynamic Scheduler for Low Latency and High Throughput Inference on Heterogeneous Resources 2.14.3 Summary: Efficient execution of Deep Neural Network (DNN) models on heterogeneous processors is challenging, not only because of the heterogeneity of CPUs and hardware accelerators, but also because the problem is fundamentally bi-objective in many contexts, since both latency (time to perform an inference) and throughput (number of inferences per unit time) need to be optimized. We present StarONNX, a solution based on integrating ONNX Runtime in StarPU, which aims to optimize the distribution of inference tasks and resource management on heterogeneous architectures. This strategy relies on (i) the efficient execution of deep learning models by ONNX Runtime to maximize efficiency, and (ii) the orchestration of heterogeneous resources by StarPU to provide sophisticated scheduling and overlapping strategies for computation and communication. An essential point of the framework is the ability to split a CNN into two parts, one running on the GPU and the other on the CPU, thus increasing throughput by using all possible resources without compromising latency. We show that integrating the ONNX runtime into StarPU does not introduce significant overhead. We also evaluated our approach against the Triton Inference Server and showed a significant improvement in resource utilization and reduced latency. 2.15 Thursday 28 March 2024 2.15.1 Speakers: Somesh Singh 2.15.2 Title: High-performance sparse tensor computations 2.16 Thursday 21 March 2024 2.16.1 Speaker: Brieuc Nicolas 2.16.2 Title: Extending Chameleon to tensors contraction 2.16.4 Speaker: Adrien Aguila-Multner 2.16.5 Title: Efficient training of Neural Networks 2.16.7 Speaker: Ana Hourcau 2.16.8 Title: Exploiting mixed-precision to speed-up applications 2.17 Thursday 14 March 2024 2.17.1 Speakers: Pierre Esterie 2.17.2 Title: Existing features and new features to come from the C++ standard that can enhance HPC software (Part 3/3) 2.17.3 Summary: Open discussion about the language, design methods and its challenges. Let's discuss on the language, what would you like/need? What's blocking or scary? Useful ressources. 2.18 Thursday 08 February 2024 2.18.1 Speaker 1 : Alycia Lisito 2.18.2 Title: Enhancing sparse direct solver scalability through runtime system automatic data partition 2.18.3 Summary: With the ever-growing number of cores per node, it is critical for runtime systems and applications to adapt the task granularity to scale on recent architectures. Among applications, sparse direct solvers are a time-consuming step and the task granularity is rarely adapted to large many-core systems. In this paper, we investigate the use of runtime systems to automatically partition tasks in order to achieve more parallelism and refine the task granularity. Experiments are conducted on the new version of the PaStiX solver, which has been completely rewritten to better integrate modern task-based runtime systems. The results demonstrate the increase in scalability achieved by the solver thanks to the adaptive task granularity provided by the StarPU runtime system. 2.18.5 Speaker 2 : Thomas Morin 2.18.6 Title: Optimizing Parallel System Efficiency: Dynamic Task Graph Adaptation with Recursive Tasks 2.18.7 Summary: Task-based programming models significantly improve the efficiency of parallel systems. The Sequential Task Flow (STF) model focuses on static task sizes within task graphs, but determining optimal granularity during graph submission is tedious. To overcome this, we extend StarPU’s STF recursive tasks model, enabling dynamic transformation of tasks into subgraphs. Early evaluations on homogeneous shared memory reveal that this just-in-time adaptation enhances performance. 2.19 Thuesday 06 February 2024 2.19.1 Speakers: Mohamed Kherraz 2.19.2 Title: PyTorch Geometric framework with tutorials 2.20 Thursday 01 February 2024 2.20.1 Speakers: Pierre Esterie 2.20.2 Title: Existing features and new features to come from the C++ standard that can enhance HPC software (Part 2/3) 2.20.3 Summary: Parallelism and asynchrony support. Concurrency library additions : an attempt to a unified interface. 2.21 Thursday 18 January 2024 2.21.1 Speakers: Pierre Esterie 2.21.2 Title: Existing features and new features to come from the C++ standard that can enhance HPC software (Part 1/3) 2.21.3 Summary: Containers, views and algorithms. Memory handling and abstractions, linear algebra support. 2.22 Thursday 11 January 2024 2.22.1 Speakers: Hayfa Tayeb 2.22.2 Title: Autovesk: Automatic vectorized code generation from unstructured static kernels using graph transformations 2.22.3 Summary: Leveraging the SIMD capability of modern CPU architectures is mandatory to take full advantage of their increased performance. To exploit this capability, binary executables must be vectorized, either manually by developers or automatically by a tool. For this reason, the compilation research community has developed several strategies for transforming scalar code into a vectorized implementation. However, most existing automatic vectorization techniques in modern compilers are designed for regular codes, leaving irregular applications with non-contiguous data access patterns at a disadvantage. In this article, we present a new tool, Autovesk, that automatically generates vectorized code from scalar code, specifically targeting irregular data access patterns. We describe how our method transforms a graph of scalar instructions into a vectorized one, using different heuristics to reduce the number or cost of instructions. Finally, we demonstrate the effectiveness of our approach on various computational kernels using Intel AVX-512 and ARM SVE. We compare the speedups of Autovesk vectorized code over GCC, Clang LLVM, and Intel automatic vectorization optimizations. We achieve competitive results on linear kernels and up to 11x speedups on irregular kernels. 3 Schedule 2023 • 21/12/2023 : Portes fermees Inria • 14/12/2023 : canceled • 07/12/2023 : [Somesh] • 30/11/2023 : round table • 23/11/2023 : [Abel] • 16/11/2023 : canceled • 09/11/2023 : [Radja] (STORM talk) • 02/11/2023 : canceled (autumn holidays) • 26/10/2023 : canceled • 19/10/2023 : [Philippe] • 11/10/2023 : [Thomas + HPC Days] • 05/10/2023 : [Ahmed] • 28/09/2023 : [Kickoff Exasoft 28 et 29] • 22/06/2023 : [Abel] • 15/06/2023 : [Alycia] • 08/06/2023 : [Julia] • 01/06/2023 : canceled • 28/04/2023 : [Jean] • 27/04/2023 : [Alexandre] • 20/04/2023 : canceled (spring holidays) • 13/04/2023 : [Xunyi] • 06/04/2023 : Day of the doctoral school • 30/03/2023 : SOLHARIS workshop • 23/03/2023 : JLESC workshop • 16/03/2023 : canceled • 09/03/2023 : canceled • 02/03/2022 : [Aurore] • 23/02/2022 : [Clément] • 16/02/2022 : canceled (winter holidays) • 09/02/2023 : [Valentin] • 02/02/2023 : canceled • 24/01/2023 : [Quantic ATOS] • 19/01/2023 : postpone • 12/01/2023 : [Brieuc] 3.1 Thursday 07 December 2023 3.1.1 Speakers: Somesh Singh 3.1.2 Title: High-performance sparse tensor computations 3.1.3 Summary: Tensors arise in several application domains such as scientific computing, machine learning, and data analytics. The talk will focus on two operations on sparse tensors: • tensor contraction and • querying for nonzeros in a tensor. Tensor contraction is a higher-dimensional analog of matrix-matrix multiplication. Querying for nonzeros in a tensor is a subproblem in a recently developed tensor decomposition method. We investigate hashing-based methods for improving the performance of sparse tensor computations on shared-memory systems. 3.2 Thursday 23 November 2023 3.2.1 Speakers: Abel Calluaud 3.2.2 Title: State of the art on H-matrix libraries 3.2.3 Summary: 3.3 Thursday 09 November 2023 3.3.1 Speakers: Radjasouria Vinaygame 3.3.2 Title: Rethinking Data Race Detection in MPI-RMA Programs 3.3.3 Summary: Supercomputers are capable of increasingly more computations, and nodes forming them need to communicate even more efficiently with each other. The Message Passing Interface (MPI) proposes a communication model based on one-sided communications called the MPI Remote Memory Access (MPI-RMA). Thanks to these operations, applications can improve the overlap of communications with computations. However, one-sided communications are complex to write since they are subject to data races. This paper rethinks an existing on-the-fly data race detection algorithm for MPI-RMA programs by improving the storage of memory accesses in a Binary Search Tree using a new insertion algorithm based on fragmentation and merging algorithms. Thus, experimental results on real-life applications show that this new insertion algorithm improves the accuracy of the data race detection and can reduce the overhead of the analysis at runtime by a factor up to two. 3.4 Thursday 19 October 2023 3.4.1 Speakers: Philippe Swartvagher 3.4.2 Title: Using Mixed-Radix Decomposition to Enumerate Computational Resources of Deeply Hierarchical Architecture 3.4.3 Summary: Considering the increasing number of hierarchy levels (sockets, NUMA domain, L3 cache, …) in HPC systems and the importance of process mapping for MPI application performance, we propose a procedure for enumerating the computational elements in the hierarchy in different orders. We explore two use cases: MPI rank reordering for applications using subcommunicators and core se- lection for applications that do not use all cores on a node. Results of micro-benchmarks executing collective operations in subcommunicators show a performance difference up to a factor 4 between the best and the worst rank orderings. By changing the rank orders, we observe an impact on the performance of the Splatt application. The evaluation of the strong scalability of a conjugate gradient benchmark showed that considering all hierarchy levels in the core selection policy can give better performance than using only options available with common MPI application launchers. 3.5 Wednesday 11 October 2023 3.5.1 Speakers: Samuel Thibault, Vincenç Beltran and Thomas Hérault. 3.5.2 Title: HPC Days following Gwenole'PhD defense 3.5.3 Summary: (Amphi LaBRI) The schedule is the following: • 9:00-9h45 : Samuel Thibault "Task scheduling in StarPU" • 9h45:10h30 : Vincenç Beltran "OmpSs-2: A Programming Model For Distributed and Heterogeneous Systems" • 10h30-11h: Coffee break • 11h-11h45 : Thomas Herault "Template Task Graphs: a composable task-based programming interface for distributed hybrid systems" 3.6 Thursday 05 October 2023 3.6.1 Speakers: Ahmed Abdourahman Mahamoud 3.6.2 Title: Combine ROTOR with Pipe and DDP 3.6.3 Summary: Scaling up deep neural network capacity has been known as an effective approach to improving model quality for several different machine learning tasks. ROTOR is an activation checkpointing method that helps scaling models for a single accelerator by significantly decreasing the memory usage when training sequential models. Pipe and DDP are distributed training methods in which the deep learning model is partitioned across multiple devices for Pipe or copied on multiple devices for DDP. We combined ROTOR with Pipe and DDP in order to use it in a multi-device architecture. During this project, we worked on the adaptation of the algorithms of ROTOR for the pipelining context and we analyzed ROTOR’s performances in a multi-device architecture with sequential models. 3.7 Thursday 22 June 2023 3.7.1 Speakers: Abel Calluaud 3.7.2 Title: Analyse de performance d’un solveur direct hiérarchique 3.7.3 Summary: Talk for COMPAS 2023. La recherche de performance pour la résolution de systèmes linéaires de grande taille est la clé pour de nombreuses applications de simulation numérique. Ainsi, le code de simulation ARLENE du CEA s’intéresse à la résolution des équations de Maxwell en fréquence pour étudier les problèmes de furtivité électromagnétique. Il repose sur une méthode de factorisation directe extrêmement gourmande en mémoire et en temps de calcul, limitant la taille des problèmes qui peuvent être étudiés. Les besoins industriels pour la résolution de systèmes toujours plus grands ont motivé l’utilisation de techniques de compression hiérarchique pour réduire les besoins en termes de mémoire et de puissance de calcul. Les approximations H-matrices ont rendu atteignable en temps raisonnable la résolution sur de très grands systèmes en ramenant la complexité de la factorisation à O(nlog(n)) contre O(n3) pour un algorithme dense. L’introduction de H-matrices introduit toutefois des difficultés pour une implémentation efficace de par la grande combinatoire de noyaux de calculs qui varient en taille et en intensité arithmétique. Les algorithmes sur ce format hiérarchique s’expriment naturellement à l’aide d’un paradigme de tâches dont l’ordonnancement est délégué à un support d’exécution dédié. Basé sur le modèle Séquential Task Flow (STF), celui-ci bénéficie d’extensions permettant d’exploiter les particularités du domaine, notamment la prise en compte des dépendances hiérarchiques de données à grain fin afin d’éviter des synchronisations excessives. Cette approche a permis de factoriser efficacement des problèmes qui comportent jusqu’à des dizaines de millions d’inconnues en quelques heures sur quelques dizaines de milliers de cœurs. Cependant, l’irrégularité des calculs laisse une question de recherche ouverte sur l’adaptation de la granularité des tâches à l’architecture matérielle. Les petits noyaux peuvent d’une part souffrir d’inefficacités et les larges tâches gagneraient à être parallélisées. Dans ce contexte, nous proposons une méthodologie et des métriques afin d’évaluer la performance des noyaux par rapport aux limites matérielles. Nous présentons les résultats de performance des noyaux et en faisons une analyse détaillée. Nous mettons en évidence les problèmes de granularité et faisons une estimation du gain potentiel d’une stratégie de fusion pour le grain faible et de parallélisation pour le gros grain. 3.8 Thursday 15 June 2023 3.8.1 Speakers: Alycia Lisito 3.8.2 Title: New parallel features in the sparse solver PaStiX 3.8.3 Summary: Talk for Sparse Days 2023. In this talk, we will present new parallel features in the sparse solver PaStiX. We will start by introducing the distributed MPI version of the initialisation of the factorisation step and solve step. We will then discuss about the new multi-threaded and distributed versions of PaStiX's solve (with its internal schedulers). Finally, we will present a left looking version of the task submission in the POTRF and GETRF functions of the scheduler StarPU. 3.9 Thursday 08 June 2023 3.9.1 Speakers: 3.9.2 Title: ELF (Efficient deep Learning Frameworks) associate team 3.9.3 Summary: Defriefing of the ELF workshop 3.10 Friday 28 April 2023 3.10.1 Speakers: Jean Kossaifi 3.10.2 Title: ELF (Efficient deep Learning Frameworks) associate team 3.10.3 Summary: First workshop organizing in the framework of ELF (Efficient deep Learning Frameworks) associate team between Topal and Anima AI + Science Lab from Caltech (http://tensorlab.cms.caltech.edu/users/ The schedule is the following: Jean will give an overview of tensor methods for deep learning, starting with a short introduction to tensor decomposition, how to leverage tensor methods to design better deep models, for improved performance or speed, model compression and robustness and finally practical implementation using TensorLy and TensorLy-Torch. He will also touch on how to leverage tensor methods for improving quantum circuit simulations and neural operators. • 10.30 to 12.00 : Discussion and brainstorming on efficient training of large deep learning models using re-materialization, offloading, pipelining techniques • 14.30 to 16.00 : Discussion and brainstorming on the high-performance computing aspects of tensor computations (dense, sparse, compressed approaches, low-level kernel optimization, runtime systems considerations) 3.11 Thursday 27 April 2023 3.11.1 Speakers: Alexandre Honorat 3.11.2 Title: Sequential Scheduling of Dataflow Graphs for Memory Peak Minimization 3.11.3 Summary: Many computing systems are constrained by their fixed amount of shared memory. Modeling applications with task or Synchronous DataFlow (SDF) graphs makes it possible to analyze and optimize their memory peak. The problem studied here is the memory peak minimization of such graphs when scheduled sequentially. Regarding task graphs, former work has focused on the Series-Parallel Directed Acyclic Graph (SP-DAG) subclass and proposed techniques to find the optimal sequential algorithm w.r.t. memory peak. As main contributions, we propose task graph transformations and an optimized branch and bound algorithm to solve the problem on a larger class of task graphs. The approach also applies to SDF graphs after converting them to task graphs. However, contributions about SDF graphs will be skipped for this presentation. We evaluate our approach on classic benchmarks, on which we always outperforms the state-of-the-art. 3.12 Thursday 13 April 2023 3.12.1 Speakers: Xunyi 3.12.2 Title: Rockmate: an Efficient, Fast, Automatic and Generic Tool for Re-materialization in PyTorch 3.12.3 Summary: In recent years, very large networks have emerged. These networks induce huge memory requirements both because of the number of parameters and the size of the activations that must be kept in memory to perform back-propagation. Memory issues for training have been identified for a long time. Indeed, training is usually performed on computing resources such as GPUs or TPUs, on which memory is limited. Therefore, different approaches have been proposed. 3.13 Thursday 30 March 2023 3.13.1 Speakers: 3.13.3 Summary: The working group will be held jointly with the HPC-SCALABLE-ECOSYSTEM project. 3.14 Thursday 23 March 2023 3.14.1 Speakers: 3.14.3 Summary: The 15th JLESC Workshop gathers leading researchers in high-performance computing from the JLESC partners INRIA, the University of Illinois, Argonne National Laboratory, Barcelona Supercomputing Center, Jülich Supercomputing Centre, RIKEN Center for Computational Science and the University of Tennessee to explore the most recent and critical issues in advancing the field of HPC from petascale to the extreme scale era. 3.15 Thursday 02 March 2023 3.15.1 Speakers: Aurore Li 3.15.2 Title: Some words about the ELFAM project 3.15.3 Summary: There are two parts in this presentation: • I would like to talk about some of my professional experiences before joining Topal team • I'll present the projet ELFRAM collaborating with Julia Gusak. I'll talk about some ongoning works and our perspectives. 3.16 Thursday 23 Febuary 2023 3.16.1 Speakers: Clément Richefort 3.16.2 Title: Toward a multilevel method for the Helmholtz equation 3.16.3 Summary: It is well known that multigrid methods are very competitive in solving a wide range of SPD problems. However achieving such performance for non-SPD matrices remains an open problem. In particular, two main issues may arise when solving a Helmholtz problem. Some eigenvalues become negative or even complex, requiring the choice of an adapted smoothing method for capturing them. Moreover, since the near-kernel space is oscillatory, the geometric smoothness assumption cannot be used to build efficient interpolation rules. We present some investigations about designing a method that converges in a constant number of iterations with respect to the wavenumber. The method builds on an ideal reduction-based framework and related theory for SPD matrices to correct an initial least squares minimization coarse selection operator formed from a set of smoothed random vectors. 3.17 Thursday 09 Febuary 2023 3.17.1 Speakers: Valentin Lefevre 3.17.2 Title: Efficient Execution of SpGEMM on Long Vector Architectures 3.17.3 Summary: In this talk, we will look at how long vector architectures are used in the context of the European Processor Initiative (EPI) and we will then focus on optimizing SpGEMM in this context. The Sparse GEneral Matrix-Matrix multiplication (SpGEMM) \(C = A \times B\) is a fundamental routine extensively used in domains like machine learning or graph analytics. Despite its relevance, the efficient execution of SpGEMM on vector architectures is a relatively unexplored topic. The most recent algorithm to run SpGEMM on these architectures is based on the SParse Accumulator (SPA) approach, and it is relatively efficient for sparse matrices featuring several tens of non-zero coefficients per column as it computes C columns one by one. However, when dealing with matrices containing just a few non-zero coefficients per column, the state-of-the-art algorithm is not able to fully exploit long vector architectures when computing the SpGEMM kernel. To overcome this issue we propose the SPA paRallel with Sorting (SPARS) algorithm, which computes in parallel several C columns among other optimizations, and the HASH algorithm, which uses dynamically sized hash tables to store intermediate output values. To combine the efficiency of SPA for relatively dense matrix blocks with the high performance that SPARS and HASH deliver for very sparse matrix blocks we propose H-SPA(t) and H-HASH (t), which dynamically switch between different algorithms. H-SPA(t) and H-HASH(t) obtain \(1.24 \times\) and \(1.57 \times\) average speed-ups with respect to SPA respectively, over a set of 40 sparse matrices obtained from the SuiteSparse Matrix Collection. For the 22 most sparse matrices, H-SPA(t) and H-HASH(t) deliver \(1.42 \times\) and \(1.99 \times\) average speed-ups respectively. 3.18 Tuesday 24 January 2023 3.18.1 Speakers: Simon Martiel 3.18.2 Title: Calcul Quantique et HPC 3.18.3 Summary: Après une rapide présentation des activités R&D d’Atos sur le calcul quantique, nous développerons deux sujets à la frontière en calcul hautes performances et calcul quantique. Nous présenterons les récents efforts pour la définition d’un framework basé sur C++ pour l’intégration de kernel quantiques au sein d’un flot de calcul classique. Nous survolerons ensuite certaines problématiques liées à la définition de kernels quantiques pour l’algèbre linéaire. Dans un second temps, nous aborderons le sujet de la simulation de processeurs quantiques via un système HPC classique. Nous présenterons le problème de la contraction de réseaux de tenseurs, ses liens avec le calcul hautes performances, ainsi que des résultats récents basés sur des techniques de méthodes formelles quantiques. 3.19 Thursday 12 January 2023 3.19.1 Speakers: Brieuc Nicolas 3.19.2 Title: Comparing mixed-precision solving with low-rank compression 3.19.3 Summary: Comparisons between mixed precision and low rank compression in the sparse direct solver PaStiX. 4 Schedule 2022 • 15/12/2022 : [Abel] • 08/12/2022 : [Hayfa] • 01/12/2022 : [Anne-Cecile] • 24/11/2022 : [Alycia] • 17/11/2022 : [Mathieu] • 10/11/2022 : canceled • 03/11/2022 : SBAC-PAD 2022 conference (https://project.inria.fr/sbac2022/) • 27/20/2022 : canceled • 20/20/2022 : canceled • 13/10/2022 : [Yanfei] • 06/10/2022 : [Lucas] • 29/09/2022 : [Alycia] • 22/09/2022 : [Xunyi] • 30/06/2022 : internship presentations • 23/06/2022 : Sparse Days (canceled) • 16/06/2022 : [Nathanaël] • 09/06/2022 : SOLHARIS (canceled) • 02/06/2022 : lecture for "Graph Partitioning and Sparse Matrix Ordering using Reinforcement Learning and Graph Neural Networks" • 26/05/2022 : canceled (public holiday) • 19/05/2022 : canceled • 12/05/2022 : auditions PR UB (Pierre + Abdou) • 05/05/2022 : [Clément] • 28/04/2022 : canceled (spring holidays) • 21/04/2022 : introduction à l'apprentissage par renforcement (part3) • 14/04/2022 : [Alycia] • 07/04/2022 : canceled • 31/03/2022 : introduction à l'apprentissage par renforcement (part2) • 24/03/2022 : introduction à l'apprentissage par renforcement (part1) • 17/03/2022 : canceled • 10/03/2022 : canceled • 03/03/2022 : [Julia] • 24/02/2022 : canceled (winter holidays) • 17/02/2022 : [Julia] • 10/02/2022 : SOLHARIS and HPC Scalabe Ecosystem half-days • 03/02/2022 : [Grégoire] • 27/01/2022 : [Valentin] • 20/01/2022 : canceled • 13/01/2022 : canceled • 06/01/2022 : [Cristina] 4.1 Thursday 15 December 2022 4.1.1 Speakers: Abel Calluaud 4.1.2 Title: Combined runtime system and compiler for direct hierarchical solver 4.1.3 Summary: Previous work on the boundary element method for solving Maxwell's equation in frequency has pushed the limits in terms of problems reachable by direct approaches using compression techniques. The distributed direct solver developed at CEA relies on the approximation by hierarchical matrices to reduce both computational and memory costs. Although these developments have met a growing demand for increased simulation accuracy, there are still open problems to pursue these research efforts in an HPC context. We propose to develop and compare several approaches to adapt the granularity of hierarchical tasks and extract parallelism to exploit the multicore computational nodes associated with massively parallel architectures such as GPUs. 4.2 Thursday 08 December 2022 4.2.1 Speakers: Hayfa Tayeb 4.2.2 Title: MulTreePrio scheduler for task-based applications 4.2.3 Summary: Effective scheduling is crucial for task-based applications to achieve high performance in heterogeneous computing systems. We propose MulTreePrio, a scheduler based on trees. Tasks are assigned to available resources based on priority scores. These scores are computed using heuristics built according to a set of rules that the scheduler must respect. 4.3 Thursday 01 December 2022 4.3.1 Speakers: Anne-Cecile Orgerie (common seminar with STORM) 4.3.2 Title: Some wrong ideas about energy consumption of distributed systems 4.3.3 Summary Distributed systems, such as Cloud computing, are increasingly spanning worldwide, with digital services hosted all around the globe and often belonging to complex systems, utilizing many other services and hardware resources themselves. Along with this increase comes an alarming growth of Cloud devices and their related energy consumption. Despite the distributed systems’ complexity, understanding how they consume energy is important in order to hunt wasted Joules and reduce their environmental impact. This talk will deal with measuring the energy consumption of distributed systems and wrong ideas about this consumption that can be found in literature. 4.4 Thursday 24 November 2022 4.4.1 Speakers: Alycia Lisito 4.4.2 Title: Sparse direct solver analysis and optimization for MHD simulation 4.4.3 Summary: Pre-defense of Internship 4.5 Thursday 17 November 2022 4.5.1 Speakers: Mathieu Vérité 4.5.2 Title: A Data Distribution Scheme for Dense Cholesky Factorization on Any Number of Nodes 4.5.3 Summary: This work focuses on the case of the parallel and distributed execution of the Cholesky factorization on a dense matrix using identical computing resources. We consider the problem of distributing a matrix divided into tiles to a set of nodes in order to reduce the overall volume of communication. The recently introduced Symmetric Block Cyclic (SBC) distribution is a pattern-based distribution that takes advantage of the symmetry of the input matrix \(A\) to reduce the volume of communication generated when performing Cholesky factorization compared to classical 2D Block Cyclic. Experimental results show that the overall communication reduction allows to achieve higher performance. However, SBC distribution scheme is only available for specific values of the number of nodes \(P\). In this work, we propose a greedy algorithm, Greedy ColRow & Matching (GCR&M), which enables to define pattern-based distributions with a structure similar to SBC and which achieves the same communication reduction but for any \(P\). It thus allows a generalization of distributions specifically adapted to the Cholesky factorization and more generally to kernels using a symmetric input. We present a theoretical evaluation of the distributions provided by the GCR&M algorithm. The flexibility and ease of programming induced by task-based runtime systems allowed us to carry out experiments to test out those distributions using Chameleon and StarPU. Results show that the solutions provided by GCR&M algorithm manage to achieve performance similar to SBC while being available for any \(P\). 4.6 Thursday 13 October 2022 4.6.1 Speakers: Yanfei Xiang 4.6.2 Title: Hybridizing deep learning solver and numerical linear algebra for the Helmholtz equations 4.6.3 Summary: In the recent years, research on scientific machine learning based on deep learning solvers has been increasingly applied to scientific computing and computational engineering. However, even though these newly data-driven deep learning solvers could be very effective as soon as they have been properly trained, they can (in general) provide us with a solution of limited accuracy. Furthermore, the computational cost in the training phase can be extremely expensive. In this talk, we presents some ways of hybridizing the newly emerging deep learning solvers and the more traditional numerical linear algebra techniques to let them benefit from each other. In the context of solving a heterogeneous Helmholtz equation, we first focus on introducing some mathematical ingredients from classical iterative solver into the training phase of a recently proposed deep neural network solver. The main benefit is a significant improvement in the training phase that is more robust and faster, which turns out to be applicable to the testing process as well. Furthermore, once the network solvers have been properly trained, their inferences can be applied as a nonlinear preconditioner in the traditional flexible GMRES and flexible FOM methods. This part demonstrates that these hybrid variants have clear advantages over both the newly emerging deep neural network approach and the classical iterative Krylov solver in terms of both computational cost and accuracy of the computed solution. 4.7 Thursday 06 October 2022 4.7.1 Speakers: Lucas Nesi (common seminar with STORM) 4.7.2 Title: Exploiting system-level heterogeneity to improve the performance of multi-phase task-based applications 4.7.3 Summary: HPC infrastructures often present intra-node (multi-core CPUs and multiple GPUs) and system-level heterogeneity (different nodes arranged into partitions). This heterogeneity provides opportunities for the applications to improve performance, especially in task-based multi-phase applications where each phase has different needs. Improving phases overlap and asynchronously, taking advantage of inter-node heterogeneity to better distribute the load, and adequately selecting the number of resources to use are examples of such opportunities. In this talk, we present strategies for (1) improving applications phase overlap by optimizing runtime and scheduling decisions; (2) computing a distribution considering phases' suitability over heterogeneous nodes while reducing the redistribution overhead; (3) strategies based on the Gaussian Process for the application to dynamically learn during execution time and adapt to the best set of heterogeneous nodes it has access. 4.8 Thursday 29 September 2022 4.8.1 Speakers: Alycia Lisito 4.8.2 Title: Distributed interface for PaStiX 4.9 Thursday 22 September 2022 4.9.1 Speakers: Xunyi Zhao 4.9.2 Title: Rockmate: to save memory by combining Rotor and Checkmate 4.10 Thursday 30 June 2022 4.10.1 Speakers: • Guillaume Bienfait : QemuNet to QemuWeb • Jean-Alexandre Collin : Reducing communications on a dense Cholesky factorization • Théotime Le Hellard : Rematerializing Optimally with pyTORch • Brieuc Nicolas : Mixed precision in PaStiX solver 4.10.2 Title: Internship presentations 4.10.3 Summary: 4.11 Thursday 16 June 2022 4.11.1 Speakers: Nathanaël Fijalkow 4.11.2 Title: Gragh Neural Networks 4.11.3 Summary: 4.12 Thursday 02 June 2022 4.12.1 Speakers: Alice Gatti (video) 4.12.2 Title: Graph Partitioning and Sparse Matrix Ordering using Reinforcement Learning and Graph Neural Networks 4.13 Thursday 05 May 2022 4.13.1 Speakers: Clément Richefort 4.13.2 Title: Interpolator built from local near-kernel spaces in multigrid methods applied to Helmholtz equation 4.13.3 Summary: Numerical simulations of various physical phenomenons give rise to the resolution of a potentially very large system of linear equations Ax=b. Multigrid methods are known to be scalable and quasi-optimal for solving such a system in many classes of problems. In this method, computation of x is accelerated thanks to a collection of coarse problems. The solution on the coarsest level is computed by a direct method, while few iterations of a given smoother is used on others to refine the approximation made by restriction and interpolation. Such a complementarity principle between a smoother targeting residual information and a direct method solving the near-kernel space of A guarantees the convergence of the method. In elliptic problems, it is easy to find appropriates smoother and interpolators to reach this complementarity since A is positive and has a smooth near-kernel space. However for indefinite problems like Helmholtz, smoothness property of near-kernel space and positiveness of resulting matrix can not be called to reach complementarity : usual smoothers amplify negative components while usual interpolators are not designed to catch wave-like near-kernel space anymore. Working on normal equations or executing Krylov iterations are good alternatives to usual smoothers in indefinite context, however finding appropriate operators to target near-kernel space is still an open question. From definition a theoretical Ideal Interpolator, this presentation will expose a new type of interpolator permitting to reach convergence in a constant number of iterations independently of matrix size with 2 or 3 multigrid levels. This interpolator works by combining a smoothing matrix to damp oscillatory components, an approximation of the Fine-Relaxation operator used in Ideal Interpolator, and a tentative interpolator built in order to target local near-kernel spaces computed from blocks of A. 4.14 Thursday 21 Appril 2022 4.14.1 Speakers: Grégoire Passault (Robhan team) 4.14.2 Title: Introduction à l'apprentissage par renforcement (part3) 4.15 Thursday 14 Appril 2022 4.15.1 Speakers: Alycia Lisito 4.15.2 Title: Validation and evaluation of the Chameleon Lapack interface 4.15.3 Summary: Chameleon is a dense linear algebra library that aims at replacing the standard APIs Lapack and Sclapack for distributed and heterogeneous architectures. It relies on tiled algorithms implemented on top of various runtime systems such as StarPU, Quark, Parsec, or OpenMP. The Chameleon library provides three interfaces to the user for more flexibility: an interface close to Lapack API to ease the switch from Lapack to Chameleon to the users, a tile interface to match the algorithm structure, and a tile asynchronous interface to enable pipelined algorithms. Over the time, support of the Lapack interface was not sustained and leftover from the conversion from Plasma was still impacting the performance of this interface. The objective of my internship was twofold, add checking functionnality to fix potential issues and ease its support, and evaluate the performance impact of the out-of-place copy to tile layout with respect to an in-place solution. During this talk, I will recall the principles of the tile algorithms in Chameleon, the respective data layout of Lapack and Chameleon and how we can implement a Lapack interface on top of the chameleon algorithm. I'll finish by presenting the evaluation of the performance of the library on multiple architectures from plafrim. 4.16 Thursday 31 March 2022 4.16.1 Speakers: Grégoire Passault (Robhan team) 4.16.2 Title: Introduction à l'apprentissage par renforcement (part2) 4.17 Thursday 24 March 2022 4.17.1 Speakers: Grégoire Passault (Robhan team) 4.17.2 Title: Introduction à l'apprentissage par renforcement (part1) 4.18 Thursday 03 March 2022 4.18.1 Speakers: Julia Gusak - Skoltech 4.18.2 Title: Neural Networks Memory Footprint Reduction during Training 4.18.3 Summary: Modern Deep Neural Networks (DNNs) require significant memory to store weights, activations, and other intermediate tensors during training. Hence, many models do not fit in the memory of one GPU device or can be trained using only a small per-GPU batch size. We will discuss two new approaches two reduce memory consumption during training by performing activation quantization or approximate matrix multiplication during backward pass. 4.19 Thursday 17 February 2022 4.19.1 Speakers: Julia Gusak - Skoltech 4.19.2 Title: Neural Networks Speed-up and Compression 4.19.3 Summary: Most modern neural networks are overparameterized and exhibit high computational costs. As a result, they can’t be efficiently deployed on embedded systems and mobile devices due to power and memory limitations. We present several approaches based on low-rank approximations to accelerate and compress neural networks during inference. Also, the memory footprint is one of the main limiting factors for large neural network training. And we discuss several techniques to reduce memory costs during the training phase based on approximate gradients computation during backpropagation. 4.20 Thursday 10 February 2022 4.20.1 Title: SOLHARIS and HPC Scalabe Ecosystem half-days 4.20.2 Wednesday 9/2 (9:45 - 12:45) • Scheduling algorithms 10:00 - 11:00 [chair: Loris Marchal] □ Maxime Gonthier "Memory-Aware Scheduling of Tasks Sharing Data on Multiple GPUs" □ Mathieu Verité / Lionel Eyraud-Dubois "I/O-Optimal Algorithms for Symmetric Linear Algebra Kernels" • Programming models and tools 11:15 - 12:45 [chair: Olivier Aumage] □ Marek Felsoci "An energy consumption study of coupled solvers for FEM/BEM linear systems: preliminary results" □ Philippe Swartvagher "Recent progress on starpu+nmad" □ Antoine Jego "Task-based programming models for scalable algorithms" 4.20.3 Thursday 10/2 (9:45 - 11:30) • Application 10:00 - 11:30 [chair: Vincent Perrier] □ Grégoire Pichon / Mathieu Faverge "Deciding Non-Compressible Blocks in Sparse Direct Solvers using Incomplete Factorization" □ Sangeeth Simon "Task Based Parallelization Of A Finite-Volume Code For Hyperbolic Conservation Laws" □ Romain Peressoni "Approximating MDS point clouds shape using only part of the distance matrix" 4.21 Thursday 03 February 2022 4.21.1 Speakers: Grégoire Pichon 4.21.2 Title: Trading Performance for Memory in Sparse Direct Solvers using Low-rank Compression 4.21.3 Summary: Sparse direct solvers using Block Low-Rank compression have been proven efficient to solve problems arising in many real-life applications. Improving those solvers is crucial for being able to 1) solve larger problems and 2) speed up computations. A main characteristic of a sparse direct solver using low-rank compression is at what point in the algorithm the compression is performed. There are two distinct approaches: (1) all blocks are compressed before starting the factorization, which reduces the memory as much as possible, or (2) each block is compressed as late as possible, which usually leads to better speedup. Approach 1 reaches a very small memory footprint generally at the expense of a greater execution time. Approach 2 achieves a smaller execution time but requires more memory. In this talk, we design a composite approach, to speedup computations while staying under a given memory limit. This should allow to solve large problems that cannot be solved with Approach 2 while reducing the execution time compared to Approach 1. We propose a memory-aware strategy where each block can be compressed either at the beginning or as late as possible. We first consider the problem of choosing when to compress each block, under the assumption that all information on blocks is perfectly known, i.e., memory requirement and execution time of a block when compressed or not. We show that this problem is a variant of the NP-complete Knapsack problem, and adapt an existing approximation algorithm for our problem. Unfortunately, the required information on blocks depends on numerical properties and in practice cannot be known in advance. We thus introduce models to estimate those values. Experiments on the PaStiX solver demonstrate that our new approach can achieve an excellent trade-off between memory consumption and computational cost. For instance on matrix Geo1438, Approach 2 uses three times as much memory as Approach 1 while being three times faster. Our new approach leads to an execution time only 30% larger than Approach 2 when given a memory 30% larger than the one needed by Approach 1. 4.22 Thursday 27 January 2022 4.22.1 Speakers: Valentin Le Fèvre 4.22.2 Title: Resilient algorithms in HPC and linear algebra for new architectures 4.22.3 Summary: In this talk, I will present various research topics on which I was involved during my PhD (at ENS de Lyon) or my post-doc position (at BSC). The first axis will be resilience for large-scale platforms. We will review different techniques for resilience, mainly checkpointing and replication, with a focus on process replication. Process replication allows an application to survive many errors, allowing to use checkpointing less frequently. We will study two different strategies: one which restarts processes at each checkpoint and one where processes are restarted only after a crash. The second axis will be sparse linear algebra. I will present a recent work on sparse Cholesky factorization, where the application is represented as a task-graph. We define thresholds to automatically adapt the granularity of the tasks depending on the input matrix, and present experimental results using the recent A64FX processor and OmpSs runtime. I will conclude with some on-going perspectives for processors based on the Risc-V architecture, in the context of the European Processor Initiative (EPI). 4.23 Thursday 06 January 2022 4.23.1 Speakers: Dr. Cristina Boeres, Instituto de Computação, Universidade Federal Fluminense, Brazil 4.23.2 Title: Towards Analyzing Computational Costs of Spark for SARS-CoV-2 Sequences Comparisons on a Commercial Cloud 4.23.3 Summary: This talk introduces the development of a Spark application, named Diff Sequences Spark which compares 540 SARS-CoV-2 sequences from South America in Amazon EC2 Cloud. The work analyzed the performance of the proposed application on selected memory and storage EC2 virtual machines instances (VMs) at on-demand and spot markets. Some preliminary conclusions were drawn: the execution times and financial costs of the memory-optimized VMs outperformed the storage optimized ones and the average execution times and monetary costs were reduced on spot VMs compared to their respective on-demand even in scenarios with several spot revocations. 4.23.4 Bio: Cristina Boeres is an associate professor at the Instituto de Computação (IC) of the Universidade Federal Fluminense (UFF), in Brazil. Her current research interests focus on various aspects of parallel and distributed computing in grids and clouds, including autonomic computing, scientific applications, resource management, scheduling, and fault tolerance. She has been involved with the organization of conferences such as the International Symposium on Computer Architecture and High Performance Computing, sponsored by IEEE and SBC, the Brazilian Symposium on High Performance Computational Systems, sponsored by SBC, and also, IPDPS 2019 held in Rio de Janeiro, Brazil. 5 Schedule 2021 • 16/12/2021 : [Clement] • 09/12/2021 : [Vinod] • 02/12/2021 : canceled (talk by Karim replanned in January, common WG with HiePACS) • 24/11/2021 : canceled (replaced by the overview of the Energy[scope] tool) • 18/11/2021 : Informal discussions around energy saving • 11/11/2021 : canceled (public holiday) • 04/11/2021 : canceled (university break) • 28/10/2021 : [Mathieu] • 21/10/2021 : [Alena] • 14/10/2021 : [Jean-François] • 07/10/2021 : [Xunyi] • 30/09/2021 : [Clement] • 23/09/2021 : Internship defenses (Nolan Bredel & Tom Moënne-Loccoz) • 16/09/2021 : canceled • 09/09/2021 : Coffee-break / Go-around • 08/07/2021 : moliere / skoltech-inria meeting • 02/07/2021 : solharis / hpc-scalable-ecosystem day • 24/06/2021 : [Olivier] • 17/06/2021 : [Lionel] • 10/06/2021 : [Nolan] • 04/06/2021 : [Clement] • 27/05/2021 : [Pavel] • 20/05/2021 : [Suraj] • 06/05/2021 : [Olivier] • 12/04/2021 : [Gwenole] • 08/04/2021 : Data Parallelism - Model Parallelism - Hybrid Parallelism (Fidle team, CNRS) • 01/04/2021 : Keras and Convolutional Neural Networks (Fidle team, CNRS) • 25/03/2021 : History and basic concepts of neural networks (Fidle team, CNRS) • 18/03/2021 : [Mathieu] • 11/03/2021 : [Hervé] • 04/03/2021 : Présentation de PlaFRIM • 25/02/2021 : [Mathieu] • 18/02/2021 : Winter Holidays • 11/02/2021 : [Tony] • 04/02/2021 : [Alena] • 28/01/2021 : [Esra] 5.1 Thursday 16 December 2021 5.1.1 Speakers: Clement Richefort 5.1.2 Title: Towards approximating ideal interpolator for multigrid methods applied to indefinite Helmholtz equation 5.1.3 Summary: The indefiniteness of Helmholtz equation makes usual multi-level schemes unable to converge. Geometric and algebraic interpolators permitting to propagate the solution between each space don't give a good approximation of near-kernel components (NKC). A first idea is to build iterative methods combining local near-kernel spaces for targeting the global one, and use them as smoothers between each level for correcting the bad approximation of usual interpolators. Another approach tries to approximate the so-called "ideal interpolator" which gives good convergence results but requires to inverse a potentially large sub-matrix of the system, making it hard to compute and very dense. Theoretical attempts of this "ideal interpolator" approximation will be presented. 5.2 Thursday 09 December 2021 5.2.1 Speakers: Dr. Vinod Rebello, Instituto de Computação, Universidade Federal Fluminense, Brazil 5.2.2 Title: Managing Vertical Memory Elasticity in Containers 5.2.3 Summary: Efficient resource utilization and throughput maximization are just two important objectives for service providers trying to reduce operating costs, for example in clusters, cloud data centers, and even cloudlets at the edge. While containers consume CPU, memory, and I/O resources elastically, orchestration frameworks must still allocate and schedule containers according to resource availability and limit the amount of resources that each can use to avoid adverse interference. In this short talk, I will present a tool to autonomically manage container memory allocations dynamically in order to increase the average number of containers that can be hosted on a server. Evaluations show that through the careful adjustments of memory limits, manipulation of pages between memory and swap, and the use of container preemption, improvements in memory utilization, cloud costs, and job throughput can be achieved without prejudicing container performance. 5.2.4 Bio: Vinod Rebello is an associate professor at the Instituto de Computação (IC) of the Universidade Federal Fluminense (UFF), in Brazil. His current research interests focus on various aspects of parallel and distributed computing in grids and clouds, including autonomic computing, scientific applications, resource management, scheduling and fault tolerance, and cybersecurity. He has experience in high performance computing, having been responsible for HPC services at IC-UFF and participated in several research projects funded through the European Union-Brazil Cooperation Program in ICT. Dr Rebello is an Associate Editor of the Journal of Parallel and Distributed Computing, a member of the ACM, IEEE and SBC (Brazilian Computing Society) and has been involved in the organisation of a number of their conferences, including acting as General Chair of IPDPS 2019. 5.3 Thursday 18 Novembre 2021 5.3.1 Speakers: /all 5.3.2 Title: Informal discussions around energy saving 5.4 Thursday 28 October 2021 5.4.1 Speakers: Mathieu Vérité 5.4.2 Title: Parallel distributed Cholesky factorization 5.4.3 Summary: In this work, we consider the parallel distributed Cholesky factorization on a set of homogeneous nodes. Our contribution is of two different natures. First, we provide a new parametric lower bound with an improved scaling factor on the amount of data that must be exchanged during the factorization, for a fixed number of nodes and a fixed memory size. This improvement of the bound is based on a specific treatment of data reuse in symmetric kernels such as the Cholesky factorization. Then, based on this work on lower bounds, we propose a new data distribution scheme, called Symmetric Block Cyclic, in replacement of the 2D and 3D Block Cyclic distributions for symmetric dense matrices. The Symmetric Block Cyclic distribution reduces the communication volume while keeping an excellent load balancing during the computation. Experiments with multicore nodes using the chameleon library show that this reduction of the communication volume effectively induces a reduction of the computing time both for the parallel distributed Cholesky factorization, as well as for subsequent Cholesky solves or the Cholesky inversion. 5.5 Thursday 21 October 2021 5.5.1 Speakers: Alena Shilova 5.5.2 Title: Efficient Combination of Rematerialization and Offloading for Training DNNs 5.5.3 Summary: Rematerialization and offloading are two well-known strategies to save memory during the training phase of deep neural networks, allowing data scientists to consider larger models, batch sizes or higher resolution data. Rematerialization trades memory for computation time, whereas Offloading trades memory for data movements. As these two resources are independent, it is appealing to consider the simultaneous combination of both strategies to save even more memory. We precisely model the costs and constraints corresponding to Deep Learning frameworks such as PyTorch or Tensorflow, we propose optimal algorithms to find a valid sequence of memory-constrained operations and finally, we evaluate the performance of proposed algorithms on realistic networks and computation platforms. Our experiments show that the possibility to offload can remove one third of the overhead of rematerialization, and that together they can reduce the memory used for activations by a factor 4 to 6, with an overhead below 20%. 5.6 Thursday 14 October 2021 5.6.1 Speakers: Jean-François David 5.6.2 Title: Realize an adaptive sampling algorithm for TFM imaging on GPU 5.6.3 Summary: Non-destructive testing (NDT) is a set of methods to characterize the state of integrity of structures or materials without having recourse to their degradation. These verifications are generally performed by an operator using various instruments including ultrasonic and electromagnetic translators. These are characterized by the emission of a signal that interacts with the part to be inspected. The signal refracted towards the translator carries the signature of the specificities of the structure and the defects that may be present. This last signal must then be interpreted in order to make a decision as to the presence or absence of an or not of an anomaly. The non-destructive testing by ultrasound linked to the use of multi-element translators allows very efficient reconstruction methods such as the Total Focusing Method (TFM), which remains however greedy in computing time. In order to overcome this problem, we propose a method based on barycentric interpolation: from a mesh, we build an uneven mesh (called adaptive) that adapts to the topology of the area to be reconstructed, thus minimizing the interpolation errors. 5.7 Thursday 7 October 2021 5.7.1 Speakers: Xunyi Zhao 5.7.2 Title: Regularization Effect of Dropout 5.7.3 Summary: As one of the most important regularization methods in deep learning, dropout has never been fully understood. The result that dropout does not work on the small datasets motivates us to study the regularization behavior of dropout. In this project, we try to understand how dropout regularizes deep learning models, especially what makes dropout not work with small datasets. Our focus lies in the classification task with fully connected deep neural networks, where standard dropout is applied in the training stage. 5.8 Thursday 30 September 2021 5.8.1 Speakers: Clement Richefort 5.8.2 Title: Multigrid methods applied to the Helmholtz equation 5.8.3 Summary: Afin d'améliorer la surface équivalente radar de différentes plateformes, le CEA développe des codes de calcul simulant le comportement électromagnétique d’objets 3D complexes. L’un de ces codes couple une méthode éléments finis à une équation intégrale (utilisée comme condition de rayonnement exacte). La partie éléments finis est aujourd’hui traitée par une méthode de décomposition de domaine, chaque sous-domaine étant résolu à l’aide d’un solveur direct pour matrices creuses. Cependant, avec l’augmentation des capacités de calcul du CEA, les limitations de cette méthode apparaissent : l’augmentation du nombre de sous-domaines tend à dégrader la convergence de la méthode, et l’augmentation des tailles des sous-domaines rend l’utilisation d’un solveur direct compliquée, à cause du manque de scalabilité de ce type de solveur. Ce travail s'inscrit dans une première étape de recherche d'une méthode alternative à la décomposition de domaine : les méthodes multigrilles. Le principe de cette méthode est l’utilisation d’une collection de problèmes grossiers qui permettent d’accélérer le calcul de la solution fine. C’est une méthode itérative, qui est éprouvée pour des problèmes elliptiques, et ayant une scalabilité optimale (la résolution est en \(O(n)\), \(n\) étant le nombre d’inconnues du problème). Cependant, il est connu que les méthodes multigrilles ne sont pas performantes pour des problèmes à noyaux oscillants tels que l’électromagnétisme ou l’acoustique (ces équations conduisant à des problèmes indéfinis). L'objectif du stage est donc de faire une première analyse de la méthode pour l'équation de Helmholtz (cas acoustique). 5.9 Thursday 23 September 2021 5.9.1 Speaker : Nolan Bredel 5.9.2 Title: Integration of low rank support in distributed memory in PaStiX 5.9.3 Summary: At the beginning of the internship, the low rank support in PaStiX was not available in distributed memory. We will present how it has been implemented for the static and dynamic schedulers and for 5.9.5 Speaker : Tom Moënne-Loccoz 5.9.6 Title: Integration of Heteroprio scheduler for GPU performance into PaStiX-StarPU. 5.9.7 Summary: Performance comparisons between PaStiX-StarPU and PaStiX-PaRSEC with GPUs show a possibility for improvement on StarPU's side. The objective of the internship is to increase these performances through the integration of the Heteroprio scheduler into PaStiX-StarPU. 5.10 Thursday 09 September 2021 5.10.1 Title: Coffee-break - Go-around the table 5.11 Thursday 08 July 2021 5.11.1 Title: Inria Skoltech Moliere Associated Team 5.12 Friday 02 July 2021 5.12.1 Title: SOLHARIS and HPC Scalabe Ecosystem half-days 5.12.2 Middleware 9:30 - 10:30 - chair Bora Uçar • Philippe Swartavagher: Interferences between Communications and Computations in Distributed HPC Systems • Antoine Jego: Task-based programming models for scalable distributed memory parallel algorithms • Arthur Chevalier: HPC-Big Data Convergence: A library to apply highly parallel algorithms on big data clusters 5.12.3 Scheduling: 11:00-11:40 - chair Lionel Eyraud-Dubois • Maxime Gonthier: Locality-Aware Scheduling of Independent Tasks for Runtime Systems • Mathieu Vérité: Makespan lower bound combining communication and computation for dynamic task allocation on homogeneous resource: dense Cholesky factorization test case 5.12.4 Applications: 12:00-13:00 - chair Cédric Augonnet • Marek Felsoci: Towards memory-aware multi-solve two-stage solver for coupled FEM/BEM systems • Sangeeth Simon: Task-based parallelization of a multi-dimensional, higher order, finite volume code for the Euler flows • Romain Peressoni: Folding as a fast heuristic to compare point clouds 5.13 Thursday 24 June 2021 5.13.1 Speaker: Olivier Beaumont [with Thomas Lambert, Loris Marchal, Bastien Thomas] 5.13.2 Title: Data-Locality Aware Dynamic Schedulers for Independent Tasks with Replicated Inputs 5.13.3 Summary: Might be related to Qarnot Computing problems ! 5.14 Thursday 17 June 2021 5.14.1 Speaker: Lionel Eyraud-Dubois 5.14.2 Title: Rotor Tutorial 5.14.3 Summary: I will present a tutorial on how to use Rotor to perform Deep Learning with PyTorch, while limiting the memory usage. 5.15 Thursday 10 June 2021 5.15.1 Speaker: Nolan Bredel 5.15.2 Title: Présentation des nouvelles fonctionnalités dans ViTE 5.15.3 Summary: Un PFA sur ViTE a été réalisé cette année à l'Enseirb dans le but de le mettre à jour et d'ajouter plusieurs fonctionnalités. Ce travail s'est articulé autour de 3 axes : • intégration du moteur de rendu Vulkan, • intégration d'un plugin de statistique sur la trace, • mise à jour du plugin de visualisation de matrice. 5.16 Friday 04 June 2021 5.16.1 Speaker: Clément Gavoille (TADaaM Talk) 5.16.2 Title: Modélisation et projection de performances d’applications parallèles sur environnement ARM 5.16.3 Summary: L’évolution des architectures de processeurs rend la prédiction et l’évaluation des performances d’une application parallèle complexes. En effet, l’augmentation du nombre de cœurs de calcul, la multiplication des unités vectorielles et l’organisation interne du réseau en maillage influencent grandement le comportement des processeurs. Néanmoins, afin de préparer l’arrivée de futures machines, il est nécessaire d’avoir une idée des performances (e.g., temps d’exécution ou nombre d’opérations flottantes par seconde) des applications parallèles actuelles sur les futures générations de processeurs. Dans ce cadre, le CEA et ARM souhaitent développer une méthodologie de prédiction de performance : étant donné le comportement d’une application parallèle sur un processeur existant et les caractéristiques d’un processeur hypothétique, le but est de prédire les performances de cette application sur ce dernier. L’objectif de cette thèse est de définir un modèle de projection de performance d’applications parallèles basé sur l’évolution des processeurs incluant les changements des unités de calcul (par exemple, jeu d’instructions SVE), l’évolution du nombre de cœurs et l’augmentation des capacités mémoire (DDR, HBM, …). Ceci passera par l’expérimentation sur des machines existantes afin de valider le modèle et sur des variations de processeurs connus (changement de fréquence, changement du nombre de cœurs…). Cette phase permettra alors d’étudier l’impact architectural sur les performances. Une fois cette première étude effectuée, il sera alors possible de faire évoluer ce modèle pour raffiner les prédictions de performances notamment en cas de changements micro-architecturaux (ce qui est le cas lors d’un changement entre générations quasi-identique). 5.17 Thursday 27 May 2021 5.17.1 Speaker: Pavel Kus, Berenger Bramas 5.17.2 Title: Towards a parallel domain decomposition solver for immersed boundary finite element method 5.18 Thursday 20 May 2021 5.18.1 Speaker: Suraj Kumar 5.18.2 Title: Parallel Tensor Train through Hierarchical Decomposition 5.18.3 Summary: We consider the problem of developing parallel decomposition and approximation algorithms for high dimensional tensors. We focus on a tensor representation named Tensor Train (TT). It stores a d-dimensional tensor in O(ndr^2), much less than the O(n^d) entries in the original tensor, where 'r' is usually a very small number and depends on the application. Sequential algorithms to compute TT decomposition and TT approximation of a tensor have been proposed in the literature. We propose a parallel algorithm to compute TT decomposition of a tensor. We prove that the ranks of TT-representation produced by our algorithm are bounded by the ranks of unfolding matrices of the tensor. Additionally, we propose a parallel algorithm to compute approximation of a tensor in TT-representation. Our algorithm relies on a hierarchical partitioning of the dimensions of the tensor in a balanced binary tree shape and transmission of leading singular values of associated unfolding matrix from the parent to its children. We consider several approaches on the basis of how leading singular values are transmitted in the tree. We present an in-depth experimental analysis of our approaches for different low rank tensors and also assess them for tensors obtained from quantum chemistry simulations. Our results show that the approach which transmits leading singular values to both of its children performs better in practice. Compression ratios and accuracies of the approximations obtained by our approaches are comparable with the sequential algorithm and, in some cases, even better than that. 5.19 Thursday 6 May 2021 5.19.1 Speaker: Olivier Beaumont, Lionel Eyraud-Dubois, Alena Shilova 5.19.2 Title: Pipelined Model Parallelism: Complexity Results and Memory Considerations 5.19.3 Summary: The training phase in Deep Neural Networks has become an important source of computing resource usage and because of the resulting volume of computation, it is crucial to perform it efficiently on parallel architectures. Even today, data parallelism is the most widely used method, but the associated requirement to replicate all the weights on the totality of computation resources poses problems of memory at the level of each node and of collective communications at the level of the platform. In this context, the model parallelism, which consists in distributing the different layers of the network over the computing nodes, is an attractive alternative. Indeed, it is expected to better distribute weights (to cope with memory problems) and it does not imply large collective communications since only forward activations are communicated. However, to be efficient, it must be combined with a pipelined / streaming approach, which leads in turn to new memory costs. The goal of this paper is to model these memory costs in detail, to analyze the complexity of the associated throughput optimization problem under memory constraints and to show that it is possible to formalize this optimization problem as an Integer Linear Program (ILP). 5.19.5 Ressources: 5.20 Thursday 12 April 2021 5.20.1 Speaker: Gwenolé Lucas (STORM Talk) 5.20.2 Title: Programming heterogeneous architectures using divisible tasks 5.20.3 Summary: Task-based runtime systems are used to face the prevalence of heterogeneous architectures and exploit their computational power. As such, StarPU handles the communications and scheduling operations of the task-graphs users provide. However, it is confronted to some limitations, as it works on graphs that are entirely and sequentially submitted at startup, leading to a significant overhead and a static graph. To address these problems, we introduce hierarchical tasks to StarPU, which can insert subgraphs when they are executed. This approach allows for more dynamic graphs, that can evolve at runtime as subgraphs are inserted, for example to modify its granularity locally in order to meet the optimal grain of the different nodes. Similarly, submitting the initial graph with a large grain (and therefore fewer tasks) and then refining it to a smaller grain at runtime by adding subgraphs, could help reduce the aforementioned overhead, as the subgraphs of independent hierarchical tasks can be submitted in parallel. 5.21 Thursday 8 April 2021 5.21.1 Speaker: Bertrand Cabot (IDRIS) 5.21.2 Title: Data Parallelism - Model Parallelism - Hybrid Parallelism 5.21.3 Summary: • We will continue the activity on Deep Neural Networks and we will watch a part of the Video from Bertrand Cabot about parallelism in DNN. • Then, Rémi will summarize the beginning of the session 7 with information about the CNRS/IDRIS Jean Zay calculation plateform. 5.21.4 Ressources: 5.22 Thursday 1 April 2021 5.22.1 Speaker: Jean-Luc Parouty 5.22.2 Title: Keras and Convolutional Neural Networks 5.22.3 Summary: • We will continue the activity on Deep Neural Networks and we will watch a video from Jean-Luc Parouty about Convolutional Neural Networks (sequence 2.2) with practical examples using Keras. • Then, Rémi will summarize the end of the session 2 with information about how to explore hyper-parameters with Keras. • It is not absolutely necessary, but everyone should be able to watch the introduction on CNN (sequence 2.1). 5.22.4 Ressources: 5.23 Thursday 25 March 2021 5.23.1 Speaker: Jean-Luc Parouty 5.23.2 Title: History and basic concepts of neural networks 5.23.3 Summary: • As a prerequisite, everyone should be able to watch the introduction of the class (7min) and the one on the tools (18min). • The video 1.2 "De la régression linéaire au neurone artificiel" (22min) will be played at the beginning of the working group for the first reactions or questions. • Depending on the remaining time we will also watch the following 1.3 "La controverse des neurones". 5.23.4 Ressources: 5.24 Thursday 18 March 2021 5.24.1 Speaker: Mathieu Faverge 5.24.2 Title: Large scale SVD using polar decomposition 5.24.3 Summary: 5.25 Thursday 11 March 2021 5.25.1 Speaker: Hervé Mathieu 5.25.2 Title: Efficacité énergétique d'un programme en HPC 5.25.3 Summary: La page donne des éléments sur ce qui a amené à (et ce qu'est) l'outil `energy[scope]`. 5.26 Thursday 4 March 2021 5.26.1 Speaker: Julien Lelaurain 5.26.2 Title: Connaissez vous la Plateforme Fédérative pour la Recherche en Informatique et Mathématiques (PlaFRIM) ? 5.26.3 Summary: Au programme : • Présentation générale • Présentation technique • Prise en main de SLURM et des modules • Prise en main de GUIX Pour en savoir plus sur la plateforme, c'est ici https://www.plafrim.fr/ ! 5.27 Thursday 25 February 2021 5.27.1 Speaker: Mathieu Vérité 5.27.2 Title: Effect Of Communications On Dynamic Allocation For Distributed Memory Execution Of Dense Tiled Linear Algebra Operations: Cholesky Factorization 5.27.3 Summary: In the context of large scale linear algebra operations, distributed execution on several computation resources require careful distribution of data since transfer over the network connecting those resources is often more costly than computation tasks. Though static allocations currently is the only practical implementation for distributed resolution, one can consider the potential benefit of applying dynamic runtime scheduler strategies at resource level to simultaneously handle the concurrent difficulty of balancing load between resources while minimizing data transfers under contention constraint. Using Cholesky factorization as a study case, we propose to investigate this question by testing out StarPU dynamic allocation and scheduling strategies in simple configurations mimicking typical architectures of parallel computation platform at node or global level. The performance of the strategies are evaluated in comparison with various bounds based on considerations about load balancing and minimum necessary data transfers. Results tend to show that even straightforward adaptations of StarPU allocation and scheduling strategies can outperform the classical 2D block cyclic method which is yet the default data allocation strategy widely used for dense tiled cases. The good performance of some dynamic schedulers may help lead the way to better designed 2.5D or 3D static allocation strategies. On a side notice, the experiments conducted in this study put some focus on the lack of a theoretical bound taking into account both workload and data transfers in the case of distributed execution on several homogeneous resources sharing a communication medium. 5.28 Thursday 11 February 2021 5.28.1 Speaker: Tony Delarue 5.28.2 Title: Hands-on tutorial on PaStiX with GPU 5.28.3 Summary: As the core of a large number of simulation tools, the resolution of large linear systems often represents the dominant part of the computing time. Massively parallel versions are needed to maintain advances in multi-physics and multi-scale simulations, especially when targeting exascale platforms. The aim is therefore to address the major challenge of designing and building numerically robust solvers on runtime systems that can scale up and push back the limits of existing industrial codes of the EoCoE project. In this talk, we will present a hands-on tutorial on the PaStiX solver about the capacity to exploit modern GPU accelerators through runtime systems. 5.29 Thursday 4 February 2021 5.29.1 Speaker: Alena Shilova 5.29.2 Title: Memory Saving Strategies for Deep Neural Network Training 5.29.3 Summary: Training Deep Neural Networks is known to be an expensive operation, both in terms of computational cost and memory load. Indeed, during training, all intermediate layer outputs (called activations) computed during the forward phase must be stored until the corresponding gradient has been computed in the backward phase. These memory requirements sometimes prevent to consider larger batch sizes and deeper networks, so that they can limit both convergence speed and accuracy. Recent works have proposed several ways to deal with this issue. The first one consists in discarding some activations and then recompute them later when needed, which is also known as Rematerialization. The second one is Offloading some of the computed forward activations from the memory of the GPU to the memory of the CPU. Both approaches require a careful choice of candidates among activations, which should be recomputed/offloaded and then scheduling operations/transfers so that the overhead was small. For both problems we present our solutions and we compare our solutions with previous state-of-the-art, showing that they achieve much better performance in a wide variety of situations. 5.30 Thursday 28 January 2021 5.30.1 Speaker: Esragul Korkmaz 5.30.2 Title: Deciding Non-Compressible Blocks in Sparse Direct Solvers using Incomplete Factorization 5.30.3 Summary: In the recent years, many works have tried to enhance sparse direct solvers by introducing computation using low-rank compression formats. This approach has been shown very promising for reducing memory footprint and execution time on a large spectrum of solvers. Among them, sparse direct solvers based on supernodal approaches despite providing a better scalability, suffer from the low-rank update step. For this family of solver, the low-rank format, while reducing greatly the memory footprint, does not have the expected impact on the execution time. In this paper, we study two solutions to overcome this issue. The first one studies the impact of K-Way partitioning to reorder the unknowns in this context. The second solution tackles the overprice of the low-rank updates by identifying the blocks that have poor compression rates. It is expected that a good identification of these blocks would reduce the flops overhead that may appear in the Block Low-Rank updates. We show that block incomplete LU factorization allows identifying at low cost most of the non-compressible blocks by their fill-in levels. This identification enables to postpone the block compression step to trade small extra memory consumption for a better time to solution. Both solutions are validated within the PaStiX library with a large set of application matrices and demonstrate sequential and multi-threaded speedup up to 84%, for small memory overhead of less than 3% with respect to the original version.
{"url":"https://topal.gitlabpages.inria.fr/topal-working-group/","timestamp":"2024-11-02T12:11:26Z","content_type":"application/xhtml+xml","content_length":"218582","record_id":"<urn:uuid:54d8cc09-be69-4864-a8d4-ee3bf58623fd>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00108.warc.gz"}
1. (3 marks) Suppose a price-taking consumer chooses goods 1 and 2 to maximize her utility... 1. (3 marks) Suppose a price-taking consumer chooses goods 1 and 2 to maximize her utility... 1. Suppose a price-taking consumer chooses goods 1 and 2 to maximize her utility given her wealth. Her budget constraint could be written as p1x1 + p2x2 = w, where (p1,p2) are the prices of the goods, (x1,x2) denote quantities of goods 1 and 2 she chooses to consume, and w is her wealth. Assume her preferences are such that demand functions exist for this consumer: xi(p1,p2,w),i = 1,2. Prove these demand functions must be homogeneous of degree zero. As per above question we are given that consumer has option of purchasing two goods ( x1 and x2) at their respective given prices (p1 and p2). The consumer has to choose what quantity of x1 and x2 to be purchased at p1 and p2 given total wealth w to maximize the utility level which is determined by tangency ofBudget constraint ( p1.x1 + p2.x2 = w) and Indifference curve to maximie the utiity with the given wealth. ll (marshallian) demand functions are homogenous of degree zero in prices and income. This is a well known result. So if we multiply all prices and income by the same amount, demand doesn't change. This is very intuitive. Imagine the government would just add a zero to all prices and to your paycheck. There is no reason for you to change your demand as nothing really got cheaper nor did your income increase in real terms.
{"url":"https://justaaa.com/economics/25634-1-3-marks-suppose-a-price-taking-consumer-chooses","timestamp":"2024-11-13T12:09:28Z","content_type":"text/html","content_length":"41608","record_id":"<urn:uuid:71a35a0d-a491-440e-a83d-007e630cf71e>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00270.warc.gz"}
What is a Break-Even Point and How to Calculate Inflation, too, is something to consider, especially for long-term holdings. In general, the break-even price for an options contract will be the strike price plus the cost of the premium. For a 20-strike call option that cost $2, the break-even price would be $22. For a put option with otherwise same details, the break-even price would instead be $18. Both marginalist and Marxist theories of the firm predict that due to competition, firms will always be under pressure to sell their goods at the break-even price, implying no room for long-run profits. Having a successful business can be easier and more achievable when you have this information. Call Option Breakeven Point Example Upon doing so, the number of units sold cell changes to 5,000, and our net profit is equal to zero. In accounting, the margin of safety is the difference between actual sales and break-even sales. Managers utilize the margin of safety to know how much sales can decrease before the company or project becomes unprofitable. Why Should You Perform a Break-Even Analysis? 1. The put position’s breakeven price is $180 minus the $4 premium, or $176. 2. This break-even calculator allows you to perform a task crucial to any entrepreneurial endeavor. 3. However, these limitations can be mitigated by using alternative methods of measuring break even. 4. The breakeven point would equal the $10 premium plus the $100 strike price, or $110. 5. The contribution margin represents the revenue required to cover a business’ fixed costs and contribute to its profit. Start with a free account to explore 20+ always-free courses and hundreds of finance templates and cheat sheets. Taking it one step further, you can even use the break-even point formula to project your net profit for the year. For example, let’s assume your team of three works 20 hours per week for 48 weeks of the year. If you started your business on the first day of the calendar year, you’d reach your break-even point in early April. How to Conduct Break-Even Analysis Divide the fixed costs by the revenue per unit minus the variable costs per unit. The break-even price is mathematically the amount of monetary receipts that equal the amount of monetary contributions. With sales matching costs, the related transaction is said to be break-even, sustaining no losses and earning no profits in the process. First we need to calculate the break-even point per unit, so we will divide the $500,000 of fixed costs by the $200 contribution margin per unit ($500 – $300). First we take the desired dollar amount of profit and divide it by the contribution margin per unit. A guide to Generally Accepted Accounting Principles (GAAP) If your price is too high, you might be falling short of your break-even point because customers won’t buy at that price. Lowering your selling price will increase the sales needed to break even. But this can be offset by the increased volume of purchases from new customers. The break-even point (BEP) is the amount of product or service sales a business needs to make to begin earning more than you spend. Break-even analysis purpose It helps businesses choose pricing strategies, and manage costs and operations. In stock and options trading, break-even analysis helps find the minimum price movements required to cover trading costs and make a profit. Traders can use break-even analysis to set realistic profit targets, manage risk, and make informed trading decisions. Break-even analysis involves a calculation of the break-even point (BEP). For example, if you sell products with high-cost components, a premium pricing strategy might be the one to go with. It gives investors insight into when a company is expected to offset its costs for the first time. First, it tells you exactly how many times you need to sell a product to offset the running costs of your business. Which level you use really depends on whether you just want to understand the profitability of a single product or your entire business. Break-even (or break even), often abbreviated as B/E in finance, (sometimes called point of equilibrium) is the point of balance making neither a profit nor a loss. It involves a situation when a business makes just enough revenue to cover its total costs.[1] Any number below the break-even point constitutes a loss while any number above it shows a profit. The term originates in finance but the concept has been applied in other fields. This gives you the number of units you need to sell to cover your costs per month. To calculate BEP, you also need the amount of fixed costs that needs to be covered by the break-even units sold. Alternatively, the break-even point can also be calculated by dividing the fixed costs by the contribution margin. Yes, you would want to use the average cost per unit along with the average selling price to get the contribution margin per unit in the formula. Take the fixed costs and divide by the difference between the selling price and cost per unit https://www.business-accounting.net/ ($16.58), and that will tell you how many units have to be sold to break even. For options trading, the breakeven point is the market price that an underlying asset must reach for an option buyer to avoid a loss if they exercise the option. If the stock is trading above that price, then the benefit of the option has not exceeded its cost. Assume that an investor pays a $5 premium for an Apple stock (AAPL) call option with a $170 strike price. This means the issuance of notes and bonds that the investor has the right to buy 100 shares of Apple at $170 per share at any time before the options expire. The breakeven point for the call option is the $170 strike price plus the $5 call premium, or $175. However, a product or service’s comparably low price may create the perception that the product or service may not be as valuable, which could become an obstacle to raising prices later on. In the event that others engage in a price war, pricing at break-even would not be enough to help gain market control. With racing-to-the-bottom pricing, losses can be incurred when break-even prices give way to even lower prices. Being a cost leader and selling at the break-even price requires a business to have the financial resources to sustain periods of zero earnings. However, after establishing market dominance, a business may begin to raise prices when weak competitors can no longer undermine its higher-pricing efforts.
{"url":"https://smoknpython.com/2022/08/03/what-is-a-break-even-point-and-how-to-calculate/","timestamp":"2024-11-13T13:07:04Z","content_type":"text/html","content_length":"79557","record_id":"<urn:uuid:dbb7adcb-12df-4a3e-bb29-2bac105d7f36>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00654.warc.gz"}