# TOC and other Margin issues in Memoir Class [closed]

I am facing the same problem. I could add a space between the Chapter-Page header and the ToC text using the following approach, in the pwasu.sty file:

```latex
\makepagestyle{toc}
\makeevenfoot{toc}{}{\thepage}{}
\makeoddfoot{toc}{}{\thepage}{}
% presumably the new style must still be activated, e.g. with
% \pagestyle{toc} just before \tableofcontents (adjust to your setup)
```
# Eureka Math Grade 8 Module 3 Lesson 8 Answer Key ## Engage NY Eureka Math 8th Grade Module 3 Lesson 8 Answer Key ### Eureka Math Grade 8 Module 3 Lesson 8 Example Answer Key Example 1. In the picture below, we have triangle ABC that has been dilated from center O by a scale factor of r=$$\frac{1}{2}$$. It is noted by A’B’C’. We also have triangle A”B”C”, which is congruent to triangle A’B’C’ (i.e., △A’B’C’≅△A”B”C”). Describe the sequence that would map triangle A”B”C” onto triangle ABC. → Based on the definition of similarity, how could we show that triangle A”B”C” is similar to triangle ABC? → To show that △A” B” C”~△ABC, we need to describe a dilation followed by a congruence. → We want to describe a sequence that would map triangle A”B”C” onto triangle ABC. There is no clear way to do this, so let’s begin with something simpler: How can we map triangle A’B’C’ onto triangle ABC? That is, what is the precise dilation that would make triangle A’B’C’ the same size as triangle ABC? → A dilation from center O with scale factor r=2 → Remember, our goal was to describe how to map triangle A”B”C” onto triangle ABC. What precise dilation would make triangle A”B”C” the same size as triangle ABC? → A dilation from center O with scale factor r=2 would make triangle A”B”C” the same size as triangle ABC. → (Show the picture below with the dilated triangle A”B”C” noted by A”’B”’C”’.) Now that we know how to make triangle A”B”C” the same size as triangle ABC, what rigid motion(s) should we use to actually map triangle A”B”C” onto triangle ABC? Have we done anything like this before? → Problem 2 of the Problem Set from Lesson 2 was like this. That is, we had two figures dilated by the same scale factor in different locations on the plane. To get one to map to the other, we just translated along a vector. → Now that we have an idea of what needs to be done, let’s describe the translation in terms of coordinates. How many units and in which direction do we need to translate so that triangle A”’B”’C”’ maps to triangle ABC? → We need to translate triangle A”’B”’C”’ 20 units to the left and 2 units down. → Let’s use precise language to describe how to map triangle A”B”C” onto triangle ABC. We need information about the dilation and the translation. → The sequence that would map triangle A”B”C” onto triangle ABC is as follows: Dilate triangle A”B”C” from center O by scale factor r=2. Then, translate the dilated triangle 20 units to the left and 2 units down. → Since we were able to map triangle A”B”C” onto triangle ABC with a dilation followed by a congruence, we can write that triangle A”B”C” is similar to triangle ABC, in notation, △A” B” C”~△ABC. Example 2. In the picture below, we have triangle DEF that has been dilated from center O, by scale factor r=3. It is noted by D’ E’ F’. We also have triangle D” E” F”, which is congruent to triangle D’ E’ F’ (i.e., △D’ E’ F’≅ △D”E”F”). → We want to describe a sequence that would map triangle D”E”F” onto triangle DEF. This is similar to what we did in the last example. Can someone summarize the work we did in the last example? → First, we figured out what scale factor r would make the triangles the same size. Then, we used a sequence of translations to map the magnified figure onto the original triangle. → What is the difference between this problem and the last? → This time, the scale factor is greater than one, so we need to shrink triangle D”E”F” to the size of triangle DEF. Also, it appears as if a translation alone does not map one triangle onto another. 
→ Now, since we want to dilate triangle D”E”F” to the size of triangle DEF, we need to know what scale factor r to use. Since triangle D”E”F” is congruent to D’E’F’, then we can use those triangles to determine the scale factor needed. We need a scale factor so that |OF|=r|OF’|. What scale factor do you think we should use, and why? → We need a scale factor r=$$\frac{1}{3}$$ because we want |OF|=r|OF’|. → What precise dilation would make triangle D”E”F” the same size as triangle DEF? → A dilation from center O with scale factor r=$$\frac{1}{3}$$ would make triangle D”E”F” the same size as triangle DEF. → (Show the picture below with the dilated triangle D”E”F” noted by D”’E”’F”’.) Now we should use what we know about rigid motions to map the dilated version of triangle D”E”F” onto triangle DEF. What should we do first? → We should translate triangle D”’E”’F”’ 2 units to the right. → (Show the picture below, the translated triangle noted in red.) What should we do next (refer to the translated triangle as the red triangle)? → Next, we should reflect the red triangle across the x-axis to map the red triangle onto triangle DEF. → Use precise language to describe how to map triangle D”E”F” onto triangle DEF. → The sequence that would map triangle D”E”F” onto triangle DEF is as follows: Dilate triangle D”E”F” from center O by scale factor r=$$\frac{1}{3}$$ . Then, translate the dilated image of triangle D”E”F”, noted by D”’ E”’ F”’, two units to the right. Finally, reflect across the x-axis to map the red triangle onto triangle DEF. → Since we were able to map triangle D”E”F” onto triangle DEF with a dilation followed by a congruence, we can write that triangle D”E”F” is similar to triangle DEF. (In notation: △D”E”F”~△DEF) Example 3. In the diagram below, △ABC ~△A’B’C’. Describe a sequence of a dilation followed by a congruence that would prove these figures to be similar. → Let’s begin with the scale factor. We know that r|AB|=|A’B’|. What scale factor r makes △ABC the same size as △A’B’C’? → We know that r⋅2=1; therefore, r=$$\frac{1}{2}$$ makes △ABC the same size as △A’ B’ C’. → If we apply a dilation from the origin of scale factor r=$$\frac{1}{2}$$, then the triangles are the same size (as shown and noted by triangle A”B”C”). What sequence of rigid motions would map the dilated image of △ABC onto △A’B’C’? → We could translate the dilated image of △ABC, △A”B”C”, 3 units to the right and 4 units down and then reflect the triangle across line A’B’. → The sequence that would map △ABC onto △A’B’C’ to prove the figures similar is a dilation from the origin by scale factor r=$$\frac{1}{2}$$, followed by the translation of the dilated version of △ABC 3 units to the right and 4 units down, followed by the reflection across line A’B’. Example 4. In the diagram below, we have two similar figures. Using the notation, we have △ABC ~△DEF. We want to describe a sequence of the dilation followed by a congruence that would prove these figures to be similar. → First, we need to describe the dilation that would make the triangles the same size. What information do we have to help us describe the dilation? → Since we know the length of side $$\overline{A C}$$ and side $$\overline{D F}$$, we can determine the scale factor. → Can we use any two sides to calculate the scale factor? Assume, for instance, that we know that side $$\overline{A C}$$ is 18 units in length and side $$\overline{E F}$$ is 2 units in length. Could we find the scale factor using those two sides, $$\overline{A C}$$ and $$\overline{E F}$$? 
Why or why not? → No, we need more information about corresponding sides. Sides $$\overline{A C}$$ and $$\overline{D F}$$ are the longest sides of each triangle (they are also opposite the obtuse angle in the triangle). Side $$\overline{A C}$$ does not correspond to side $$\overline{E F}$$. If we knew the length of side $$\overline{B C}$$, we could use sides $$\overline{B C}$$ and $$\overline{E F}$$. → Now that we know that we can find the scale factor if we have information about corresponding sides, how would we calculate the scale factor if we were mapping △ABC onto △DEF? → |DF|=r|AC|, so 6=r⋅18, and r=$$\frac{1}{3}$$. → If we were mapping △DEF onto △ABC, what would the scale factor be? → |AC|=r|DF|, so 18=r∙6, and r=3. → What is the precise dilation that would map △ABC onto △DEF? → Dilate △ABC from center O, by scale factor r=$$\frac{1}{3}$$. → (Show the picture below with the dilated triangle noted as △A’B’C’.) Now we have to describe the congruence. Work with a partner to determine the sequence of rigid motions that would map △ABC onto △DEF. → Translate the dilated version of △ABC 7 units to the right and 2 units down. Then, rotate d degrees around point E so that segment B’C’ maps onto segment EF. Finally, reflect across line EF. Note that “d degrees” refers to a rotation by an appropriate number of degrees to exhibit similarity. Students may choose to describe this number of degrees in other ways. → The sequence of a dilation followed by a congruence that proves △ABC ~△DEF is as follows: Dilate △ABC from center O by scale factor r=$$\frac{1}{3}$$. Translate the dilated version of △ABC 7 units to the right and 2 units down. Next, rotate around point E by d degrees so that segment B’C’ maps onto segment EF, and then reflect the triangle across line EF. Example 5. → Knowing that a sequence of a dilation followed by a congruence defines similarity also helps determine if two figures are in fact similar. For example, would a dilation map triangle ABC onto triangle DEF (i.e., is △ABC ~△DEF)? → No. By FTS, we expect the corresponding side lengths to be in proportion and equal to the scale factor. When we compare side $$\overline{A C}$$ to side $$\overline{D F}$$ and $$\overline{B C}$$ to $$\overline{E F}$$, we get $$\frac{18}{6}$$≠$$\frac{15}{4}$$. → Therefore, the triangles are not similar because a dilation does not map one to the other. Example 6. → Again, knowing that a dilation followed by a congruence defines similarity also helps determine if two figures are in fact similar. For example, would a dilation map Figure A onto Figure A’ (i.e., is Figure A ~ Figure A’)? → No. Even though we could say that the corresponding sides are in proportion, there exists no single rigid motion or sequence of rigid motions that would map a four-sided figure to a three-sided figure. Therefore, the figures do not fulfill the congruence part of the definition for similarity, and Figure A is not similar to Figure A’. ### Eureka Math Grade 8 Module 3 Lesson 8 Exercise Answer Key Exercises Allow students to work in pairs to describe sequences that map one figure onto another. Exercise 1. Triangle ABC was dilated from center O by scale factor r=$$\frac{1}{2}$$. The dilated triangle is noted by A’B’C’. Another triangle A”B”C” is congruent to triangle A’B’C’ (i.e., △A”B”C”≅△A’B’C’). Describe a dilation followed by the basic rigid motion that would map triangle A”B”C” onto triangle ABC. Triangle A”B”C” needs to be dilated from center O, by scale factor r=2 to bring it to the same size as triangle ABC. 
This produces a triangle noted by A”’B”’C”’. Next, triangle A”’B”’C”’ needs to be translated 4 units up and 12 units left. The dilation followed by the translation maps triangle A”B”C” onto triangle ABC. Exercise 2. Describe a sequence that would show △ABC ~△A’ B’ C’. Since r|AB|=|A’ B’ |, then r⋅2=6, and r=3. A dilation from the origin by scale factor r=3 makes △ABC the same size as △A’B’C’. Then, a translation of the dilated image of △ABC ten units right and five units down, followed by a rotation of 90 degrees around point C’, maps △ABC onto △A’ B’ C’, proving the triangles to be similar. Exercise 3. Are the two triangles shown below similar? If so, describe a sequence that would prove △ABC ~△A’B’C’. If not, state how you know they are not similar. Yes, △ABC ~△A’B’C’. The corresponding sides are in proportion and equal to the scale factor: $$\frac{10}{15}$$=$$\frac{4}{6}$$=$$\frac{12}{18}$$=$$\frac{2}{3}$$=r To map triangle ABC onto triangle A’B’C’, dilate triangle ABC from center O, by scale factor r=$$\frac{2}{3}$$. Then, translate triangle ABC along vector $$\overrightarrow{A A^{\prime}}$$. Next, rotate triangle ABC d degrees around point A. Exercise 4. Are the two triangles shown below similar? If so, describe the sequence that would prove △ABC ~△A’B’C’. If not, state how you know they are not similar. Yes, triangle △ABC ~△A’B’C’. The corresponding sides are in proportion and equal to the scale factor: $$\frac{4}{3}$$=$$\frac{8}{6}$$=$$\frac{4}{3}$$=$$1.3 \overline{3}$$; $$\frac{10.67}{8}$$=1.33375; therefore, r=1.33 which is approximately equal to $$\frac{4}{3}$$ To map triangle ABC onto triangle A’B’C’, dilate triangle ABC from center O, by scale factor r=$$\frac{4}{3}$$. Then, translate triangle ABC along vector $$\overrightarrow{A A^{\prime}}$$. Next, rotate triangle ABC 180 degrees around point A’. ### Eureka Math Grade 8 Module 3 Lesson 8 Problem Set Answer Key Students practice dilating a curved figure and describing a sequence of a dilation followed by a congruence that maps one figure onto another. Question 1. In the picture below, we have triangle DEF that has been dilated from center O by scale factor r=4. It is noted by D’E’F’. We also have triangle D”E”F”, which is congruent to triangle D’E’F’ (i.e., △D’E’F’≅△D”E”F”). Describe the sequence of a dilation, followed by a congruence (of one or more rigid motions), that would map triangle D”E”F” onto triangle DEF. First, we must dilate triangle D”E”F” by scale factor r=$$\frac{1}{4}$$ to shrink it to the size of triangle DEF. Next, we must translate the dilated triangle, noted by D”’E”’F”’, one unit up and two units to the right. This sequence of the dilation followed by the translation would map triangle D”E”F” onto triangle DEF. Question 2. Triangle ABC was dilated from center O by scale factor r=$$\frac{1}{2}$$. The dilated triangle is noted by A’B’C’. Another triangle A”B”C” is congruent to triangle A’B’C’ (i.e., △A”B”C”≅△A’B’C’). Describe the dilation followed by the basic rigid motions that would map triangle A”B”C” onto triangle ABC. Triangle A”B”C” needs to be dilated from center O by scale factor r=2 to bring it to the same size as triangle ABC. This produces a triangle noted by A”’B”’C”’. Next, triangle A”’B”’C”’ needs to be translated 18 units to the right and two units down, producing the triangle shown in red. Next, rotate the red triangle d degrees around point B so that one of the segments of the red triangle coincides completely with segment BC. Then, reflect the red triangle across line BC. 
The dilation, followed by the congruence described, maps triangle A”B”C” onto triangle ABC. Question 3. Are the two figures shown below similar? If so, describe a sequence that would prove the similarity. If not, state how you know they are not similar. No, these figures are not similar. There is no single rigid motion, or sequence of rigid motions, that would map Figure A onto Figure B. Question 4. Triangle ABC is similar to triangle A’B’C’ (i.e., △ABC ~△A’B’C’). Prove the similarity by describing a sequence that would map triangle A’B’C’ onto triangle ABC. The scale factor that would magnify triangle A’B’C’ to the size of triangle ABC is r=3. The sequence that would prove the similarity of the triangles is a dilation from center O by a scale factor of r=3, followed by a translation along vector $$\overrightarrow{A^{\prime} A}$$, and finally, a reflection across line AC. Question 5. Are the two figures shown below similar? If so, describe a sequence that would prove △ABC ~△A’B’C’. If not, state how you know they are not similar. Yes, the triangles are similar. The scale factor by which triangle ABC has been dilated is r=$$\frac{1}{5}$$. The sequence that proves the triangles are similar is as follows: dilate triangle A’B’C’ from center O by scale factor r=5; then, translate triangle A’B’C’ along vector $$\overrightarrow{C^{\prime} C}$$; next, rotate triangle A’B’C’ d degrees around point C; and finally, reflect triangle A’B’C’ across line AC. Question 6. Describe a sequence that would show △ABC ~△A’ B’ C’. Since r|AB|=|A’ B’|, then r∙3=1 and r=$$\frac{1}{3}$$. A dilation from the origin by scale factor r=$$\frac{1}{3}$$ makes △ABC the same size as △A’B’C’. Then, a translation of the dilated image of △ABC four units down and one unit to the right, followed by a reflection across line A’ B’, maps △ABC onto △A’ B’ C’, proving the triangles to be similar. In the picture below, we have triangle DEF that has been dilated from center O by scale factor r=$$\frac{1}{2}$$. The dilated triangle is noted by D’E’F’. We also have a triangle D”EF, which is congruent to triangle DEF (i.e., △DEF≅△D”EF). Describe the sequence of a dilation, followed by a congruence (of one or more rigid motions), that would map triangle D’E’F’ onto triangle D”EF.
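Throughout this lesson, the similarity transformations are compositions of a dilation about the center O with rigid motions, so any described sequence can be checked numerically. Below is a minimal sketch, assuming O is placed at the origin (as in Example 3) and using a hypothetical vertex coordinate, of the sequence from Example 1: dilate by scale factor r=2, then translate 20 units left and 2 units down.

```python
# Check a "dilation followed by a congruence" numerically.
# Assumes the dilation center O is at the origin; the coordinate is made up.

def dilate(p, r):
    """Dilation about the origin with scale factor r."""
    return (r * p[0], r * p[1])

def translate(p, dx, dy):
    return (p[0] + dx, p[1] + dy)

a2 = (14.0, 3.0)  # hypothetical vertex A'' of triangle A''B''C''

# Example 1's sequence: dilate from O by r = 2, then 20 left and 2 down.
image = translate(dilate(a2, 2.0), -20.0, -2.0)
print(image)  # (8.0, 4.0) -- should land on the corresponding vertex A
```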
# Multi-Factor Authentication (MFA)¶

Snowflake supports multi-factor authentication (MFA) to provide increased login security for users connecting to Snowflake. MFA support is provided as an integrated Snowflake feature, powered by the Duo Security service, which is managed completely by Snowflake. Users do not need to separately sign up with Duo or perform any tasks other than installing the Duo Mobile application, which is supported on multiple smartphone platforms (iOS, Android, Windows, etc.). See the Duo User Guide for more information about supported platforms/devices and how Duo multi-factor authentication works.

MFA is enabled on a per-user basis; however, at this time, users are not automatically enrolled in MFA. To use MFA, users must enroll themselves.

Attention: At a minimum, Snowflake strongly recommends that all users with the ACCOUNTADMIN role be required to use MFA.

The following diagram illustrates the overall login flow for a user enrolled in MFA, regardless of the interface used to connect:

## Enrolling a Snowflake User in MFA¶

Previously, users could only be enrolled in MFA by submitting a request to Snowflake Support. This is no longer required. Any Snowflake user can self-enroll in MFA through the web interface. For more information, see Managing Your User Preferences.

## Managing MFA for Your Account and Users¶

At the account level, MFA requires no management. It is automatically enabled for your account and available for all your users to self-enroll. However, you may need to disable MFA for a user, either temporarily or permanently, for example if the user loses their phone or changes their phone number and cannot log in with MFA. You can use the following properties of the ALTER USER command to perform these tasks:

• MINS_TO_BYPASS_MFA Specifies the number of minutes to temporarily disable MFA for the user so that they can log in. After the time passes, MFA is enforced and the user cannot log in without the temporary token generated by the Duo Mobile application.

• DISABLE_MFA Disables MFA for the user, effectively canceling their enrollment. To use MFA again, the user must re-enroll.

Note: DISABLE_MFA is not a column in any Snowflake table or view. After an account administrator executes the ALTER USER command to set DISABLE_MFA to TRUE, the value for the EXT_AUTHN_DUO property is automatically set to FALSE. To verify that MFA is disabled for a given user, execute a DESCRIBE USER statement and check the value of the EXT_AUTHN_DUO property.

## Connecting to Snowflake with MFA¶

MFA login is designed primarily for connecting to Snowflake through the web interface, but it is also fully supported by SnowSQL and the Snowflake JDBC and ODBC drivers.

### Using MFA with the Web Interface¶

1. Point your browser at the URL for your account (e.g. https://xy12345.snowflakecomputing.com, https://xy12345.eu-central-1.snowflakecomputing.com).
2. Log in with your Snowflake user name and password.
3. If Duo Push is enabled, a push notification is sent to your Duo Mobile application. When you receive the notification, simply click Approve and you will be logged into Snowflake.

Instead of using the push notification, you can also choose to:

• Click Enter Duo Passcode to log in by manually entering a passcode provided by the Duo Mobile application.
• Click Request SMS Passcodes to have a set of temporary passcodes sent to your device via an SMS message. You can then log in by manually entering one of the passcodes.
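The ALTER USER properties described above can be issued from any SQL client. As a minimal sketch, assuming an administrator connection and a hypothetical user janedoe, via the Snowflake Python Connector:

```python
# Minimal sketch of the MFA management tasks above (names are placeholders;
# requires a role that can alter users, e.g. ACCOUNTADMIN).
import snowflake.connector

conn = snowflake.connector.connect(user="admin", password="...", account="xy12345")
cur = conn.cursor()

# Let the user log in without MFA for the next 30 minutes:
cur.execute("ALTER USER janedoe SET MINS_TO_BYPASS_MFA = 30")

# Or cancel the user's MFA enrollment entirely (re-enrollment needed later):
cur.execute("ALTER USER janedoe SET DISABLE_MFA = TRUE")

# Verify: the EXT_AUTHN_DUO property should now be FALSE.
cur.execute("DESCRIBE USER janedoe")
print(cur.fetchall())
```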
### Using MFA with SnowSQL¶ MFA can be used for connecting to Snowflake through SnowSQL. By default, the Duo Push authentication mechanism is used when a user is enrolled in MFA. To use a Duo-generated passcode instead of the push mechanism, the login parameters must include one of the following connection options: --mfa-passcode <string> OR --mfa-passcode-in-password For more details, see SnowSQL (CLI Client). ### Using MFA with JDBC¶ MFA can be used for connecting to Snowflake via the Snowflake JDBC driver. By default, the Duo Push authentication mechanism is used when a user is enrolled in MFA; no changes to the JDBC connection string are required. To use a Duo-generated passcode instead of the push mechanism, one of the following parameters must be included in the JDBC connection string: passcode=<passcode_string> OR passcodeInPassword=on Where: • passcode_string is a Duo-generated passcode for the user who is connecting. This can be a passcode generated by the Duo Mobile application or an SMS passcode. • If passcodeInPassword=on, then the password and passcode are concatenated, in the form of <password_string><passcode_string>. For more details, see JDBC Driver. #### Examples of JDBC Connection Strings Using Duo¶ JDBC connection string for user demo connecting to the xy12345 account (in the US West region) using a Duo passcode: jdbc:snowflake://xy12345.snowflakecomputing.com/?user=demo&passcode=123456 JDBC connection string for user demo connecting to the xy12345 account (in the US West region) using a Duo passcode that is embedded in the password: jdbc:snowflake://xy12345.snowflakecomputing.com/?user=demo&passcodeInPassword=on ### Using MFA with ODBC¶ MFA can be used for connecting to Snowflake via the Snowflake ODBC driver. By default, the Duo Push authentication mechanism is used when a user is enrolled in MFA; no changes to the ODBC settings are required. To use a Duo-generated passcode instead of the push mechanism, one of the following parameters must be specified for the driver: passcode=<passcode_string> OR passcodeInPassword=on Where: • passcode_string is a Duo-generated passcode for the user who is connecting. This can be a passcode generated by the Duo Mobile application or an SMS passcode. • If passcodeInPassword=on, then the password and passcode are concatenated, in the form of <password_string><passcode_string>. For more details, see ODBC Driver. ### Using MFA with Python¶ MFA can be used for connecting to Snowflake via the Snowflake Python Connector. By default, the Duo Push authentication mechanism is used when a user is enrolled in MFA; no changes to the Python API calls are required. To use a Duo-generated passcode instead of the push mechanism, one of the following parameters must be specified for the driver in the connect() method: passcode=<passcode_string> OR passcode_in_password=True Where: • passcode_string is a Duo-generated passcode for the user who is connecting. This can be a passcode generated by the Duo Mobile application or an SMS passcode. • If passcode_in_password=True, then the password and passcode are concatenated, in the form of <password_string><passcode_string>. For more details, see the description of the connect() method in the Functions section of the Python Connector API documentation.
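For instance, a minimal sketch of a connect() call using a Duo-generated passcode rather than Duo Push (account and credentials are placeholders):

```python
# Snowflake Python Connector login with an explicit Duo passcode,
# per the connect() parameters described above.
import snowflake.connector

conn = snowflake.connector.connect(
    user="demo",
    password="...",
    account="xy12345",
    passcode="123456",            # passcode from Duo Mobile or SMS
    # passcode_in_password=True,  # alternative: append the passcode to the password
)
```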
# Could PRNGs make use of more internal state?

In the context of our class on combinatorial algorithms we have been discussing randomness. One student said (paraphrasing):

Pseudo-random number generators (PRNGs) must have a period since they only have finitely many internal states.

At least for the PRNGs we see in the class, this is certainly a valid argument (assuming a finite target interval). But it raises the question: why do they not use more memory, i.e., more internal state? Going through this list, it seems to be the case that all PRNGs do indeed use only a few numbers from the target interval (plus some magic constants).

• Most use one to $\approx$ 5 of the most recently generated numbers.
• Some have a parameter that controls how many values to store ("$r$-lag").
• A few use rather many numbers (but still constantly many).

The one I can't quite place is Naor-Reingold; I can't tell how it would be used to generate an actual sequence of pseudo-random numbers. So while the memory usage (i.e., internal state size) of known/used PRNGs depends on the target interval and internal constants, it does not depend on the number of already generated numbers. Have there been any attempts to use more memory in order to obtain better generators? If so, why does it not help, or why does it not work at all?

• Naor-Reingold appears to be fairly straightforward. See the example on the Wikipedia page: f(5) = 4 and you can trivially calculate f(6), but you can't calculate f_inv(4) = 5 other than by brute-forcing it. Thus, you can't predict what follows 4. – MSalters Jan 13 '16 at 12:24

## 2 Answers

Because, for any practical purpose, a PRNG with sufficiently, but finitely, many states is indistinguishable from one with infinitely many. Mersenne Twister has a period of $2^{19937} - 1$, [insert your favorite argument about why $2^{19937}$ is a somewhat large number here]. Once the internal state is large enough, the period is long enough to be infinite for all intents and purposes. Besides, a larger state means more memory used (important if you need very many completely independent random number streams; some applications use tens of thousands), and more state means more data to process to get the next state and random number, which makes the generator slower.

• The crucial point would not be to have legions of objects to get (independent random numbers) from, but to have (streams of random numbers) individually repeatable. – greybeard Jan 13 '16 at 7:25
• Period isn't everything, now is it? From what I gather, many PRNGs have one or the other problem. It'd be interesting to find out if any of these could be addressed using more state. – Raphael Mar 10 '16 at 21:17
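To make the "$r$-lag" bullet above concrete, here is a minimal sketch (not any particular library's implementation) of an additive lagged Fibonacci generator; the lag $s$ directly sets how many past outputs are kept as internal state, and choosing larger lags is precisely the "use more memory" knob:

```python
# Additive lagged Fibonacci PRNG: x_n = (x_{n-r} + x_{n-s}) mod m.
# The internal state is the window of the s most recent outputs.
from collections import deque

def lagged_fibonacci(seed, r=24, s=55, m=2**32):
    assert 0 < r < s and len(seed) >= s
    state = deque(seed[:s], maxlen=s)
    while True:
        x = (state[-r] + state[-s]) % m
        state.append(x)   # the oldest value drops out of the window
        yield x

gen = lagged_fibonacci(list(range(1, 56)))
print([next(gen) for _ in range(3)])  # first few pseudo-random values
```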
# Integrate me 1. Mar 8, 2009 ### bigplanet401 1. The problem statement, all variables and given/known data What is $$\int_0^{2 \pi} \; d\theta \sin^2 k\theta \cos^2 k\theta \; ?$$ 2. Relevant equations Orthogonality of sines and cosines? 3. The attempt at a solution I tried substitution and didn't get anywhere. Yeah, that's about it. 2. Mar 8, 2009 ### Dick Use the trig identity sin(2x)=2*sin(x)*cos(x) for a start. 3. Mar 8, 2009 ### bigplanet401 Whoa, I completely missed that. Using that identity, the integral becomes \begin{align*} & \int_0^{2\pi} d\theta \; \frac{1}{4} \sin^2 2k\theta\\ &= \int_0^{2\pi} d\theta \; \frac{1}{8} (1 - \sin 4 k \theta )\\ &= \frac{\pi}{4} \, , \end{align*} right? The answer just seems too simple--like there should be some k's around or something. 4. Mar 8, 2009 ### Dick You mean 1-cos(4k*theta), I hope. If k is an integer then there are no k's left around. If it isn't there are.
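For completeness, with the correction the computation reads

\begin{align*}
\int_0^{2\pi} d\theta \; \sin^2 k\theta \cos^2 k\theta
&= \frac{1}{4} \int_0^{2\pi} d\theta \; \sin^2 2k\theta
= \frac{1}{8} \int_0^{2\pi} d\theta \; \left(1 - \cos 4k\theta\right)\\
&= \frac{\pi}{4} - \frac{\sin 8\pi k}{32k}\,,
\end{align*}

so the answer is exactly $\frac{\pi}{4}$ whenever $\sin 8\pi k = 0$, in particular for every nonzero integer $k$; for non-integer $k$ the $k$-dependent boundary term survives.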
# Cauchy's mean value theorem

Cauchy's mean value theorem, also called the extended or second mean value theorem, is a generalization of the usual mean value theorem. It concerns two functions that are continuous on a given closed interval and differentiable on the corresponding open interval, with the derivative of the second function nowhere equal to zero on that interval. Recall the setting: in terms of functions, the ordinary mean value theorem says that given a continuous function on an interval $[a,b]$ there is some point $c$ between $a$ and $b$ such that the derivative at that point equals the "average slope" $\frac{f(b)-f(a)}{b-a}$; if $f$ describes the position of a moving body, there exists a time point in between at which the instantaneous speed of the body equals its average speed. Rolle's theorem is the special case of the mean value theorem in which $f(a) = f(b)$. Cauchy's theorem establishes the analogous relationship between the derivatives of two functions and the changes in these functions on a finite interval, and it is the key tool in proving L'Hospital's rule for indeterminate forms of the types $0/0$ and $\infty/\infty$.

Cauchy's mean value theorem. If the functions $f$ and $g$ are continuous on the closed interval $[a,b]$, differentiable on the open interval $(a,b)$, and $g'(x) \ne 0$ for all $x \in (a,b)$, then there exists at least one point $c \in (a,b)$ such that

\[\frac{f(b) - f(a)}{g(b) - g(a)} = \frac{f'(c)}{g'(c)}.\]

In the special case $g(x) = x$, so that $g'(x) = 1$, this reduces to the ordinary (Lagrange) mean value theorem. In particular, if $\Delta f = k\,\Delta g$ over every subinterval, where $k$ is constant, the theorem yields $f'(c) = k\,g'(c)$ at some point of each subinterval.
Proof. First of all, we note that the denominator on the left side of the Cauchy formula is not zero: $g(b) - g(a) \ne 0$. Indeed, if $g(b) = g(a)$, then by Rolle's theorem there would be a point $d \in (a,b)$ with $g'(d) = 0$, which contradicts the hypothesis that $g'(x) \ne 0$ for all $x \in (a,b)$. Now consider the auxiliary function

\[F(x) = f(x) + \lambda g(x),\]

and choose $\lambda$ in such a way as to satisfy the condition $F(a) = F(b)$. In this case we get

\[f(a) + \lambda g(a) = f(b) + \lambda g(b) \;\Rightarrow\; \lambda = -\frac{f(b) - f(a)}{g(b) - g(a)}.\]

With this choice, $F$ is continuous on the closed interval $[a,b]$, differentiable on the open interval $(a,b)$, and takes equal values at the endpoints. Then by Rolle's theorem there exists a point $c \in (a,b)$ such that $F'(c) = 0$, that is,

\[f'(c) - \frac{f(b) - f(a)}{g(b) - g(a)}\,g'(c) = 0 \;\Rightarrow\; \frac{f(b) - f(a)}{g(b) - g(a)} = \frac{f'(c)}{g'(c)},\]

which is Cauchy's formula.

Example 1. Let $f$ be continuous on $[a,b]$ and differentiable on $(a,b)$, with $0 < a < b$. Applying the Cauchy formula to the pair $\frac{f(x)}{x}$ and $\frac{1}{x}$ gives, for some $c \in (a,b)$,

\[\frac{\frac{f(b)}{b} - \frac{f(a)}{a}}{\frac{1}{b} - \frac{1}{a}} = \frac{\frac{c f'(c) - f(c)}{c^2}}{-\frac{1}{c^2}} \;\Rightarrow\; \frac{af(b) - bf(a)}{a - b} = f(c) - c f'(c).\]

The left side of this equation can be written in terms of a determinant:

\[\frac{1}{a - b}\left| \begin{array}{cc} a & b\\ f(a) & f(b) \end{array} \right| = f(c) - c f'(c).\]

Example 2. Cauchy's mean value theorem also finds use in proving inequalities. For $0 < x < \frac{\pi}{2}$, apply the Cauchy formula to $f(t) = 1 - \cos t$ and $g(t) = \frac{t^2}{2}$ on $[0,x]$. For some $\xi \in (0,x)$ we can write

\[\frac{1 - \cos x}{\frac{x^2}{2}} = \frac{\sin \xi}{\xi} \lt 1 \;\Rightarrow\; 1 - \cos x \lt \frac{x^2}{2}\;\;\text{or}\;\;1 - \frac{x^2}{2} \lt \cos x.\]

That $\sin \xi \lt \xi$ follows from the geometric picture in which $\xi$ is the length of the arc subtending the angle $\xi$ in the unit circle, while $\sin \xi$ is the projection of the radius-vector $OM$ onto the $y$-axis.

(Several theorems are named after Augustin-Louis Cauchy; the Cauchy-Schwarz inequality, also known as the Cauchy-Bunyakovsky-Schwarz inequality, is a different result, a useful inequality in linear algebra, analysis and probability theory, and should not be confused with the mean value theorem. Cauchy's integral theorem and integral formulas belong to complex analysis and are likewise separate results.)

Exercises. Verify that Cauchy's mean value theorem holds for the following pairs of functions on the given intervals: (i) $f(x) = x^2 + 3$, $g(x) = x^3 + 1$ on $[1,3]$; (ii) $f(x) = \sin x$, $g(x) = \cos x$ on $[0, \frac{\pi}{2}]$; (iii) $f(x) = e^x$, $g(x) = e^{-x}$ on $[a,b]$.
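As a worked check of exercise (i): with $f(x) = x^2 + 3$ and $g(x) = x^3 + 1$ on $[1,3]$,

\[\frac{f(3) - f(1)}{g(3) - g(1)} = \frac{12 - 4}{28 - 2} = \frac{4}{13}, \qquad \frac{f'(c)}{g'(c)} = \frac{2c}{3c^2} = \frac{2}{3c},\]

and solving $\frac{2}{3c} = \frac{4}{13}$ gives $c = \frac{13}{6} \approx 2.17$, which indeed lies in $(1,3)$. Thus, Cauchy's mean value theorem holds for the given functions and interval.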
# Chapter 3 - Polynomial and Rational Functions - Concept and Vocabulary Check: 1

$2x^{3}+0x^{2}+6x-4$

#### Work Step by Step

See Long Division of Polynomials, step 1: arrange the terms of both the dividend and the divisor in descending powers of any variable. Fill in the blank with $2x^{3}+0x^{2}+6x-4$ (the missing power has been added as a term with coefficient 0, so that like powers stay aligned during the division).
Help me understand this terrain intersection algorithm

I was looking for a fast way to get the intersection point of a ray with a terrain defined by a heightmap, and I stumbled upon this: https://publications.lib.chalmers.se/records/fulltext/250170/250170.pdf

At part 3.2, I don't quite understand why we would get an intersection point at the beginning of the while loop, since it starts with the first quadtree node, which is basically the whole map, so there shouldn't be an intersection point with the ray (except if the ray starts outside the AABB, higher than the max height of the terrain, but that is never the case for me). Thank you very much if someone could make this clear.

python – Classes – something I've found a ton of resources about, but cannot understand this specific bit

To preface this: I have searched and read so much, and I understand (I think…) methods, classes and functions. What I am trying to wrap my head around is… a class is essentially a blueprint? Using humans as an example, I create a Human class, and within this class I have different functions based on name, age and height.

``````class Human():
    def __init__ (self):
        return

    def name(self,name):
        self.name

    def age(self, age):
        self.age

    def height(self,height):
        self.height

    def person (self,name,age,height):
        return f'Hello, my name is {self.name}, I am {self.age} and {self.height} Tall'
``````

Then I want to create two (or as many as I want) humans. So I create an instance of the Human class as the blueprint…

``````import mod

people = mod.Human()
``````

so now I want to create my two people (that are humans and should have a name, age and height associated with them)…

``````Liam = people.person('liam', 28, "6'" '2"')
Tori = people.person('Tori', 29, "5'" '8"')
``````

So at this point, is my understanding of classes/functions and my utilization of them correct?

``````print(Liam)
``````

When I print this out I get:

``````Hello, my name is <bound method Human.name of <mod.Human object at 0x0353E628>>, I am <bound method Human.age of <mod.Human object at 0x0353E628>> and <bound method Human.height of <mod.Human object at 0x0353E628>> Tall
``````

I had this bit working earlier when I created my person through two lines of code instead of one. But this isn't what I was after (one line per person created is what I am after).

``````class Human:
    def __init__(self, name, age, height):
        self.name = name
        self.age = age
        self.height = height

    def person (self):
        return f'Hello, my name is: {self.name}, I am {self.age} and {self.height} Tall'

import mod

liam = mod.Human('liam', 28, "6'" '2"')
LiamFull = liam.person()
print(LiamFull)

Hello, my name is: liam, I am 28 and 6'2" Tall
``````

functional analysis – Trying to understand the definition of Lie ideal for C*-algebras

Let $$A$$ be a $$C^*$$-algebra. A subspace $$I$$ of $$A$$ is called a Lie ideal of $$A$$ if $$[I,A] \subseteq I$$, where $$[x,a] = xa - ax$$. Since $$I$$ contains $$0$$, isn't this definition equivalent to the definition of a two-sided ideal of a $$C^*$$-algebra? Most probably I'm missing something in the definition of Lie ideal. Any ideas?

I still don't understand this. Can you show the work so I can understand? The link shows the problem. https://i.postimg.cc/xjmZSHZM/41012-BA9-7412-4-D1-F-9-D3-F-EDF705-B90-F7-E.jpg

What's the best article to read and understand about cryptocurrency airdrops?

The article should be easy to understand for cryptocurrency newbies. There are plenty of airdrops, but most people don't know how to participate in an airdrop or how to trade those airdrop tokens.
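Returning to the Python classes question above: the bare `self.name` inside the first version's methods reads the attribute rather than assigning it, and since the instance has no `name` attribute yet, the lookup falls back to the class's method object itself. That is exactly what the `<bound method Human.name of ...>` output shows. Assigning in `__init__`, as in the second version, stores the data and keeps creation to one line per person. A minimal sketch:

``````class Human:
    def __init__(self, name, age, height):
        # Assignment is what stores data on the instance; a bare
        # `self.name` expression only reads an attribute, never sets it.
        self.name = name
        self.age = age
        self.height = height

    def person(self):
        return f'Hello, my name is {self.name}, I am {self.age} and {self.height} Tall'

liam = Human('liam', 28, "6'" '2"')   # one line per person
tori = Human('Tori', 29, "5'" '8"')
print(liam.person())                  # Hello, my name is liam, I am 28 and 6'2" Tall
``````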
Specs being agreed on and then those specs being placed into production are two different things. Yes, the final PCI Express 5.0 spec was introduced in May 2019, but the PCI Express 6.0 spec is still in process, with the last update being November 2020. Specifications are just mutually agreed-upon ways of handling and doing things: these are the pinouts, this is the data spec, here are the power requirements, and here are the acceptable boundaries of this specification. But a specification is not the final step. The next step is to get manufacturers onboard with the spec and gear them up for manufacturing. As of the time of this post, only the first demo systems of PCIe 5.0 are being unveiled and introduced to the world. But this is, again, one of the first steps of the process. As explained in this ExtremeTech article from October 2020 discussing a recent demo system using PCIe 5.0:

"The goal of this type of testing is to demonstrate Intel's commitment to future high-speed interfaces, as well as to create reference platforms for early PCIe 5.0 certification."

"It's also a sign that PCIe 5.0 could show up on motherboards in as little as 12 months, though I think 2022 is a bit more likely than late 2021. If Intel launches Rocket Lake at the end of Q1 2021, as is expected, it's not clear the company would then refresh Alder Lake in the October / November time frame. Typically Intel likes to wait a bit longer than that between product cycles."

Why does it take so long? Easy: manufacturing new chips and related hardware to support new specs is not an overnight thing. It takes time to develop all of that and make it a practical reality. Also, gearing up to do something like that too soon can be a gamble: what if nobody cares about the new PCIe specs to this degree? What if people are perfectly fine with PCIe 3.0? And what do you do with the tons of PCIe 3.0 hardware out there? Remember, hardware manufacturing is a business: why would a company sit on already manufactured PCIe 3.0 hardware just to rush PCIe 5.0 and, eventually, PCIe 6.0 systems into existence? What financial gain do they get by doing that? So while new specs always seem cool, they typically do not show up right away after a set of specs is standardized.

tracking – Using Hotjar to better understand users' journey through a payment flow

My client is asking to integrate Hotjar to get a better idea of users' behaviour during the payment process flow (Account creation, Payment information, Review, Confirmation). However, I have some concerns about how such a tool handles tracking of credit card information. I couldn't find anything on the subject. What are your thoughts?

I am reading Automata and Computability by Dexter C. Kozen and I am in the first chapter, entitled Strings and Sets. If we have a set $$A=$${$$ab,b$$}, do we get to assume that the null string $$ε$$ is also a member of that set? If we have another set $$B=$${$$ε$$}, does $$A∩B=$${$$ε$$} or does $$A∩B=∅$$? On a related note, I am having trouble understanding the meaning of a little chunk in the textbook. What does "family of sets indexed by another set I" mean? What does the set I consist of here? I am very confused by the notation as well. Anything you can do to help me understand the excerpt below is appreciated.
Using the slot does not 'expend' or 'consume' the spell

Note: This answer cites quite a bit of the rules text, with the intention of helping a new DM translate the Player's Handbook (PHB) text into "how to apply this in your game", since the game is being played by new players and a new DM.

Spell Slots

Regardless of how many spells a caster knows or prepares, he or she can cast only a limited number of spells before resting (PHB Chapter 10)

What you described as your understanding is how it worked in older editions of the game. In D&D 5th edition spell casting is more flexible. A spell slot powers any known or prepared spell of its level. You ask about the druid and the sorcerer. Their mechanics are slightly different. One is a prepared-spells caster, one is a known-spells caster.

Known and Prepared Spells

Before a spellcaster can use a spell, he or she must have the spell firmly fixed in mind…Members of a few classes have a limited list of spells they know that are always fixed in mind. {Sorcerers are one of those classes}. Other spellcasters, such as clerics and wizards, {and druids} undergo a process of preparing spells…In every case, the number of spells a caster can have fixed in mind at any given time depends on the character's level. (PHB, CH 10)

Druids prepare spells, and sorcerers know spells. This is a mechanical game distinction. (Ch 10 PHB, Spellcasting, and CH 3 in the Spellcasting sections for the Druid and Sorcerer class descriptions.)

1. Sorcerer: Spells Known of 1st Level and Higher

The Spells Known column of the Sorcerer table shows when you learn more sorcerer spells of your choice. Each of these spells must be of a level for which you have spell slots. For instance, when you reach 3rd level in this class, you can learn one new spell of 1st or 2nd level…when you gain a level in this class, you can choose one of the sorcerer spells you know and replace it with another spell from the sorcerer spell list, which also must be of a level for which you have spell slots. (PHB, CH 3, Sorcerer)

2. Druid: Preparing and Casting Spells

The Druid table shows how many spell slots you have to cast your druid spells of 1st level and higher. To cast one of these druid spells, you must expend a slot of the spell's level or higher. You regain all expended spell slots when you finish a long rest. You prepare the list of druid spells that are available for you to cast, choosing from the druid spell list. When you do so, choose a number of druid spells equal to your Wisdom modifier + your druid level (minimum of one spell). The spells must be of a level for which you have spell slots. For example, if you are a 3rd-level druid, you have four 1st-level and two 2nd-level spell slots. With a Wisdom of 16, your list of prepared spells can include six spells of 1st or 2nd level, in any combination. Casting the spell doesn't remove it from your list of prepared spells. You can also change your list of prepared spells when you finish a long rest. Preparing a new list of druid spells requires time spent in prayer and meditation: at least 1 minute per spell level for each spell on your list. (PHB, Ch 3, Druid)

Each day players "prepare" a list of spells for that day, and place one in each spell slot to use later. The Druid prepares, the Sorcerer does not. Is this correct?

Not for this edition of the game. Any slot can be used to apply any known or prepared spell of the appropriate level.

Can each spell be used only once per day?

No.
That is how it worked in AD&D, Basic D&D, Original D&D, and 3rd edition D&D. In this edition, that has changed. The character prepares or knows a number of spells of the same level, and any slot can be used to cast any spell of the appropriate level.

Examples:

1. A level 1 Druid prepares both Entangle and Healing Word, and has two spell slots at first level. The druid can cast Entangle twice, or Healing Word twice, or each one once – before running out of spell slots for that adventuring day.
2. A sorcerer has chosen Sleep and Magic Missile at first level and has two spell slots at first level. They can cast Sleep twice, or Magic Missile twice, or each one once – before running out of slots for that adventuring day.

Do spells come from all spells for that level or only known spells?

That depends on the character's class.

1. The Druid has to pick which ones to prepare after each long rest, from any spell on the list for their class and level.
2. The Sorcerer, once a spell is known/chosen, always has it ready to cast ("known spells" is the in-game term for this). They can only change that after they increase in level.

If it comes from "known spells", how does a druid or sorcerer add to their list of "known spells"?

The druid can prepare any spells from the druid list that their level allows them to cast. The Sorcerer has to make a choice at character generation, and at each level up.

Example: a druid of level 2 with a Wisdom of 16 prepares their level + Wisdom bonus spells each day at the completion of a long rest. They have the whole spell list to choose from, but must decide which five to choose. (They can always choose to prepare what they had prepared on the previous day.) The bonus spells from their circle (Circle of the Land) are always prepared, and don't count against the above slot formula.

The Sorcerer at level 2 had two spells already known from level 1, chosen from the Sorcerer list. At level two, they get to pick one new spell, and they can also replace one of the spells they already knew – perhaps one they found they don't like as much as they thought they would.

The spell slot, which recharges after a long rest, can be used for any of the spells of that same level.

Spell Slots (PHB, Ch 10)

When a character casts a spell, he or she expends a slot of that spell's level or higher, effectively "filling" a slot with the spell. {snip}. So when Umara {a third-level sorcerer, for example} casts magic missile, a 1st-level spell, she spends one of her four 1st-level slots and has three remaining.

Note that the spell slot is what is expended, not the spell. The spell remains in the caster's mind; to cast it requires the slot. Finishing a long rest restores any expended spell slots (see chapter 8 for the rules on resting).

unity – How can it be that when I click on a RawImage it gets the right file name from the hard disk? I don't understand it

In this script, which runs only once, I'm getting all the files of type png from the hard disk:
``````using System.Collections;
using System.Collections.Generic;
using System.IO;
using UnityEngine;
using UnityEngine.Networking;
using UnityEngine.UI;

public class SavedGamesSlots : MonoBehaviour
{
    public GameObject saveSlotPrefab;
    public float gap;

    private Transform slots;
    private string[] imagesToLoad; // was missing: holds the PNG paths found on disk

    // Start is called before the first frame update
    void Start()
    {
        imagesToLoad = Directory.GetFiles(Application.dataPath + "/screenshots", "*.png");
        slots = GameObject.FindGameObjectWithTag("Slots Content").transform;

        for (int i = 0; i < imagesToLoad.Length; i++)
        {
            string fileName = imagesToLoad[i]; // was missing: full path of the current file

            var go = Instantiate(saveSlotPrefab);
            go.transform.SetParent(slots);

            // Note: this texture is created empty; the PNG pixels are never loaded here.
            Texture2D thisTexture = new Texture2D(100, 100);

            // This static field is overwritten on every loop iteration, so after
            // Start() finishes it holds the path of the last file in the array.
            MouseHover.savedGameFName = fileName;

            thisTexture.name = fileName;
            go.GetComponent<RawImage>().texture = thisTexture;
        }
    }
}
``````

On this line I reference a public static variable from another script:

``````MouseHover.savedGameFName = fileName;
``````

Then in the MouseHover script, which is called from each RawImage's event triggers, I also handle the mouse button down:

``````using System.Collections;
using System.Collections.Generic;
using System.IO;
using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.UI;

public class MouseHover : MonoBehaviour
{
    public RawImagePixelsChange rawImagePixelsChange;
    public static string savedGameFName;
    public static string folder;

    public void OnHover()
    {
        Debug.Log("Enter");
        rawImagePixelsChange.modifyPixels(0.3f);
        PlaySoundEffect();
    }

    public void OnHoverExit()
    {
        Debug.Log("Exit");
        rawImagePixelsChange.restorePixels();
    }

    private void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            folder = Path.GetDirectoryName(savedGameFName);
        }
    }

    private void PlaySoundEffect()
    {
        transform.GetComponent<AudioSource>().Play();
    }
}
``````

At this line:

``````if (Input.GetMouseButtonDown(0))
``````

After clicking the left mouse button, I set a breakpoint on one of the lines inside and I see that the variable savedGameFName contains the full directory and file name of the clicked RawImage, for example: d://screenshots//screen_1920x1080_2021-01-26_18-56-46.png

But how does it know to assign the correct file name of the clicked RawImage to the variable savedGameFName? It's not a list or anything, just a public static string variable, and the only other place it is used is in the first script, which runs once when the game starts. I don't understand it. It's working fine, but I don't understand how.
# Graph f(x)=5*2^x

Exponential functions have a horizontal asymptote. Since 2^x approaches 0 as x goes to negative infinity, the equation of the horizontal asymptote is y = 0.

Horizontal Asymptote: y = 0
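A quick worked check of that asymptote (a standard limit computation, not part of the original page):

```latex
\lim_{x \to -\infty} 5 \cdot 2^{x}
  = 5 \cdot \lim_{x \to -\infty} 2^{x}
  = 5 \cdot 0
  = 0,
\qquad \text{so } y = 0 \text{ is the horizontal asymptote of } f(x) = 5 \cdot 2^{x}.
```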
JAIMIE VERNON – A PIECE OF THE ROCK

Posted in Opinion on May 23, 2015 by segarini

The demise of Rock And Roll has been greatly exaggerated by the likes of Gene 'I don't know how to work an iPad' Simmons. That's not to say we shouldn't have it on suicide watch and coax it back from the ledge where it's been hanging by its fingertips since Axl Rose delivered 'Chinese Graffiti' and not 'Appetite For Destruction 2'.

JAIMIE VERNON – I DO THE ROCK

Posted in Opinion on November 8, 2014 by segarini

Jaimie Does the Rock
# Group 6

Welcome to the Wiki page of group six of the course Mobile Robot Control 2020! On this Wiki, we will introduce you to the challenges we faced when trying to control an autonomous robot and the solutions we've come up with to tackle these challenges. This page can be used as a reference when designing your own robot software, or as an idea on how to approach this course from an organisational perspective.

The goal of this course is to design the software that allows the autonomous robot PICO to first of all escape an escape room and, eventually, to be able to navigate a hospital setting in the presence of unknown disturbances. The shape of the escape room is known to be rectangular, but its size is completely unknown to the robot, as are its initial position and orientation with respect to the exit. For the hospital challenge, a rough floor plan is provided beforehand, but the rooms and corridors are filled with static objects. In order to mimic people in the hospital, unpredictable dynamic objects are also present. In a practical setting, designing software to tackle these issues would allow a robot similar to PICO to assist nurses and doctors in a hospital by performing small tasks, such as transporting medicine and tools, while being safe to work around at all times. This means that the software we were to design had to be robust (even in the presence of unknown, possibly moving objects), efficient (to avoid clutter in the hospital) and user-friendly (to make sure that people can understand and predict the robot and its actions).

This page is roughly split into two parts, the first being the relatively simple code required to traverse the escape room, and the second a description of the more elaborate code for the Hospital Challenge. Within each part, the general code structure is first presented, after which a strategy is proposed in the form of a Finite State Machine. Then, with the foundations of the code laid, we go more into depth in explaining each component. In short, for the Escape Room Challenge, we decided to detect walls and edges directly from laser range finder (LRF) data, from which a distinction could be made between walls of the room and walls of the exit corridor. Subsequently, a target point was calculated along one of the observed walls and the robot was steered towards this target point. For the Hospital Challenge, we used a feature-based Monte Carlo particle filter working with convex and concave edges to localise the robot in the hospital. With the known information about the shape of the hospital and the robot's position, a computationally light, optimal A* algorithm based on a network of efficient, pre-defined waypoints was designed, tasked with navigation between the rooms. Finally, a potential-field based collision avoidance algorithm was implemented to ensure safety for the robot and its surroundings. At the end of each part, the performance during the live demonstrations is shown and commented upon, accompanied by recommendations and improvements.

# Group Members

Students (name, id nr):

Joep Selten, 0988169
Emre Deniz, 0967631
Aris van Ieperen, 0898423
Stan van Boheemen, 0958907
Bram Schroeders, 1389378
Pim Scheers, 0906764

# Design Document

At the start of the project, all customer specifications and preferences were rewritten into our own words, in order to have a comprehensible and concise overview of the important functions our software needed to have.
The most important requirements are, as mentioned before, usefulness, reliability and ease of use, which are explained in more detail in the Design Document. In this document, research into the bare functionality of PICO is also included, from which the inputs and outputs of our software were derived. In short, PICO measures the distance to surrounding objects at 1000 points using an LRF, and monitors the movement of its omniwheels. Actuating PICO is done through three signals, being a longitudinal, lateral and rotational velocity, all three of which can be independently actuated through those omniwheels.

Regarding the overall software structure, there are relatively few differences between both challenges. In both cases, the environment needs to be interpreted, decisions have to be made on what to do, and these actions have to be performed. Based on this insight, an information architecture was developed for the Escape Room Challenge, which was extended and adapted for the Hospital Challenge. This information architecture contains the general software structure, the most important components and their key functionalities. It was used as a basis for the software design and task division. The Design Document, which describes the design requirements, specification, components, functions and interfaces, can be found here.

# Escape Room Challenge

The Escape Room Challenge required PICO to escape a room with limited prior knowledge of the environment. The information architecture of the embedded software was designed simultaneously with the design document, the main components being: PERCEPTION, WORLD MODEL, MONITOR & STRATEGY and CONTROL. PERCEPTION is tasked with the interpretation of the sensor signals, the WORLD MODEL stores all necessary data that needs to be memorised, and MONITOR assesses the information in the world model, to allow STRATEGY to make the right decisions. Finally, CONTROL calculates the outputs of the software system to have the robot perform the tasks desired by STRATEGY.

## Information architecture

Information architecture of the software during the escape room challenge

## Monitor and strategy

The goal of MONITOR is to map the current situation into discrete states using information about the environment. For the escape room, four different conditions are monitored, namely whether a wall, a gap, a corner or an exit wall is found in PERCEPTION. STRATEGY controls a Finite State Machine, shown in the figure below, that is used to determine which action CONTROL should take. The discrete states from MONITOR are used as the guards of this finite state machine. When the state machine is in the state FindWall, CONTROL gets the objective to move until a wall is detected. In the state FollowWall, CONTROL follows the wall which is closest to the robot. From FollowWall it can either go to GoToGap, when a gap is detected, or to CrossToWall. In CrossToWall, the objective for CONTROL is to follow the wall that is perpendicular to the wall it is currently following. This way the corner is cut off. When a gap is detected PICO goes directly to this gap, and when it recognizes the finish it drives to the finish.

Finite State Machine used in the escape room challenge.
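To make the state/guard structure concrete, a minimal sketch of such an FSM is given below. The state names follow the figure; the guard names and transition details are our own reading of the description, not the group's actual code.

```cpp
// States of the escape room FSM, as in the figure above.
enum class State { FindWall, FollowWall, CrossToWall, GoToGap, GoToFinish };

// Guards produced by MONITOR each cycle (illustrative names).
struct Guards {
    bool wallFound, cornerFound, gapFound, finishFound;
};

// One FSM update: map the current state plus guards to the next state.
State step(State s, const Guards& g) {
    switch (s) {
        case State::FindWall:    return g.wallFound ? State::FollowWall : s;
        case State::FollowWall:  if (g.gapFound)    return State::GoToGap;
                                 if (g.cornerFound) return State::CrossToWall;
                                 return s;
        case State::CrossToWall: return State::FollowWall; // continue along the perpendicular wall
        case State::GoToGap:     return g.finishFound ? State::GoToFinish : s;
        case State::GoToFinish:  return s; // terminal: drive out of the exit
    }
    return s;
}
```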
## Perception

The objective of the Escape Room Challenge is finding and driving out of the exit. To be able to achieve this, the robot should recognize the exit and find its location, which is the main objective concerning PERCEPTION. For this challenge, the features of the room are stored in the WORLD MODEL in coordinates local to PICO. First of all, unusable data points of the LRF sensor have been filtered out. The unusable data points consist of laser beams which do not lie within the required range of the LRF (which is between 0.2 and 10 meters). A line detection and an edge detection functionality have been implemented in order to detect the walls of the room in local coordinates. This way, at each time step, the begin point, the end point, and the nearest point of the walls can be observed by PICO. The exit is determined by observing a gap between line segments. When this is the case, the exit walls are added to the WORLD MODEL separately. PICO constantly processes the data in order to recognize an exit. A more detailed description of the line, edge and gap (exit) detection is given below:

• Line detection: the LRF data consists of 1000 points, each with a range value, which is the absolute distance to PICO. The line detection function loops over the data and calculates the absolute distance between two neighboring data points. When the distance exceeds the value d_gap, the line segment is separated.
• Edge detection [3]: the line detection algorithm only detects whether data points have a large distance relative to each other. The edge detection function detects whether the line segments (which result from the line detection) contain edges. The basic idea of the implemented algorithm can be seen in the figure below. The line segment in that figure has a starting data point A and an end point B. A virtual line AB is then drawn from point A to B. Finally, the distance d_edge from the data points C_i, which lie inside the segment, to the virtual line AB is calculated. The point with the largest value of d_edge can be considered an edge.

With the ability to observe and locate walls, gaps can be easily detected. The basic idea of this gap detection algorithm is that the robot looks for large distances between subsequent lines. The threshold for this difference can be tuned in order to set the minimum gap size. The WORLD MODEL not only stores the local coordinates of gaps, but also the exit walls. The function gapDetect in the class PERCEPTION is responsible for storing both the gaps and the exit walls in the WORLD MODEL. The visualization beneath shows the localization of a gap in a room. The bright red circle represents the set-point towards which PICO will drive. This set-point contains a small adjustable margin which prevents collisions with nearby walls.

Gap detection in custom Escape room

• Adjustable parameter MIN_LINE_LENGTH, which sets the minimum number of data points for which we can define a line. With this implementation, stray data points will not be perceived as lines.
• Adjustable parameter MIN_GAP_SIZE, which sets the minimum gap size. When the gap size between two lines is lower than this value, everything inside that gap is ignored.
• Adjustable parameter GAP_MARGIN, which as previously mentioned adds a margin to the gap set-point.

With these features, a rather robust PERCEPTION component has been developed. The resulting performance can be seen in the recording below. The detected lines and gap have been visualized. Small gaps and lines which are present in this custom map are ignored.

Ignoring small gaps and short lines
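The group's actual edge detection is linked as snippet [3]; as a rough self-contained sketch of the two ideas just described (segment splitting on d_gap, then the largest perpendicular deviation d_edge from the virtual line AB), one could write:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Point { double x, y; };

// Split an LRF scan into segments wherever two neighbouring points are
// farther apart than d_gap (simplified version of the line detection above).
std::vector<std::vector<Point>> splitSegments(const std::vector<Point>& scan, double d_gap) {
    std::vector<std::vector<Point>> segments;
    std::vector<Point> current;
    for (const Point& p : scan) {
        if (!current.empty() &&
            std::hypot(p.x - current.back().x, p.y - current.back().y) > d_gap) {
            segments.push_back(current);
            current.clear();
        }
        current.push_back(p);
    }
    if (!current.empty()) segments.push_back(current);
    return segments;
}

// Find the interior point C_i with the largest perpendicular distance d_edge
// to the virtual line AB; if that distance exceeds a threshold, it is an edge.
std::size_t largestDeviation(const std::vector<Point>& seg, double& d_edge) {
    const Point& A = seg.front();
    const Point& B = seg.back();
    const double len = std::hypot(B.x - A.x, B.y - A.y);
    std::size_t best = 0;
    d_edge = 0.0;
    for (std::size_t i = 1; i + 1 < seg.size(); ++i) {
        // Perpendicular distance from seg[i] to line AB via the cross product.
        double d = std::fabs((B.x - A.x) * (A.y - seg[i].y) -
                             (A.x - seg[i].x) * (B.y - A.y)) / len;
        if (d > d_edge) { d_edge = d; best = i; }
    }
    return best;
}
```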
## World Model

The WORLD MODEL in the Escape Room Challenge stored the following features:

• segments: this member variable in the world model class contains every line segment in a vector. A Line struct has been added which stores the begin and end position index and coordinates. The coordinates can be stored with a Vec2 struct.
• gaps: this member variable in the world model class contains the perceived gaps in a vector. A Gap struct has been implemented which stores the gap coordinates (Coords), the gap coordinates including a margin (MCoords) and the gap size.
• exit_walls: this member variable contains the 2 exit walls in a vector. These walls are stored as the aforementioned Line struct.

Keep in mind that these features are renewed constantly during the operation of PICO.

## Control

In CONTROL, a main functionality is to drive towards a target. For this, the function GoToPoint() was created. This function allows PICO to drive towards a point in its local coordinates. The input is a vector which defines the point in local coordinates. Reference velocities are sent to the base in order to drive towards this point. Updating this point frequently makes sure that the robot has very limited drift, as the reference and thus the trajectory are continuously adjusted. The robot does not drive, and only turns, when the angle towards the point is too large; this angle is a tunable parameter.

For our strategy, it is necessary that PICO can follow a wall, hence a FollowWall() function was created. The FollowWall() function creates an input point (vector) for the GoToPoint() function. To create this point, two parameters are used: one for the distance from the wall to the destination point, and one for the distance from PICO to the destination point. Both are tunable parameters. With the use of some vector calculations, this point is created in local coordinates. The benefit of this method is that drift is eliminated, since the point is updated each time step. Also, PICO approaches the wall in a smooth curve, and the shape of this curve is easily tuned by altering the parameters. The following figure presents this approach.
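As an illustration, such a set-point could be computed as follows. Note that the group's actual implementation parametrises by the distance from PICO to the set-point, whereas this sketch uses the simpler distance along the wall from its nearest point; all names are ours.

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// Simplified FollowWall-style set-point in PICO's local frame (robot at the
// origin): start from the wall point nearest to the robot, step d_ahead along
// the wall and offset d_wall away from it. The resulting point is fed to
// GoToPoint() and recomputed every time step, which eliminates drift.
Vec2 followWallTarget(Vec2 nearest, Vec2 wallDir, double d_wall, double d_ahead) {
    double len = std::hypot(wallDir.x, wallDir.y);
    Vec2 t{ wallDir.x / len, wallDir.y / len }; // tangent along the wall
    Vec2 n{ -t.y, t.x };                        // normal; sign depends on PICO's side
    return Vec2{ nearest.x + d_ahead * t.x + d_wall * n.x,
                 nearest.y + d_ahead * t.y + d_wall * n.y };
}
```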
## Challenge

On May 13th the Escape Room Challenge was held, where the task was to exit a simple rectangular room through its one exit without any prior information about the room. We had prepared two branches of code, to allow ourselves to have a backup. With the software described in the previous sections, the first attempt showed behavior which was very close to the video below. Unfortunately, when the robot turned its back towards the wall it should be following, it got stuck in a loop which it could not escape. From the terminal we could read that the robot remained in a single state, called FollowWall. However, its reference direction constantly changed.

Performance during the escape room challenge

The code for the second attempt, which omitted the use of the states GoToGap and GoToFinish, made use of two states only, being FindWall and FollowWall. This meant that the issue we faced in the first attempt was still present in the new code, hence exactly the same behavior was observed. During the interview, it was proposed by our representative that the issue was a result of the robot turning its back to the wall, meaning that the wall behind it is not entirely visible. In fact, because the robot cannot see directly behind itself, the wall appears to be made out of two parts. During turning, the part of the wall closest to the robot (the part used by the FollowWall function) changes, hence the reference point changes position. Then, with the new reference point the robot turns again, making the other section of the wall closest, causing the robot to turn back and enter a loop.

During testing with the room that was provided after the competition, a different root cause of our problems was identified. As it turned out, the wall to the rear left of the robot almost vanishes when the robot is turning clockwise and its back is facing the wall, as can be seen in the left video above. This means that this wall no longer qualifies as a wall in the perception algorithm, hence it is not considered a reference wall anymore. The robot therefore considers the wall to its left as its reference, meaning that it should turn counterclockwise again to start moving parallel to that. At that point, the wall below it passes over the threshold again, once again triggering clockwise movement towards the exit. With this new observation about the reason the robot got stuck, which could essentially be reduced to the fact that the wall to be followed passed under the threshold, the first debugging step was to lower this threshold. Reducing it from 50 to 20 points allowed the robot to turn clockwise far enough that the portion of the wall towards the exit came closest and hence could be followed. This meant that the robot was able to drive towards the exit, and out of the escape room, without any other issues, as can be seen in the video below. All in all, it turned out that the validation we had performed before the actual challenge missed this specific situation where the robot was in a corner and had to turn more than 90 degrees towards the exit. As a result, we did not tune the threshold on the minimum number of points in a wall well enough, which was actually the only change required to have the robot finish the escape room.

Performance during the escape room challenge after the corrections.

# Hospital Challenge

With the Escape Room Challenge completed, the next part of the Wiki page considers the software designed for the Hospital Challenge. Here, the adaptations, new components and new functions within those components are put forward and explained. Additionally, a lot of testing was performed to validate the reliability of the designed software and apply improvements wherever necessary.

## Information Architecture

In order to finish the Hospital Challenge, we first created an information architecture. The basic structure is very similar to that of the Escape Room Challenge. The architecture is created in a logical manner: it first locates the robot in PERCEPTION, then stores this data in the world model, from which the strategy is determined and the robot is actuated through a control structure. The architecture consists of the following components:

• Config. Reader: Is able to parse JSON files which contain the points, walls and cabinets present within the map; can also parse JSON files containing each waypoint and its neighbors.
• Monitor: Monitors the discrete state of the robot.
• Strategy: Determines the supervisory actions necessary to achieve the targets.
• Perception: Localizes PICO in the global map using a Monte Carlo particle filter; detects unknown objects in the global map.
• World Model: Stores the global map and local map; stores the path list and waypoint link matrix.
• Visualization: Contains plot functions meant for debugging several components; contains the mission control visualization used for the final challenge.
• Control: Actuates the robot to reach the current target specified by Strategy. Consists of global and local path planning methods.

The following figure shows the functions, functionalities and states within each component.

Improved information architecture which has been used for the hospital challenge.

In comparison to the Escape Room Challenge, the information architecture has not changed much. The largest difference is the addition of two components: the Visualization and the Config. Reader. The visualization became crucial in the debugging phase of several functionalities and components within the software architecture, while the config. reader helped improve the A* path planning and made it possible to load several maps for testing. In the following sub-chapters, the main functionalities of each component are explained and discussed.

## Monitor

As for the escape room challenge, the goal of MONITOR for the Hospital Challenge is to map the current situation into discrete states using information about the environment. For the hospital, ten different conditions are monitored. First of all, MONITOR analyses whether a mission is loaded by inspecting the mission array. The mission is loaded if this array is not empty. For the localization, three conditions are monitored. The first condition is whether the position is confirmed, and the second condition is whether enough edges have been used for the localization. This should be at least the threshold of three edges, as this is the minimum to constrain the three degrees of freedom. The third condition that is monitored for the localization is how many iterations PICO has made in finding its position. If the number of iterations is above the threshold, it could be that it is not possible to align in the current position. When PICO is driving to the next cabinet, there are four conditions monitored by MONITOR. One of the conditions is whether the path should be updated. For this it is monitored whether an object is detected, and if so, whether the object obstructs the path. The path consists of several waypoints, and it is checked whether a waypoint is reached. When a waypoint is reached, a timer is set as well. This gives PICO five seconds to get to the next waypoint. If PICO is not able to reach the next waypoint within these five seconds, the path is most likely blocked. The last waypoint of a path is a cabinet. Whether a cabinet is reached is the last condition that is monitored while driving to the next cabinet. When PICO is at a cabinet, there are two conditions monitored by MONITOR. The first condition is whether the pickup is completed, and the second condition is whether it was the last cabinet of the mission. In the latter case the mission is finished.
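The ten monitored conditions can be thought of as a set of boolean guards that MONITOR refreshes every cycle. The sketch below only illustrates this grouping; the names are ours, not the group's code.

```cpp
// Guards refreshed by MONITOR every cycle (illustrative names).
struct HospitalGuards {
    // Mission
    bool missionLoaded;     // mission array is not empty
    // Localization
    bool positionConfirmed; // pose is confirmed
    bool enoughEdges;       // >= 3 edges used (three degrees of freedom)
    bool iterationLimit;    // too many localization iterations
    // Driving to the next cabinet
    bool pathBlocked;       // detected object obstructs the current path
    bool waypointReached;
    bool waypointTimeout;   // next waypoint not reached within 5 s
    bool cabinetReached;
    // At the cabinet
    bool pickupCompleted;
    bool lastCabinet;       // mission finished after this cabinet
};
```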
## Strategy

The Hospital Challenge is tackled with the following Finite State Machine (FSM), which is implemented in STRATEGY. The guards are implemented in MONITOR.

Init
The first state of PICO is the Init state. This is the initialization; here the mission is loaded. When the mission is loaded, the initialization is finished and PICO goes into the Localization state.

Localization
When PICO is in the Localization state, the localization is performed. When PICO confirms that it knows its position, it is checked whether the conditions for correct localization are met. If this is not the case, or when the localization fails, PICO goes to the LocalizationFailed state.

LocalizationFailed
In this state PICO makes a small rotation, after which it goes back into the Localization state.

DriveToCabinet
When the localization is confirmed, PICO goes to the DriveToCabinet state. In this state, a path is calculated and followed. While PICO is following the path, it keeps scanning the environment continuously in order to confirm its position and to detect objects. If PICO does not reach its next waypoint before the timer ends, or when the position uncertainty gets too high, it goes into the FailSafe state.

FailSafe
In the FailSafe, the link matrix is reset. Depending on the reason why the FailSafe state is entered, it will either go back into Localization or DriveToCabinet.

AtCabinet
When PICO is close to the cabinet it is heading to in the DriveToCabinet state, it goes into the AtCabinet state. In this state PICO aligns with the heading of the cabinet and performs the pickup. A snapshot of the LRF data is made, and in case the mission is completed the Finish state is reached. If the mission is not completed, the link matrix gets a soft reset. This means that only the links that have had a small increase in weight are reset. After this PICO goes back to the DriveToCabinet state.

Finish
The Finish state is the final state of PICO.

This FSM served as guidance throughout the development of the functions.

## Perception

The most important task which PERCEPTION has to complete is the localization of PICO. This is done using a 'Monte Carlo particle filter', chosen over other techniques because it is not limited to parametric distributions and is relatively simple to implement. Also, it outperforms other non-parametric filters such as the histogram filter [Jemmott et al., 2009]. One of the main challenges of the particle filter is the so-called ray-casting problem, i.e. determining what each particle with a random position and orientation on the map should see with its LRF. Due to the complexity of this problem and the limited time of this project, it was chosen to solve this problem using a feature-based sensor model. This approach tries to extract a small number of features from high-dimensional sensor measurements. An advantage of this approach is the enormous reduction of computational complexity [Thrun et al., 2006]. Also, a converging algorithm is used in case the particle filter algorithm initially fails to find the right position of PICO, which makes this global localisation algorithm extremely robust. Both the particle filter algorithm and the converging algorithm contain features that make the localisation robust against unknown objects. In addition, an object detection algorithm is implemented that converts unknown corners to known corners, making the localisation increasingly accurate during the challenge. The position of PICO is updated with odometry data in combination with these algorithms to account for uncertainties such as drift.

In this section, first an overview of the implementation of the localization algorithm is given. The explanation of this algorithm is then divided in two parts, the particle filter algorithm and the converging algorithm. Hereafter it is shown how the localization algorithm is used in combination with the odometry data. Then a more elaborate explanation is given of certain key steps of the localization algorithm, i.e. the probability function, the resampling function and the uncertainty function. Lastly, it is shown how unknown objects can be detected and used for a more accurate localization.

### Implementation

At the beginning of the Hospital Challenge, the position and orientation of PICO are unknown.
Based on its LRF data, PICO should be able to find its global position and orientation on the provided map. As the map is known, a Monte Carlo particle filter can be used, which is a particle filter algorithm used for robot localization. This algorithm can be summarized by the following pseudo code:

for all NUMBEROFPARTICLES do
1. Generate a particle with random position and orientation
2. Translate the global positions of the (known) corners to the local coordinate frame of the particle
3. Filter out the corners not within the LRF range of the particle
4. Calculate a probability by comparing the corners seen by the particle with the corners PICO sees
end
5. Weight the probabilities of all the particles to sum up to one
6. Resample the particles according to weight
7. Calculate the average of the remaining particles
8. Calculate the uncertainty by comparing the corners PICO sees with the corners the computed average position should see
9. Update the range where particles are generated according to this uncertainty

In order to validate the localisation of PICO, a simple test map was made. To also test robustness against unknown objects and object detection, the same test map was made with an object in the lower right corner.

### General PF algorithm

Resampling particles with the highest probability.

In step (1), particles are created using a random generator. The initial range where these particles are generated is given by user input, i.e. the starting room of the hospital. Then in step (2), all the features in the map, of which the global positions are known, are translated to the local coordinate frame of the generated particle. As an edge (or corner) detection function had already been implemented during the Escape Room Challenge, it was decided to use these corners as features. An alternative would be to use the walls as features. However, the difficulty with walls is that they are often only partly visible, while the corners are either fully visible or not at all. In addition, a distinction is made between convex and concave corners, which gives PICO more information to find its global position. This required updating the edge detection function used in the escape room challenge to make this distinction. In step (3), the features the particle would not be able to see if it had the same LRF as PICO are filtered out. This is needed to make the comparison to what PICO sees with its LRF. As PICO is to be positioned within three degrees of freedom, at least three visible corners are needed for proper localization. In step (4), this comparison is done with a probability function that returns the probability that the sampled particle is at the same position and orientation as PICO. After all the particles have been assigned a probability, this probability is weighted in step (5) in order to have the total sum of all probabilities equal one. These weighted probabilities then represent a discrete probability distribution. From this distribution, the particles are resampled in step (6). After a few resamples, PICO's position and orientation are determined in step (7) as the average of the remaining particles. The number of particles generated and the number of resamples can be changed in the configuration and are fine-tuned by considering the trade-off between computational load and accuracy of the computed pose.
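As an illustration of steps (1)–(7), a stripped-down version of such a filter could look as follows. This is our own sketch: the field-of-view filtering of step (3) and the resampling of step (6) are omitted (resampling is shown separately further below), and all names are illustrative.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <vector>

struct Pose { double x, y, theta; };
struct Corner { double x, y; bool convex; };
struct Particle { Pose pose; double weight; };

double frand(double lo, double hi) {
    return lo + (hi - lo) * std::rand() / RAND_MAX;
}

// One pass of steps (1)-(7): sample particles around the current estimate,
// weight them by corner agreement, and return the weighted mean pose.
Pose particleFilterStep(const std::vector<Corner>& mapCorners,
                        const std::vector<Corner>& seenCorners, // in PICO's frame
                        Pose center, double range, int nParticles) {
    std::vector<Particle> particles;
    double total = 0.0;
    for (int i = 0; i < nParticles; ++i) {
        // (1) random particle within the current search range
        Pose p{center.x + frand(-range, range),
               center.y + frand(-range, range),
               center.theta + frand(-0.5, 0.5)};
        // (2)-(4) weight: product of reverse-exponential terms over the
        // distance from each seen corner (mapped to global coordinates via
        // the particle pose) to the nearest map corner of the same type.
        double w = 1.0;
        for (const Corner& s : seenCorners) {
            double gx = p.x + std::cos(p.theta) * s.x - std::sin(p.theta) * s.y;
            double gy = p.y + std::sin(p.theta) * s.x + std::cos(p.theta) * s.y;
            double best = 1e9;
            for (const Corner& m : mapCorners) {
                if (m.convex != s.convex) continue; // only compare corners of the same type
                best = std::min(best, std::hypot(m.x - gx, m.y - gy));
            }
            w *= std::exp(-best);
        }
        particles.push_back({p, w});
        total += w;
    }
    if (total <= 0.0) return center; // degenerate weights: keep the old estimate
    // (5)-(7) normalise and take the weighted mean as the new pose estimate.
    Pose est{0.0, 0.0, 0.0};
    for (const Particle& pt : particles) {
        est.x += pt.pose.x * pt.weight / total;
        est.y += pt.pose.y * pt.weight / total;
        est.theta += pt.pose.theta * pt.weight / total;
    }
    return est;
}
```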
On the right, a visualization of the localization using the particle filter is shown. It shows how, after a few resamples, the correct position of PICO is found. The yellow and white circles represent the convex and concave corners present in this map. The green and blue circles represent the convex and concave corners observed by PICO. The red circles represent the resampled particles.

### Converging algorithm

Converging to the right position when initially incorrect.

Step (7) concludes the standard Monte Carlo particle filter. However, to cope with errors in localization, a converging algorithm was added. The implementation of this algorithm starts in step (8) with obtaining a measure of uncertainty on the position determined in step (7). First, the local positions of the visible corners are translated to the global coordinate frame using the global position of PICO obtained after step (7). Then, the global positions of these convex and concave corners are compared to their actual (known) global locations on the map, by computing the distance between them. This quantity is then used as an uncertainty measure, as this value would be very small if all the seen corners were placed at the right locations. Then in step (9), this uncertainty measure is multiplied by a weight and used to update the range in which particles are generated around PICO's position when the PF algorithm reruns. This ensures that the global position of PICO always converges to the right position. This weight can be changed in the configuration and is fine-tuned by considering how aggressively the PF should react to uncertainty. When a high value is chosen, the computed position converges more quickly to the right position. However, this would make the localisation less robust against corners from unknown obstacles. On the right, a visualization is shown of a situation where PICO initially assesses its position at a wrong location. However, due to the uncertainty evaluated after the particle filter, it corrects and converges to the right position. This uncertainty is visualized as the yellow circle around PICO.

### Updating using odometry data

Updating location using odometry data.

When driving, the position and orientation of PICO are updated with the odometry data. When PICO drifts, the previously described uncertainty measure will increase, as the distance between the visible edges and the known edges increases. As a result, the range around PICO where particles are generated increases, from which the right position can again be recovered. As PICO sometimes drifts only in a certain direction, the particle range around PICO is shaped as an ellipse in order to focus more on the direction of larger drift.

### Probability function

A key part of determining whether a particle represents the correct position and orientation of PICO is implementing an efficient probability function. This function should compare the information generated for a particle with the information of PICO. The more these values match, the higher the probability of that particle. Because a feature-based sensor model is used, the information that is compared in this probability function is feature information. The features that are used are the corners of the walls. These corners contain information about their location and type, i.e. convex or concave. This information is compared with the use of two nested for loops: one over all the corners PICO observes and one over all the corners that the particle should observe. This means that all the observed corners of the particle are compared to all the observed corners of PICO. This is done as PICO cannot know which particular corner it sees.
Then, in this loop, all the corners are compared using a probability distribution. Several distributions were tested. The first guess was a Gaussian distribution. However, when PICO sees an unknown corner, a Gaussian could assign more probability to a particle that has a somewhat smaller distance over all corners on average than to a particle at the right position, which loses probability only because of that one unknown corner. Therefore, in order to give a bigger penalty to a few small distances than to one large distance (i.e. an unknown corner), a reverse exponential distribution is used. The inputs for this distribution are the x-location, y-location, orientation and distance of the corner. A general probability is then created by taking the product of these individual probabilities. When a convex and a concave corner are compared, a big penalty is given, resulting in a small probability for the considered particle. This prevents high probabilities for corners that in reality do not coincide.

### Resampling function

Resampling, graphic representation.

One of the main reasons the particle filter is computationally efficient is the resampling step. It is possible to implement this filter without this step; however, this would require a very large number of particles to obtain similar accuracy. After assigning each randomly generated particle a probability, all probabilities are weighted in order for the total probability to equal one. Then, the already generated particles are resampled according to this weight. If, for example, one very accurate particle gets a weighted probability of 50%, this means that during resampling this particle will be chosen as approximately half of the total number of particles. After resampling, the probability of each particle is recalculated according to the number of times it has been chosen. The particles that have not been chosen are removed. The probabilities of all the particles are again weighted so the total sum equals one. This resampling step can be done once or several times for each run of the PF algorithm. In [1], a code snippet of the implementation of this resampling algorithm is presented.
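As an illustration of what such a weighted resampling step can look like (a generic multinomial resampling sketch, not the snippet from [1]; it resets the weights uniformly rather than by occurrence count):

```cpp
#include <random>
#include <vector>

struct Particle { double x, y, theta, weight; };

// Draw n particles from the old set with probability proportional to weight.
std::vector<Particle> resample(const std::vector<Particle>& in, int n, std::mt19937& rng) {
    std::vector<double> weights;
    for (const Particle& p : in) weights.push_back(p.weight);
    // discrete_distribution normalises the weights internally.
    std::discrete_distribution<int> pick(weights.begin(), weights.end());
    std::vector<Particle> out;
    for (int i = 0; i < n; ++i) out.push_back(in[pick(rng)]);
    // Reset the weights so they again sum to one.
    for (Particle& p : out) p.weight = 1.0 / n;
    return out;
}
```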
### Uncertainty function

The last key feature of the localisation algorithm that deserves some extra elaboration is the uncertainty function. As explained before, this function calculates an uncertainty measure that is used to ensure that PICO always converges to the right position. This is done by first computing the distance between a corner that PICO should see at the position obtained from the particle filter algorithm and a corner that is actually observed. Then, by looping over all the corners PICO should see, the smallest distance to an observed corner is found. The average of these values is a measure for uncertainty: the smaller these deviations are, the more accurate the calculated position of PICO is. To be robust against unknown objects, any observed corners that do not coincide with known corners are neglected in the calculations. This is determined by taking the standard deviation of all the calculated distances and excluding, via a confidence interval, the corners whose deviation is too large. As the uncertainty measure is only reliable when 3 or more corners are observed, a constant value is taken when fewer than 3 corners are observed. This constant value should be neither too small nor too big. A too small uncertainty could result in PICO not being able to recover its position once it again sees 3 or more corners and could relocalise. A too big uncertainty could result in PICO trying to relocalise in a much bigger area than necessary, which risks an inaccurate localisation. Also, as PICO can still rely on its odometry data, this value can be taken relatively small. Below, a visualization is shown of an accurate localization while an unknown corner is visible.

### Object detection

Besides the need to be robust against unknown objects, they can also be used to our advantage. This requires an object detection functionality that makes the previously unknown corners known. By increasing the number of known corners and reducing the number of unknown corners, localisation becomes increasingly accurate during the challenge. This function only works when PICO is certain enough of its position, i.e. has a small uncertainty measure. When this condition is fulfilled, the corners that have a large deviation are added to an object array. Besides having a good certainty, the unknown corner should be observed multiple times. This is done to make sure PICO does not add objects that are not there, as this would work to our disadvantage. Also, this prevents PICO from adding dynamic objects. Below, a visualization is shown of an unknown corner in the bottom right of the room; after a while this corner is added to the global map. In the second visualization PICO uses this corner in the localisation algorithm, resulting in a more accurate localisation.

## World Model

The WORLD MODEL stores all of the information about the surroundings of PICO. It contains the following information:

• Local world model
1. line segments: the begin and end coordinates of each line segment are stored.
2. gaps: all of the gap coordinates, which are large enough, are stored (only used in the escape room challenge).
3. concave/convex edges: the coordinates of each edge are stored. A struct edge has been added, which also stores whether the edge is concave.
4. raw LRF and odometry data: each time step, the LRF and odometry data made available through PICO's sensors are updated in the World Model as well. This way the other components of the information architecture can retrieve the LRF and odometry data through the World Model.

• Global world model
1. line segments: again the begin and end coordinates of each line segment, but now in global coordinates.
2. concave/convex edges: again the coordinates of each edge, now in global coordinates.
3. current and previous position of PICO

• Trajectories
1. waypoints and cabinet locations: the waypoints, their neighbors and the cabinets are stored during initialization.
2. link matrix: the cost of each link (= path from one waypoint to another) is stored in a matrix.

• Mission information
1. runtime and refresh rate
2. current mission

The information stored in the world model is made available for all the other components to use. As can be seen in the information architecture, the World Model acts as the core where all information about PICO's environment is stored and updated continuously. During initialization, the config. reader stores the waypoints and map knowledge in the WORLD MODEL. This is done using two separate JSON input files: one for the waypoints [4] and one for the map knowledge. The config. reader component is a new component added after the escape room challenge.
This component can parse several types of files. For the Hospital Challenge, two parsers have been used:

1. parseConfigJson(): parses the supplied JSON file of the hospital map.
2. parseConfigJsonWaypoints(): parses the waypoint coordinates and neighbors [5].

## Control

The CONTROL algorithm is tasked with creating a path towards the cabinet and, following the current strategy, driving towards this cabinet. To create a global path for navigation, waypoints in combination with an A* algorithm are used. These waypoints are chosen in a strategic and computationally efficient way, allowing objects to be avoided but leaving out redundant positions. By giving the links between these points a weighting and strategically updating these weightings, the global path stays up to date. Additionally, in order to avoid hitting objects or doors, or cutting corners, a local/sensor-based path following is used with a potential field algorithm. This acts as a safety layer for PICO to avoid collisions. This section first elaborates on the choices made in the global path planning, and then on the local path following.

### Global path planning

In order for the robot to navigate its way through a roughly known area, it benefits from all prior information about the shape, size and dependencies of the different rooms and corridors. One common way of shaping this information in a tangible and concise manner is by gridding the entire space into smaller areas. The choices made for defining areas on the map are explained below in the section on waypoints and links. Consequently, these areas contain information about the presence of nearby walls or objects, allowing a path to be calculated from any position to any of the predefined targets. The processing of this information is explained in the section on detecting intersections, and the path calculation algorithm is explained in the A* section. Following this path is then a task for a low-level controller, which compares the position of the robot to the desired position on the path and asymptotically steers the error to zero.

#### Waypoints

Waypoints on map.

As we know the map and the targets, the choice was made to process all available data in a strategic and computationally efficient manner. Instead of gridding the entire area into equally sized squares or hexagons, a network of waypoints was introduced to capture all the relevant areas to which the robot could have to move to succeed in going from cabinet to cabinet. These waypoints, which are shown in the figure below, can be thought of as only those grid areas that are required to traverse the rooms and corridors, instead of all of them. It would not make sense to consider grid areas in a corner that the robot is either unable to ever reach, or that is very unlikely to be on the optimal path. Instead, waypoints are cleverly defined before and after doors, in the middle of rooms and corridors, and near cabinets. Because unknown objects may be encountered while driving through the hospital, multiple waypoints are placed around each room, allowing multiple routes to be taken between them. Moreover, as can be seen in the large vertical corridor in the figure below, in corridors multiple waypoints are placed next to each other for redundancy. For both rooms and corridors, the presence of multiple feasible options should be guaranteed, even in the presence of large, possibly dynamic objects.
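Such a network is essentially a weighted graph. As a rough sketch of the data involved (our own naming, not the group's code), it could be stored like this:

```cpp
#include <vector>

// Illustrative data structures for the waypoint network.
struct Waypoint {
    double x, y;                // global coordinates on the hospital map
    std::vector<int> neighbors; // indices of waypoints with a direct link
    bool isCabinet = false;     // true for the waypoints in front of cabinets
};

// Link cost matrix: entry (i, j) starts as the Euclidean length of link i-j
// and is incremented whenever the link is observed to be blocked.
using LinkMatrix = std::vector<std::vector<double>>;
```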
A total of 90 waypoints have been chosen to accurately and efficiently grid the entire hospital layout. Seven of the waypoints have a special purpose, namely that they represent the positions in front of the cabinets. Specifically, they indicate the position centered in front of the cabinet. These waypoints representing cabinets are accompanied by a file containing information on which waypoints represent which cabinets, and which heading is required to face the cabinet.

Incremental cost of links in the presence of a closed door and dynamic object.

#### Detecting intersections

Unlike the situation where the network of links is initialized, where use could be made of the pre-defined set of allowed links, a detection is required to find intersections between links and newly found objects. Consider the door of the example mentioned in the previous paragraph, where the door is defined as a line segment between two points. Since we know the exact start and end points of the links as well, we should be able to calculate whether or not the two lines intersect. A first approach, based on linear polynomials of the form y = ax + b, falls short in situations where nearly vertical lines are identified, or when the lines do in fact intersect whereas the line segments do not. Instead, a method was developed based on the definition of a line segment as p_i,start + u_i · (p_i,end − p_i,start), with u_i a number between 0 and 1. Then, two line segments i and j can be equated and a solution for u_i and u_j can be found. Barring some exceptional cases where the line segments are, for instance, collinear, it can be concluded that an intersection between both line segments only occurs when both 0 ≤ u_i ≤ 1 and 0 ≤ u_j ≤ 1. In our software, this approach was implemented by the code snippet [2]. This function was then called, comparing each link to each known line segment on an object. As described in the object detection chapter of this Wiki, the objects are stored as a set of line segments in global coordinates in the WORLD MODEL, ready to be used for detecting intersections.

The intersections between links and objects are not the only relevant case where intersections should be identified to avoid collisions or the robot getting stuck. Identifying a path using only waypoints and links is generally not sufficient, as the robot is not always exactly on a waypoint when a new path is calculated. Therefore, 'temporary links' may be introduced, originating from the robot's current position and ending in all waypoints, of which the feasibility is not known a priori. Consequently, the same intersection algorithm can be used to assess the feasibility of these temporary links, comparing them to known objects around the robot. In order to ensure that the robot does not try to drive along a link which is too narrow for PICO to follow, a 'corridor' of PICO's width is used for this intersection detection instead. Two additional temporary links are placed on both sides of the direct line segment between the robot and each waypoint, for which intersections are also checked. An example of a situation where this 'corridor' is required to avoid a deadlock is shown in the video below.

Path calculation with 'corridor' intersection detection to avoid impossible path.
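The actual implementation is linked as snippet [2]; in outline, the same parametric test can be written like this (a generic sketch, not the snippet itself):

```cpp
struct Vec2 { double x, y; };

// Parametric segment-segment test: solve
//   p1 + u1*(p2 - p1) = q1 + u2*(q2 - q1)
// and report an intersection iff 0 <= u1 <= 1 and 0 <= u2 <= 1.
// Parallel/collinear segments are simply skipped here, as in the text.
bool segmentsIntersect(Vec2 p1, Vec2 p2, Vec2 q1, Vec2 q2) {
    double rx = p2.x - p1.x, ry = p2.y - p1.y; // direction of segment i
    double sx = q2.x - q1.x, sy = q2.y - q1.y; // direction of segment j
    double denom = rx * sy - ry * sx;          // cross(r, s)
    if (denom == 0.0) return false;            // parallel or collinear
    double qpx = q1.x - p1.x, qpy = q1.y - p1.y;
    double u1 = (qpx * sy - qpy * sx) / denom; // cross(q1-p1, s) / cross(r, s)
    double u2 = (qpx * ry - qpy * rx) / denom; // cross(q1-p1, r) / cross(r, s)
    return 0.0 <= u1 && u1 <= 1.0 && 0.0 <= u2 && u2 <= 1.0;
}
```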
#### A* algorithm

With this definition and implementation of the waypoints and links, all relevant information for a global path planning algorithm is available. Two simple options come to mind, both with the guarantee that an optimal solution is found, namely Dijkstra's algorithm and the A* algorithm. They are very similar in the sense that they iteratively extend the most promising path with a new link, all the way until the path to the destination has become the most promising. The difference between the two algorithms lies in the assessment of 'most promising'. For Dijkstra's, the only considered measure of promise is the cost accumulated from the initial position of the robot. This results in an equal spread of the currently considered paths into all directions, resulting in a computationally inefficient solution. However, Dijkstra's algorithm is always guaranteed to yield an optimal solution, if one exists. The A* algorithm, on the other hand, not only considers the cost so far, but also takes into account an estimate of the cost from the currently considered waypoint to the destination. This yields a much quicker convergence to the optimal solution, especially in the presence of hundreds or even thousands of links. Computing the exact cost from each waypoint to each destination, however, would require an optimization procedure of its own, quickly losing all benefits over Dijkstra's approach. It turns out, though, that even a relatively inaccurate guess of this cost, often referred to as a heuristic, greatly benefits computational speed. In our application, this estimate is simply chosen to be the Euclidean distance, not taking into account walls or other objects. This could have been extended by incorporating some penalty in case the link does intersect a wall, but this did not seem like a large improvement and could have been detrimental to the stability of the optimization procedure. The advantage of using the Euclidean distance as the heuristic is that its value is always equal to or lower than the actually achieved cost, which is exactly the requirement for the A* algorithm to converge to the optimal solution. Note that this heuristic only needs to be calculated once, since all waypoints and destinations are known beforehand and the Euclidean distance never changes.

Soft link reset as a result of new cabinet target.

With the choice for A* as the solver and the availability of the waypoints, links and heuristic, the actual implementation of the global path planning is rather simple. The first iteration calculates the most promising path from the robot to any waypoint, taking collisions and PICO's width into account. Next, in each new iteration the current most promising path is extended into the most promising direction, while preventing waypoints from being visited twice. This iterative procedure only ends when the current most promising path has arrived at the destination cabinet, after which the route is saved into the WORLD MODEL.
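A compact sketch of this procedure is given below. It is our own simplified version, with blocked or non-existing links encoded as a very large cost and h the precomputed Euclidean-distance heuristic per waypoint; the group's actual implementation may differ in structure.

```cpp
#include <functional>
#include <queue>
#include <vector>

// A* over the waypoint network (illustrative sketch). 'cost' is the link
// matrix described above; 'h' holds the Euclidean distance from each
// waypoint to the destination (the admissible heuristic).
std::vector<int> aStar(const std::vector<std::vector<double>>& cost,
                       const std::vector<double>& h, int start, int goal) {
    const int n = static_cast<int>(cost.size());
    const double INF = 1e18;
    std::vector<double> g(n, INF); // best cost-so-far per waypoint
    std::vector<int> parent(n, -1);
    using Entry = std::pair<double, int>; // (g + h, waypoint)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;
    g[start] = 0.0;
    open.push({h[start], start});
    while (!open.empty()) {
        int u = open.top().second;
        open.pop();
        if (u == goal) break; // most promising path has reached the destination
        for (int v = 0; v < n; ++v) {
            if (cost[u][v] >= INF) continue; // no feasible link u -> v
            if (g[u] + cost[u][v] < g[v]) {  // found a cheaper path to v
                g[v] = g[u] + cost[u][v];
                parent[v] = u;
                open.push({g[v] + h[v], v});
            }
        }
    }
    std::vector<int> path; // walk parents back from the goal to reconstruct the route
    for (int v = goal; v != -1; v = parent[v]) path.insert(path.begin(), v);
    return path;
}
```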
Due to the fact that the path remains optimal for any point on the path, as stated by the principle of optimality, it does not need to be re-calculated at each time instant. Instead, a new path is only calculated if one of three situations arises, namely: a cabinet being reached with a new cabinet awaiting, the aforementioned detection of an object on the current path, or the robot entering a failsafe. Whenever the robot reaches a new cabinet and a new path should be calculated, a soft reset is placed on the cost of all links, meaning that the costs of links which have only briefly been blocked by an object are reset back to their original value, being their lengths. This distinguishes between static objects and doors on the one hand, which have caused the links they intersect to far exceed the soft reset threshold, and noise and dynamic objects on the other hand, which have only been identified briefly and therefore had less effect on the cost of the links. This soft reset ensures that the next calculated path will also be optimal, and not affected by temporary or noisy measurements. When an object is detected on the path, no reset needs to take place, as we specifically want the robot to find a new route around the object. Thirdly, when the failsafe is entered and a new path is required to be calculated, depending on the cause of the failsafe either a hard reset, forgetting all doors and objects, or a soft reset is performed. An example of a soft reset caused by the robot arriving at a cabinet and proceeding to drive to the next one is shown to the right.

#### Low level control

In order to closely follow the optimal path stored in the WORLD MODEL, a separate function is developed which is tasked with driving towards a point, in our case implemented in the GoToPoint() function. The functionality and approach of this function are similar to the function presented for the escape room challenge. However, for the hospital challenge, the global coordinates are provided as an input and a sensor-based path planning determines the base reference output values. This is described below, in the local path planning chapter.

### Local path planning - Potential field

Graphic representation of potential field

To avoid the robot bumping into objects, walls or doors, for example in the case of inaccurate localization, a potential field algorithm is implemented. This is a sensor-based strategy that determines in which direction to move: away from walls or objects and towards the desired coordinates. A potential force is created by the sum of an attractive force, coming from the target location, and a repulsive force field, coming from LRF data points around the robot. Every LRF point which is inside the potential field radius (a circle from the origin of the robot, with a tunable radius set to 70 cm) is taken into account when creating the repulsive force vector. The repulsive force of a point scales inverse-quadratically with the distance from the robot. The repulsive and attractive forces are tuned in such a way that the robot will not come closer to a wall or corner than the 'personal space' (a circle from the origin of the robot, with a tunable radius set to 10 cm plus half the width of PICO). A schematic is shown in the figure below. Choosing the scaling of the repulsive force (e.g. quadratic or cubic), as well as the parameters for the potential field and 'personal space', was done during testing. The formula chosen is F_repulsive = c · (d_LRF − (r_robot + r_personal))^(−2), with the constant c = 2·10^6, d_LRF the distance to the LRF point and r_robot the radius of the robot. The repulsive force is then split into its x- and y-components, as seen in the diagram above. The attractive field has a linear decay towards the goal. The combination of both fields is implemented with a cap on the forward and sideways speed of the maximum robot velocity, and a cap of 0.1 times the maximum speed for moving backwards. This combination of an inverse-quadratically scaling repulsive field and a linearly scaling attractive field seemed to give the smoothest and most robust results.
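A minimal sketch of such a force computation in the robot frame is given below. It is our own simplification: the constant and radii follow the text, but the attractive gain k_att and the velocity capping are illustrative assumptions.

```cpp
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

// Sum of a linear attractive force towards the target and inverse-quadratic
// repulsive forces from all LRF points within the potential field radius.
// r_safe is the robot radius plus the 'personal space' margin.
Vec2 potentialFieldForce(const std::vector<Vec2>& lrfPoints, Vec2 target,
                         double r_safe, double fieldRadius = 0.7,
                         double c = 2e6, double k_att = 1.0) {
    Vec2 F{ k_att * target.x, k_att * target.y }; // attractive pull (target in robot frame)
    for (const Vec2& p : lrfPoints) {
        double d = std::hypot(p.x, p.y);          // distance robot -> LRF point
        if (d > fieldRadius || d <= r_safe) continue; // outside field, or degenerate
        double mag = c / ((d - r_safe) * (d - r_safe)); // inverse-quadratic magnitude
        F.x -= mag * p.x / d;                     // push away from the point
        F.y -= mag * p.y / d;
    }
    return F; // to be capped at the maximum robot velocities before actuation
}
```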
A known and common problem with the potential field algorithm is possible local minima. However, in combination with the global path planning with a lot of waypoints and our strategy, local minima are not expected. The GIF below shows the robot driving towards a local point straight ahead, not bumping into any walls.

Driving forward with an active potential field.

## Visualization

The visualization is done with the help of the OpenCV library, which is available through ROS. The World Model supplies the VISUALIZATION component with information, which is translated to OpenCV objects (e.g. cv::Point, cv::Circle, etc.). Three different VISUALIZATION functions meant for debugging have been developed, being: plotLocalWorld(), plotGlobalWorld() and plotTrajectory(). Each function serves the purpose of helping to debug a specific part of the code. In the figure below, a simplified information architecture is shown, also indicating the parts of the code that the plotting functions were used to debug.

The locations within the code structure where the plotting functions are meant to help debugging.

• plotLocalWorld(): plots the components in the local map: local walls, local edges, and PICO's safe space.
• plotGlobalWorld(): plots the components in the global map. It plots in red the particles with the highest probability. It shows the area (in yellow) in which the particles are being placed. The walls and edges observed via the LRF data can also be seen, this time in global coordinates.
• plotTrajectory(): plots the trajectory points and links. The weight of each link is also shown in order to test the link breaking mechanism (e.g. when doors are closed in the hospital). The path which is chosen is highlighted, and the current waypoint to which PICO is navigating is highlighted in orange.

For the hospital challenge an additional plotting function has been written, called plotMission(). This visualization shows the user a nice overview of the mission, including:

• the cabinet to which PICO is driving,
• the speed of PICO in x, y and theta direction,
• the runtime and framerate,
• the list of cabinets to which PICO is driving or has driven,
• and a proud Mario.

## Validation

First, all parts were created separately and it was confirmed that they worked. During this process, different visualizations were made and used to debug the individual components, as explained in the previous section. These visualizations were also used to debug the integrated code when all parts were implemented. During implementation of all components, various problems were faced and resolved. This section sheds light on the most prevalent challenges.

The implementation of all components made it such that the code was not always able to run at the desired 20 Hz; sometimes it could not even reach 10 Hz. This caused problems in, for example, the potential field, which does not function properly when sensor data is only updated after a long time. Due to a hold of the base reference values, PICO could run into walls. By adding clock statements in between functions we were able to identify bottlenecks and make the code somewhat more efficient and faster. In order to make very sure PICO does not run into walls at a low code run speed, PICO's speed in the backup code was adjusted to the refresh rate (only in the backup code, because it is a competition and a visualization running into a wall has no big consequences).

Implementing localization together with global path planning was one of the bigger challenges.
Implementing localization together with global path planning was one of the bigger challenges. When the localization was not accurate, a lot of links would be broken, causing the robot to follow sub-optimal paths, or even get stuck. Soft resets and hard resets of the weights in the link matrix were added, as well as a failsafe for localization and path planning.

Different maps were created in order to test, alter and validate the code. The GIFs below show three of these maps and PICO successfully completing the given challenge.

Testmap 1. Testmap 2. Testmap 3.

## Challenge

During the final challenge, PICO did not finish. This was due to two small problems, one observed in the first run and the other in the second run.

The first problem was that PICO was close to the cabinet but did not go into the 'atCabinet' state. During testing, this was observed a few times; however, the robot was almost always able to converge its localization and get out of the position in which it was temporarily stuck. What happened is that the localization was just slightly off. As a result, global planning wanted the robot to move slightly further towards the cabinet, but the local potential field prevented the robot from moving forward any further. There are several fixes to this problem:

• (Recommended) Adjusting the measure of certainty needed for localization around the cabinet. This could be very accurate and is not invasive on the code. It would be an addition to the strategy.
• The potential field could be made less strict around the cabinet, combined with a lower speed. This, however, could be risky.
• Adjusting the config value which states how far the robot should be in front of the cabinet. This is a very quick fix, but it loses the guarantee that the robot is located within the 0.4x0.4 square in front of the cabinet before performing the pickup procedure.
• Local positioning based on sensors when close to the cabinet. This could be very precise and an elegant solution, but would require some new functions.

The second problem was a problem that never occurred before the challenge. We observed PICO wanting to drive to a waypoint that was directly behind a wall. It did not drive through the wall thanks to the potential field, but at this point PICO was stuck. There are multiple reasons why this happened and multiple options for how it could be fixed.

1. PICO should have observed the wall long before it was close to it and should have broken the link. The reason PICO did not show this behaviour is that the path weights are increased in increments that depend on the refresh rate. The refresh rate was very low, and PICO did not break the link before it was already driving towards the waypoint behind the wall. This can be resolved by either a higher and stable refresh rate, or by adding weight based on time instead of iterations (see the sketch below). (A very fast, though less elegant, way of resolving this is raising the increment parameter in the config file.) To show that breaking this link is no problem at a higher frame rate, the GIF below is given.

Breaking a link after observing the door.

2. The moment PICO did not reach the waypoint behind the wall within 10 seconds, the failsafe was activated. This reset all links to their original weight (hard reset) and made PICO create a new path. However, as this new path went via the waypoint it was very close to, PICO had no time to break the link before it was driving towards the waypoint behind the wall again. This infinite loop could be resolved by having the failsafe last a couple of seconds, during which PICO makes sure of its localization and has the time to add weights to the links in the link matrix, before calculating and following a new path.
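A minimal sketch of such time-based link weighting is given below; the class name, rate and threshold are hypothetical placeholders, not the actual code:

```cpp
#include <chrono>

// Time-based (rather than iteration-based) link weighting, as suggested
// in point 1 above.
class LinkWeight {
public:
    // Call once per iteration in which the link is observed to be blocked.
    void addObservation() {
        const auto now = std::chrono::steady_clock::now();
        if (hasLast_) {
            const double dt = std::chrono::duration<double>(now - last_).count();
            weight_ += rate_ * dt;   // weight grows per second, not per iteration
        }
        hasLast_ = true;
        last_ = now;
    }

    bool broken() const { return weight_ > threshold_; }

private:
    double weight_    = 0.0;
    double rate_      = 10.0;   // weight units per second (tunable, assumed)
    double threshold_ = 5.0;    // break the link above this weight (assumed)
    bool hasLast_     = false;
    std::chrono::steady_clock::time_point last_{};
};
```

With increments tied to wall-clock time, the time needed to break a link no longer depends on the frame rate.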
A recommendation following from this for the code as a whole, which would fix and prevent several problems: the code should either tune its parameters to time, not iterations, or run at a constant refresh rate which it can always reach. The latter would require making the code more efficient. After changing one config parameter, so that a link is broken more quickly, we tried the hospital challenge again and succeeded; the result is shown in the GIF below.

Hospital challenge.

# Some Final Words

Although we didn't complete the final challenge during the live event, we are still pleased with the result. The problems that occurred during the live event have been discussed, and solutions for them have been proposed in the previous chapter.

We have some tips for future groups on how to tackle this interesting course and make it as successful as possible:

• Make a detailed design document and keep using it during the course. Because the code will start to grow very fast, it is very useful to have a document that sketches the bigger picture. Also, invest some time early on in the information architecture, since this is the basis for your source code.
• When writing the code, try to do this from the finite state machine's perspective. Keep in mind what the state machine actually needs, and make your code according to these requirements.
• When coding as a group, try to make the distinction between the individual parts as clear as possible. Make good agreements on what a certain part of the code should deliver and how certain information is handled.

Lastly, we want to thank our tutor Wouter Kuijpers for all the advice and the fun meetings. Although the course is really time-consuming, it is a great learning experience and, in our opinion, a good asset to our coding careers.

# Code snippets

Implementation of the resampling algorithm: [1]

Collision detection between line segments: [2]

Edge detection in line segments algorithm: [3]

Waypoints JSON input file structure: [4]

Waypoints JSON input file parser: [5]

# Logs

This section contains information regarding the group meetings.

List of Meetings

Meeting 1 (Wednesday 29 April, 13:30). Chairman: Aris. Minute-taker: Emre. Introductory meeting, where we properly introduced ourselves. Discussed in general what is expected in the Design Document. Brainstormed solutions for the Escape Room challenge. Set up a division of tasks (Software Exploration/Design Document). Minutes

Meeting 2 (Wednesday 6 May, 11:30). Chairman: Emre. Minute-taker: Stan. Discussed V1 of the Design Document with Wouter. Devised a plan of attack for the escape room competition and roughly divided the workload into two parts (Perception + world model and Strategy + Control). Minutes

Meeting 3 (Monday 11 May, 11:00). Chairman: Stan. Minute-taker: Joep. Discussed what needs to be finished for the Escape Room challenge. Minutes

Meeting 4 (Friday 15 May, 9:00). Chairman: Joep. Minute-taker: Bram. Evaluated the escape room challenge and the group work so far.
Made agreements to improve the further workflow of the project. Minutes

Meeting 5 (Wednesday 20 May, 11:00). Chairman: Bram. Minute-taker: Pim. Discussed an approach for the hospital challenge. The first FSM was introduced and localization/visualization ideas were discussed. Minutes

Meeting 6 (Wednesday 27 May, 11:00). Chairman: Pim. Minute-taker: Aris. Discussed the progress of the implementation for the hospital challenge. Discussed difficulties with localization and object avoidance. Minutes

Meeting 7 (Wednesday 2 June, 13:00). Chairman: Aris. Minute-taker: Emre. Discussed the progress of the improved particle filter and suggestions on how to improve the map knowledge. Discussed what is of importance for the presentation on June 3rd. Minutes

Meeting 8 (Wednesday 5 June, 12:00). Chairman: Emre. Minute-taker: Stan. Evaluated the intermediate presentation and discussed the final steps for the hospital challenge. Minutes

Meeting 9 (Tuesday 9 June, 13:00). Chairman: Stan. Minute-taker: Joep. Discussed the final things that needed to be done for the hospital challenge. Minutes

Meeting 10 (Tuesday 16 June, 13:00). Chairman: Joep. Minute-taker: Bram. Evaluated the hospital challenge live event and divided the tasks regarding the wiki. Minutes
# Poking holes in perfection: the need for flatfield correction on a Mythen detector

Dr. Albrecht Petzold found, and has been experimenting with, this particular issue on his instrument: it looks like the Dectris Mythen detector signal can be cleaned up considerably by applying a flatfield correction.

While flatfield corrections have been discussed before with regard to wire detectors, we usually don't worry much about them for the Mythen detector on the slit-collimated Kratky instruments. We should have, though, since even for the Pilatus detectors a flatfield was found to be beneficial (I'm not sure I ever wrote that up). Dectris supplies flatfields for their detectors for a variety of energies, but the flatfield changes over time (several months) and changes with adjustment of the detector parameters (in particular with the proximity of the energy threshold to the measured energy). There are also other effects, such as incidence-angle inefficiencies, that cannot be predicted by Dectris, and therefore a flatfield is ideally collected in the final instrument configuration.

On his instrument, a SAXSess II nearly identical to ours, Albrecht tried using the fluorescence signal from our iron foil to measure the flatfield. If this works, it would be useful since the signal strength of the iron foil is quite good, requiring mere hours for a flatfield measurement. However, iron fluorescence is of a slightly different energy than the Cu K$\alpha$ energy we use for the measurements. Therefore, as a control measurement, the flat scattering of water was measured over a weekend to provide a second flatfield.

The corrections calculated from the measured signals (Figure 1) tell us that the deviations are significant (about +/- 5%), and that the two different methods do not provide the same flatfield. This means that if we correct for the flatfield, we should be able to reduce the uncertainties in our final measurement considerably. It also means that we (unfortunately) cannot rely on the fluorescence of iron to deliver our flatfield.

The discrepancies between the two are somewhat surprising to me, and I don't have a good explanation for them yet. I am not convinced they can be completely explained by the energy threshold effects, but maybe I'm wrong. So next up, as soon as I'm back from holiday, I'll get to measuring a flatfield on our own instrument. On that one, we can turn the detector 180 degrees as well, so we can verify the data to some degree (apparently this possibility has been removed by Anton Paar in their later version). Interesting times!
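To make the procedure concrete, here is a minimal sketch of what applying such a per-channel flatfield correction amounts to, assuming plain arrays of counts (hypothetical code, not the actual data-processing pipeline):

```cpp
#include <cstddef>
#include <vector>

// Per-channel flatfield correction:
//   correction[i] = mean(flat) / flat[i];  corrected[i] = raw[i] * correction[i].
// Normalising to the mean of the flatfield preserves the overall intensity scale.
std::vector<double> applyFlatfield(const std::vector<double>& raw,
                                   const std::vector<double>& flat) {
    double mean = 0.0;
    for (double f : flat) mean += f;
    mean /= static_cast<double>(flat.size());

    std::vector<double> corrected(raw.size());
    for (std::size_t i = 0; i < raw.size(); ++i)
        corrected[i] = (flat[i] > 0.0) ? raw[i] * (mean / flat[i]) : 0.0;
    return corrected;
}
```

Note that the counting statistics of the flatfield measurement itself propagate into the corrected data, which is one reason a strong, long flatfield measurement is preferable.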
# CoerceVectorMatrixPackage R

coerce(v) coerces a vector v with entries in Matrix R into a vector over Matrix Fraction Polynomial R

coerceP(v) coerces a vector v with entries in Matrix R into a vector over Matrix Polynomial R
# Direct Products

Definition. Let G and H be groups. The direct product of G and H is the set $G \times H$ of all ordered pairs $(g, h)$, where $g \in G$ and $h \in H$, with the operation

$$(g_1, h_1)(g_2, h_2) = (g_1 g_2, h_1 h_2).$$

Remarks.

1. In the definition, I've assumed that G and H are using multiplication notation. In general, the notation you use in $G \times H$ depends on the notation in the factors.

2. You can construct products of more than two groups in the same way. For example, if $G_1$, $G_2$, and $G_3$ are groups, then $G_1 \times G_2 \times G_3$ consists of the ordered triples $(g_1, g_2, g_3)$. Just as with the two-factor product, you multiply elements componentwise.

Example. (A product of cyclic groups which is cyclic) Show that $\mathbb{Z}_2 \times \mathbb{Z}_3$ is cyclic.

Since $|\mathbb{Z}_2| = 2$ and $|\mathbb{Z}_3| = 3$, $|\mathbb{Z}_2 \times \mathbb{Z}_3| = 6$. If you take successive multiples of $(1, 1)$, you get

$$(1, 1), \ (0, 2), \ (1, 0), \ (0, 1), \ (1, 2), \ (0, 0).$$

Since you can get the whole group by taking multiples of $(1, 1)$, it follows that $\mathbb{Z}_2 \times \mathbb{Z}_3$ is actually cyclic of order 6 --- the same as $\mathbb{Z}_6$.

Example. (A product of cyclic groups which is not cyclic) Show that $\mathbb{Z}_2 \times \mathbb{Z}_2$ is not cyclic.

Since $|\mathbb{Z}_2 \times \mathbb{Z}_2| = 4$, here's the operation table:

+       (0,0)  (1,0)  (0,1)  (1,1)
(0,0)   (0,0)  (1,0)  (0,1)  (1,1)
(1,0)   (1,0)  (0,0)  (1,1)  (0,1)
(0,1)   (0,1)  (1,1)  (0,0)  (1,0)
(1,1)   (1,1)  (0,1)  (1,0)  (0,0)

Note that this is not the same group as $\mathbb{Z}_4$. Both groups have 4 elements, but $\mathbb{Z}_4$ is cyclic of order 4. In $\mathbb{Z}_2 \times \mathbb{Z}_2$, all the non-identity elements have order 2, so no element generates the group. $\mathbb{Z}_2 \times \mathbb{Z}_2$ is the same as the Klein 4-group V, whose operation table matches the one above.

If G and H are finite, then $|G \times H| = |G| \cdot |H|$. (This is true for sets G and H; it has nothing to do with G and H being groups.) For example, $|\mathbb{Z}_2 \times \mathbb{Z}_3| = 2 \cdot 3 = 6$.

Lemma. The product of abelian groups is abelian: If G and H are abelian, so is $G \times H$.

Proof. Suppose G and H are abelian. Let $(g_1, h_1), (g_2, h_2) \in G \times H$, where $g_1, g_2 \in G$ and $h_1, h_2 \in H$. I have

$$(g_1, h_1)(g_2, h_2) = (g_1 g_2, h_1 h_2) = (g_2 g_1, h_2 h_1) = (g_2, h_2)(g_1, h_1).$$

This proves that $G \times H$ is abelian.

Remark. If either G or H is not abelian, then $G \times H$ is not abelian. Suppose, for instance, that G is not abelian. This means that there are elements $g_1, g_2 \in G$ such that $g_1 g_2 \ne g_2 g_1$. Then

$$(g_1, 1)(g_2, 1) = (g_1 g_2, 1) \quad\text{and}\quad (g_2, 1)(g_1, 1) = (g_2 g_1, 1).$$

Since $g_1 g_2 \ne g_2 g_1$, it follows that $(g_1, 1)(g_2, 1) \ne (g_2, 1)(g_1, 1)$, so $G \times H$ is not abelian. A similar argument works if H is not abelian.

Example. (A product of an abelian and a nonabelian group) Construct the multiplication table for $\mathbb{Z}_2 \times S_3$. (Recall that $S_3$ is the group of symmetries of an equilateral triangle.)

The number of elements is $|\mathbb{Z}_2 \times S_3| = 2 \cdot 6 = 12$.

The operation in $\mathbb{Z}_2$ is addition mod 2, while the operation in $S_3$ is written using multiplicative notation. When you multiply two pairs, you add in $\mathbb{Z}_2$ in the first component and multiply in $S_3$ in the second component:

$$(a, \sigma)(b, \tau) = (a + b, \sigma\tau).$$

The identity is $(0, \text{id})$, since 0 is the identity in $\mathbb{Z}_2$, while id is the identity in $S_3$. $\mathbb{Z}_2 \times S_3$ is not abelian, since $S_3$ is not abelian.

Example. (Using products to construct groups) Use products to construct 3 different abelian groups of order 8.

The groups $\mathbb{Z}_8$, $\mathbb{Z}_4 \times \mathbb{Z}_2$, and $\mathbb{Z}_2 \times \mathbb{Z}_2 \times \mathbb{Z}_2$ are abelian, since each is a product of abelian groups. $\mathbb{Z}_8$ is cyclic of order 8, $\mathbb{Z}_4 \times \mathbb{Z}_2$ has an element of order 4 but is not cyclic, and $\mathbb{Z}_2 \times \mathbb{Z}_2 \times \mathbb{Z}_2$ has only elements of order 2 besides the identity. It follows that these groups are distinct.

In fact, there are 5 distinct groups of order 8; the remaining two are nonabelian. The group $D_4$ of symmetries of the square is a nonabelian group of order 8. The fifth (and last) group of order 8 is the group Q of the quaternions. $D_4$ and Q are not the same as $\mathbb{Z}_8$, $\mathbb{Z}_4 \times \mathbb{Z}_2$, or $\mathbb{Z}_2 \times \mathbb{Z}_2 \times \mathbb{Z}_2$, since those are abelian while $D_4$ and Q are not. Finally, $D_4$ is not the same as Q. $D_4$ has 5 elements of order 2: the four reflections and rotation through $180^\circ$. Q has one element of order 2, namely $-1$.

I've shown that these five groups of order 8 are distinct; it takes considerably more work to show that these are the only groups of order 8.

Definition. Let m and n be positive integers. The least common multiple $[m, n]$ of m and n is the smallest positive integer divisible by m and n.

Remark. Since $mn$ is divisible by m and n, the set of positive common multiples of m and n is nonempty. Hence, it has a smallest element, by well-ordering. It follows that the least common multiple of two positive integers is always defined.
For example, $[4, 6] = 12$.

Lemma. If s is a common multiple of m and n, then $[m, n] \mid s$.

Proof. By the Division Algorithm, $s = q \cdot [m, n] + r$, where $0 \le r < [m, n]$. Thus, $r = s - q \cdot [m, n]$. Since $m \mid s$ and $m \mid [m, n]$, I have $m \mid r$. Since $n \mid s$ and $n \mid [m, n]$, I have $n \mid r$. Therefore, r is a common multiple of m and n. Since it's also less than the least common multiple $[m, n]$, it can't be positive. Therefore, $r = 0$, and $s = q \cdot [m, n]$, i.e. $[m, n] \mid s$.

Remark. The lemma shows that the least common multiple is not just "least" in terms of size. It's also "least" in the sense that it divides every other common multiple.

Theorem. Let m and n be positive integers. Then

$$[m, n] = \frac{mn}{(m, n)}.$$

Proof. I'll prove that each side is greater than or equal to the other side.

Note that $\frac{m}{(m, n)}$ and $\frac{n}{(m, n)}$ are integers. Thus,

$$\frac{mn}{(m, n)} = m \cdot \frac{n}{(m, n)} = n \cdot \frac{m}{(m, n)}.$$

This shows that $\frac{mn}{(m, n)}$ is a multiple of m and a multiple of n. Therefore, it's a common multiple of m and n, so it must be greater than or equal to the least common multiple. Hence,

$$[m, n] \le \frac{mn}{(m, n)}.$$

Next, $[m, n]$ is a multiple of n, so $[m, n] = ns$ for some s. Then

$$\frac{mn}{[m, n]} = \frac{mn}{ns} = \frac{m}{s}, \quad\text{so}\quad m = \frac{mn}{[m, n]} \cdot s.$$

(Why is $\frac{mn}{[m, n]}$ an integer? Well, $mn$ is a common multiple of m and n, so by the previous lemma $[m, n] \mid mn$.) Similarly, $[m, n]$ is a multiple of m, so $[m, n] = mt$ for some t. Then

$$\frac{mn}{[m, n]} = \frac{mn}{mt} = \frac{n}{t}, \quad\text{so}\quad n = \frac{mn}{[m, n]} \cdot t.$$

In other words, $\frac{mn}{[m, n]}$ is a common divisor of m and n. Therefore, it must be less than or equal to the greatest common divisor:

$$\frac{mn}{[m, n]} \le (m, n), \quad\text{i.e.}\quad \frac{mn}{(m, n)} \le [m, n].$$

The two inequalities I've proved show that $[m, n] = \frac{mn}{(m, n)}$.

Example. Verify the theorem for $m = 12$ and $n = 18$. Here $(12, 18) = 6$, $[12, 18] = 36$, and $\frac{12 \cdot 18}{6} = 36$.

Proposition. The element $(1, 1)$ has order $[m, n]$ in $\mathbb{Z}_m \times \mathbb{Z}_n$.

Proof. First, $[m, n] \cdot (1, 1) = ([m, n], [m, n]) = (0, 0)$: the first component is 0, since $[m, n]$ is divisible by m; the second component is 0, since $[m, n]$ is divisible by n.

Next, I must show that $[m, n]$ is the smallest positive multiple of $(1, 1)$ which equals the identity. Suppose $k \cdot (1, 1) = (0, 0)$, so $(k, k) = (0, 0)$. Consider the first components: $k = 0$ in $\mathbb{Z}_m$ means that $m \mid k$; likewise, the second components show that $n \mid k$. Since k is a common multiple of m and n, it must be greater than or equal to the least common multiple: that is, $k \ge [m, n]$. This proves that $[m, n]$ is the order of $(1, 1)$.

Example. Find the order of $(3, 0)$ in $\mathbb{Z}_6 \times \mathbb{Z}_5$. Find the order of $(1, 1)$.

The element $(3, 0)$ has order $[2, 1] = 2$. On the other hand, the element $(1, 1)$ has order $[6, 5] = 30$. Since $(1, 1)$ has order 30, the group is cyclic; in fact, $\mathbb{Z}_6 \times \mathbb{Z}_5 \approx \mathbb{Z}_{30}$.

Remark. More generally, consider $G_1 \times \cdots \times G_n$, and suppose $a_i$ has order $m_i$ in $G_i$. (The $G_i$'s need not be cyclic.) Then $(a_1, \ldots, a_n)$ has order $[m_1, \ldots, m_n]$.

Corollary. $\mathbb{Z}_m \times \mathbb{Z}_n$ is cyclic of order $mn$ if and only if $(m, n) = 1$.

Note: In the next proof, "$(a, b)$" may mean either the ordered pair or the greatest common divisor of a and b. You'll have to read carefully and determine the meaning from the context.

Proof. If $(m, n) = 1$, then $[m, n] = \frac{mn}{(m, n)} = mn$. Thus, the order of $(1, 1)$ is $mn$. But $\mathbb{Z}_m \times \mathbb{Z}_n$ has order $mn$, so $(1, 1)$ generates the group. Hence, $\mathbb{Z}_m \times \mathbb{Z}_n$ is cyclic.

Suppose on the other hand that $(m, n) > 1$. Then $[m, n] = \frac{mn}{(m, n)} < mn$.

Now consider an element $(a, b) \in \mathbb{Z}_m \times \mathbb{Z}_n$. Let p be the order of a in $\mathbb{Z}_m$ and let q be the order of b in $\mathbb{Z}_n$. Since $p \mid m$ and $m \mid [m, n]$, I may write $[m, n] = pj$ for some j. Since $q \mid n$ and $n \mid [m, n]$, I may write $[m, n] = qk$ for some k. Then

$$[m, n] \cdot (a, b) = ([m, n] \cdot a, [m, n] \cdot b) = (jp \cdot a, kq \cdot b) = (0, 0).$$

Hence, the order of $(a, b)$ is less than or equal to $[m, n]$. But $[m, n] < mn$, so the order of $(a, b)$ is less than (and not equal to) $mn$. Since $(a, b)$ was an arbitrary element of $\mathbb{Z}_m \times \mathbb{Z}_n$, it follows that no element of $\mathbb{Z}_m \times \mathbb{Z}_n$ has order $mn$. Therefore, $\mathbb{Z}_m \times \mathbb{Z}_n$ can't be cyclic of order $mn$, since a generator would have order $mn$.

Remark. More generally, if $m_1$, ..., $m_n$ are pairwise relatively prime, then $\mathbb{Z}_{m_1} \times \cdots \times \mathbb{Z}_{m_n}$ is cyclic of order $m_1 m_2 \cdots m_n$.

Example. (Orders of elements in products) Find the order of $(2, 4, 4)$ in $\mathbb{Z}_4 \times \mathbb{Z}_6 \times \mathbb{Z}_6$.

2 has order 2 in $\mathbb{Z}_4$, 4 has order 3 in $\mathbb{Z}_6$, and 4 has order 3 in $\mathbb{Z}_6$. Hence, the order of $(2, 4, 4)$ is $[2, 3, 3] = 6$.

Example. (A product of cyclic groups which is not cyclic) Prove directly that $\mathbb{Z}_4 \times \mathbb{Z}_2$ is not cyclic of order 8.

If $(a, b) \in \mathbb{Z}_4 \times \mathbb{Z}_2$, then

$$4 \cdot (a, b) = (4a, 4b) = (0, 0).$$

Thus, every element of $\mathbb{Z}_4 \times \mathbb{Z}_2$ has order less than or equal to 4. In particular, there can be no elements of order 8, i.e. no cyclic generators.
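For those who like to check such computations by machine, here is a small sketch that computes the order of $(a, b)$ in $\mathbb{Z}_m \times \mathbb{Z}_n$, using the fact that the order of a in $\mathbb{Z}_m$ is $m/(a, m)$ together with the proposition above (the function name is made up for this illustration):

```cpp
#include <iostream>
#include <numeric>  // std::gcd, std::lcm (C++17)

// Order of (a, b) in Z_m x Z_n: the lcm of the component orders,
// where the order of a in Z_m is m / gcd(a, m).
long orderInProduct(long a, long m, long b, long n) {
    const long orderA = m / std::gcd(a, m);
    const long orderB = n / std::gcd(b, n);
    return std::lcm(orderA, orderB);
}

int main() {
    std::cout << orderInProduct(1, 2, 1, 3) << "\n";  // 6: Z_2 x Z_3 is cyclic
    std::cout << orderInProduct(1, 4, 1, 2) << "\n";  // 4: no element of Z_4 x Z_2 has order 8
}
```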
# A What’s the physical nature of the pilot wave?

1. Apr 14, 2017

### Maxwell's Demon

Within the context of the de Broglie-Bohm pilot-wave theory, can anyone explain what the pilot wave is in physical terms? I’m having a hard time understanding how, for example, the pilot wave influences the trajectory of a photon in the double-slit experiment. Are we dealing with electromagnetic potentials in a background field, which exert a force on the photon as it moves? Shouldn’t there be a reasonably straightforward way to alter the path of the photons by inducing another source of pilot waves, such as another photon source acting perpendicular to the path of the photons?

It seems like there should be some way to determine whether the pilot wave is real by performing some kind of experiment like this to directly alter the shape of the pilot wave, and settle the question once and for all. But I can’t seem to find a phenomenological explanation of the pilot wave which would provide a practical means of interacting with it. Does anyone have any insight on this?

2. Apr 14, 2017

### Demystifier

Think of the pilot wave $\psi(x,t)$ as something similar to the principal function $S(x,t)$ of the Hamilton-Jacobi formulation of classical mechanics. If you are unfamiliar with the classical Hamilton-Jacobi equation, try to learn (and ask questions) about that first.

3. Apr 14, 2017

### Maxwell's Demon

Thanks - you're right; I've got a lot of studying to do before I can understand this. I found a Wiki page that discusses the quantum potential, which seems to be the key term in the quantum Hamilton–Jacobi equation that determines the Bohmian trajectories of particles in the double-slit experiment (if I’m reading this right, the action S reduces to the classical limit as the quantum potential goes to 0?): https://en.wikipedia.org/wiki/Quantum_potential

That page describes the quantum potential in terms of “a self-organising process involving a basic underlying field” without discussing that underlying field explicitly, but I gather from other articles that this is a background quantum field in equilibrium. It also mentions the Aharonov-Bohm effect: “Also the shift of the interference pattern which occurs in presence of a magnetic field in the Aharonov–Bohm effect could be explained as arising from the quantum potential.” (That statement linked to this paper: https://arxiv.org/pdf/quant-ph/0308039.pdf)

So a magnetic vector potential influences the quantum potential for a charged particle, but I don’t see a method for influencing the quantum potential of a photon. And even if there’s a way to do that, it seems that it could just as readily be explained within the conventional interpretation of quantum theory, which isn’t helpful. Surely there has to be some way to determine whether the pilot wave is physical. At the very least shouldn’t there be some technological method for moving the interference pattern around with some kind of external field generator, sort of like a powerful magnet distorts the image on a CRT monitor… because we already know that the magnetic vector potential and the scalar electric potential influence the wave function, right?

4. Apr 14, 2017

### SlowThinker

5. Apr 14, 2017

### Denis

dBB theory does not have an answer to your question. In dBB theory, the wave function describes some really existing field, that's all. A physical description of the nature of this field will be the job of some more fundamental theory.
Think of the wave function as describing, in a general way, the influence of the whole environment (including all the classical parts) on a system. Whatever the external things which have influence, they define how the actual state changes given its actual configuration. The actual configuration is some $q\in Q$, and the result is some velocity $\dot{q}= F(q,X)$. So, quite universally, one can assign to that unknown $X$ some function $\psi(q)$ on the configuration space, which defines how $X$ influences the resulting velocity if the configuration is $q$. In the most general form, this would be a map $X \to F_X(q)$.

The dBB formula is a special case of this, with $F_X(q) = \nabla \Im \ln \psi_X(q)$, a formula which guarantees that part of the Schrödinger equation will be a continuity equation for the probability flow. But it is, nonetheless, close enough to this most general case to guess that it is not really the wave function $\psi(q)$ which is the part of reality that defines, via the guiding equation, the velocity, but that it is only a placeholder for some unknown entity $X$ which defines some effective $\psi(q)$ in some more fundamental theory.
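Written out for a single spinless particle of mass $m$, with the polar decomposition $\psi = R\,e^{iS/\hbar}$, one has $\nabla \Im \ln \psi = \nabla S/\hbar$, so this guiding formula (with the conventional factor $\hbar/m$ restored) is the standard dBB guiding equation

$$\dot{\mathbf{q}} \;=\; \frac{\hbar}{m}\,\Im\,\frac{\nabla \psi}{\psi} \;=\; \frac{\nabla S}{m},$$

i.e. exactly the Hamilton-Jacobi velocity field $\nabla S / m$ mentioned in post #2.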
# source:branches/2015/dev_r5836_NOC3_vvl_by_default/DOC/TexFiles/Chapters/Chap_Model_Basics.tex@6040

% ================================================================
% Chapter 1 - Model Basics
% ================================================================

\chapter{Model basics}
\label{PE}
\minitoc

\newpage
$\ $\newline    % force a new line

% ================================================================
% Primitive Equations
% ================================================================
\section{Primitive Equations}
\label{PE_PE}

% -------------------------------------------------------------------------------------------------------------
%        Vector Invariant Formulation
% -------------------------------------------------------------------------------------------------------------

\subsection{Vector Invariant Formulation}
\label{PE_Vector}

The ocean is a fluid that can be described to a good approximation by the primitive
equations, $i.e.$ the Navier-Stokes equations along with a nonlinear equation of
state which couples the two active tracers (temperature and salinity) to the fluid
velocity, plus the following additional assumptions made from scale considerations:

\textit{(1) spherical earth approximation:} the geopotential surfaces are assumed to
be spheres so that gravity (local vertical) is parallel to the earth's radius

\textit{(2) thin-shell approximation:} the ocean depth is neglected compared to the earth's radius

\textit{(3) turbulent closure hypothesis:} the turbulent fluxes (which represent the effect
of small scale processes on the large-scale) are expressed in terms of large-scale features

\textit{(4) Boussinesq hypothesis:} density variations are neglected except in their
contribution to the buoyancy force

\textit{(5) Hydrostatic hypothesis:} the vertical momentum equation is reduced to a
balance between the vertical pressure gradient and the buoyancy force (this removes
convective processes from the initial Navier-Stokes equations and so convective processes
must be parameterized instead)

\textit{(6) Incompressibility hypothesis:} the three dimensional divergence of the velocity
vector is assumed to be zero.

Because the gravitational force is so dominant in the equations of large-scale motions,
it is useful to choose an orthogonal set of unit vectors (\textbf{i},\textbf{j},\textbf{k}) linked
to the earth such that \textbf{k} is the local upward vector and (\textbf{i},\textbf{j}) are two
vectors orthogonal to \textbf{k}, $i.e.$ tangent to the geopotential surfaces. Let us define
the following variables: \textbf{U} the vector velocity, $\textbf{U}=\textbf{U}_h + w\, \textbf{k}$
(the subscript $h$ denotes the local horizontal vector, $i.e.$ over the (\textbf{i},\textbf{j}) plane),
$T$ the potential temperature, $S$ the salinity, $\rho$ the \textit{in situ} density.
The vector invariant form of the primitive equations in the (\textbf{i},\textbf{j},\textbf{k})
vector system provides the following six equations (namely the momentum balance, the
hydrostatic equilibrium, the incompressibility equation, the heat and salt conservation
equations and an equation of state):
\begin{subequations} \label{Eq_PE}
  \begin{equation}     \label{Eq_PE_dyn}
\frac{\partial {\rm {\bf U}}_h }{\partial t}=
-\left[    {\left( {\nabla \times {\rm {\bf U}}} \right)\times {\rm {\bf U}}
            +\frac{1}{2}\nabla \left( {{\rm {\bf U}}^2} \right)}    \right]_h
 -f\;{\rm {\bf k}}\times {\rm {\bf U}}_h
-\frac{1}{\rho _o }\nabla _h p + {\rm {\bf D}}^{\rm {\bf U}} + {\rm {\bf F}}^{\rm {\bf U}}
  \end{equation}
  \begin{equation}     \label{Eq_PE_hydrostatic}
\frac{\partial p }{\partial z} = - \rho \ g
  \end{equation}
  \begin{equation}     \label{Eq_PE_continuity}
\nabla \cdot {\bf U}=  0
  \end{equation}
  \begin{equation} \label{Eq_PE_tra_T}
\frac{\partial T}{\partial t} = - \nabla \cdot  \left( T \ \rm{\bf U} \right) + D^T + F^T
  \end{equation}
  \begin{equation}     \label{Eq_PE_tra_S}
\frac{\partial S}{\partial t} = - \nabla \cdot  \left( S \ \rm{\bf U} \right) + D^S + F^S
  \end{equation}
  \begin{equation}     \label{Eq_PE_eos}
\rho = \rho \left( T,S,p \right)
  \end{equation}
\end{subequations}
where $\nabla$ is the generalised derivative vector operator in $(\bf i,\bf j, \bf k)$ directions,
$t$ is the time, $z$ is the vertical coordinate, $\rho$ is the \textit{in situ} density given by
the equation of state (\ref{Eq_PE_eos}), $\rho_o$ is a reference density, $p$ the pressure,
$f=2 \bf \Omega \cdot \bf k$ is the Coriolis acceleration (where $\bf \Omega$ is the Earth's
angular velocity vector), and $g$ is the gravitational acceleration.
${\rm {\bf D}}^{\rm {\bf U}}$, $D^T$ and $D^S$ are the parameterisations of small-scale
physics for momentum, temperature and salinity, and ${\rm {\bf F}}^{\rm {\bf U}}$, $F^T$
and $F^S$ the surface forcing terms. Their nature and formulation are discussed in
\S\ref{PE_zdf_ldf} and \S\ref{PE_boundary_condition}.

% -------------------------------------------------------------------------------------------------------------
% Boundary condition
% -------------------------------------------------------------------------------------------------------------
\subsection{Boundary Conditions}
\label{PE_boundary_condition}

An ocean is bounded by complex coastlines, bottom topography at its base and an air-sea
or ice-sea interface at its top. These boundaries can be defined by two surfaces, $z=-H(i,j)$
and $z=\eta(i,j,t)$, where $H$ is the depth of the ocean bottom and $\eta$ is the height
of the sea surface. Both $H$ and $\eta$ are usually referenced to a given surface, $z=0$,
chosen as a mean sea surface (Fig.~\ref{Fig_ocean_bc}). Through these two boundaries,
the ocean can exchange fluxes of heat, fresh water, salt, and momentum with the solid earth,
the continental margins, the sea ice and the atmosphere. However, some of these fluxes are
so weak that even on climatic time scales of thousands of years they can be neglected.
In the following, we briefly review the fluxes exchanged at the interfaces between the ocean
and the other components of the earth system.
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!ht]   \begin{center}
\includegraphics[width=0.90\textwidth]{./TexFiles/Figures/Fig_I_ocean_bc.pdf}
\caption{    \label{Fig_ocean_bc}
The ocean is bounded by two surfaces, $z=-H(i,j)$ and $z=\eta(i,j,t)$, where $H$
is the depth of the sea floor and $\eta$ the height of the sea surface.
Both $H$ and $\eta$ are referenced to $z=0$.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

\begin{description}
\item[Land - ocean interface:] the major flux between continental margins and the ocean is
a mass exchange of fresh water through river runoff. Such an exchange modifies the sea
surface salinity especially in the vicinity of major river mouths. It can be neglected for short
range integrations but has to be taken into account for long term integrations as it influences
the characteristics of water masses formed (especially at high latitudes). It is required in order
to close the water cycle of the climate system. It is usually specified as a fresh water flux at
the air-sea interface in the vicinity of river mouths.
\item[Solid earth - ocean interface:] heat and salt fluxes through the sea floor are small,
except in special areas of little extent. They are usually neglected in the model
\footnote{In fact, it has been shown that the heat flux associated with the solid Earth cooling
($i.e.$ the geothermal heating) is not negligible for the thermohaline circulation of the world
ocean (see \ref{TRA_bbc}).}.
The boundary condition is thus set to no flux of heat and salt across solid boundaries.
For momentum, the situation is different. There is no flow across solid boundaries,
$i.e.$ the velocity normal to the ocean bottom and coastlines is zero (in other words,
the bottom velocity is parallel to solid boundaries). This kinematic boundary condition
can be expressed as:
\begin{equation} \label{Eq_PE_w_bbc}
w = -{\rm {\bf U}}_h \cdot  \nabla _h \left( H \right)
\end{equation}
In addition, the ocean exchanges momentum with the earth through frictional processes.
Such momentum transfer occurs at small scales in a boundary layer. It must be parameterized
in terms of turbulent fluxes using bottom and/or lateral boundary conditions. Its specification
depends on the nature of the physical parameterisation used for ${\rm {\bf D}}^{\rm {\bf U}}$
in \eqref{Eq_PE_dyn}. It is discussed in \S\ref{PE_zdf}, page~\pageref{PE_zdf}.% and Chap. III.6 to 9.
\item[Atmosphere - ocean interface:] the kinematic surface condition plus the mass flux
of fresh water P-E (the precipitation minus evaporation budget) leads to:
\begin{equation} \label{Eq_PE_w_sbc}
w = \frac{\partial \eta }{\partial t}
    + \left. {{\rm {\bf U}}_h } \right|_{z=\eta } \cdot  \nabla _h \left( \eta \right)
    + P-E
\end{equation}
The dynamic boundary condition, neglecting the surface tension (which removes capillary
waves from the system), leads to the continuity of pressure across the interface $z=\eta$.
The atmosphere and ocean also exchange horizontal momentum (wind stress), and heat.
\item[Sea ice - ocean interface:] the ocean and sea ice exchange heat, salt, fresh water
and momentum. The sea surface temperature is constrained to be at the freezing point
at the interface. Sea ice salinity is very low ($\sim4-6 \,psu$) compared to that of the
ocean ($\sim34 \,psu$).
The cycle of freezing/melting is associated with fresh water and
salt fluxes that cannot be neglected.
\end{description}

%\newpage
%$\ $\newline    % force a new line

% ================================================================
% Horizontal Pressure Gradient
% ================================================================
\section{Horizontal Pressure Gradient}
\label{PE_hor_pg}

% -------------------------------------------------------------------------------------------------------------
% Pressure Formulation
% -------------------------------------------------------------------------------------------------------------
\subsection{Pressure Formulation}
\label{PE_p_formulation}

The total pressure at a given depth $z$ is composed of a surface pressure $p_s$ at a
reference geopotential surface ($z=0$) and a hydrostatic pressure $p_h$ such that:
$p(i,j,k,t)=p_s(i,j,t)+p_h(i,j,k,t)$. The latter is computed by integrating (\ref{Eq_PE_hydrostatic}),
assuming that pressure in decibars can be approximated by depth in meters in (\ref{Eq_PE_eos}).
The hydrostatic pressure is then given by:
\begin{equation} \label{Eq_PE_pressure}
p_h \left( {i,j,z,t} \right)
 = \int_{\varsigma =z}^{\varsigma =0} {g\;\rho \left( {T,S,\varsigma} \right)\;d\varsigma }
\end{equation}
Two strategies can be considered for the surface pressure term: $(a)$ introduce a
new variable $\eta$, the free-surface elevation, for which a prognostic equation can be
established and solved; $(b)$ assume that the ocean surface is a rigid lid, on which the
pressure (or its horizontal gradient) can be diagnosed. When the former strategy is used,
one solution of the free-surface elevation consists of the excitation of external gravity waves.
The flow is barotropic and the surface moves up and down with gravity as the restoring force.
The phase speed of such waves is high (some hundreds of metres per second) so that
the time step would have to be very short if they were present in the model. The latter
strategy filters out these waves since the rigid lid approximation implies $\eta=0$, $i.e.$
the sea surface is the surface $z=0$. This well known approximation increases the surface
wave speed to infinity and modifies certain other longwave dynamics ($e.g.$ barotropic
Rossby or planetary waves). The rigid-lid hypothesis is an obsolescent feature in modern
OGCMs. It was available up to release 3.1 of \NEMO, and has been removed from
release 3.2 onwards. Only the free surface formulation is now described in
this document (see the next sub-section).

% -------------------------------------------------------------------------------------------------------------
% Free Surface Formulation
% -------------------------------------------------------------------------------------------------------------
\subsection{Free Surface Formulation}
\label{PE_free_surface}

In the free surface formulation, a variable $\eta$, the sea-surface height, is introduced
which describes the shape of the air-sea interface.
This variable is the solution of a
prognostic equation which is established by forming the vertical average of the kinematic
surface condition (\ref{Eq_PE_w_bbc}):
\begin{equation} \label{Eq_PE_ssh}
\frac{\partial \eta }{\partial t}=-D+P-E
\qquad \text{where} \qquad
D=\nabla \cdot \left[ {\left( {H+\eta } \right) \; {\rm{\bf \overline{U}}}_h \,} \right]
\end{equation}
and using (\ref{Eq_PE_hydrostatic}) the surface pressure is given by: $p_s = \rho \, g \, \eta$.

Allowing the air-sea interface to move introduces the external gravity waves (EGWs)
as a class of solution of the primitive equations. These waves are barotropic because
of the hydrostatic assumption, and their phase speed is quite high. Their time scale is
short with respect to the other processes described by the primitive equations.

Two choices can be made regarding the implementation of the free surface in the model,
depending on the physical processes of interest.

$\bullet$ If one is interested in EGWs, in particular the tides and their interaction
with the baroclinic structure of the ocean (internal waves), possibly in shallow seas,
then a non-linear free surface is the most appropriate. This means that no
approximation is made in (\ref{Eq_PE_ssh}) and that the variation of the ocean
volume is fully taken into account. Note that in order to study the fast time scales
associated with EGWs it is necessary to minimize time filtering effects (use an
explicit time scheme with a very small time step, or a split-explicit scheme with a
reasonably small time step; see \S\ref{DYN_spg_exp} or \S\ref{DYN_spg_ts}).

$\bullet$ If one is not interested in EGWs but rather sees them as high frequency
noise, it is possible to apply an explicit filter to slow down the fastest waves while
not altering the slow barotropic Rossby waves. If, in addition, an approximate conservation
of heat and salt contents is sufficient for the problem at hand, then it is
sufficient to solve a linearized version of (\ref{Eq_PE_ssh}), which still allows
freshwater fluxes applied at the ocean surface to be taken into account \citep{Roullet_Madec_JGR00}.
Nevertheless, with the linearization, exact conservation of heat and salt contents is lost.

The filtering of EGWs in models with a free surface is usually a matter of discretisation
of the temporal derivatives, using a split-explicit method \citep{Killworth_al_JPO91, Zhang_Endoh_JGR92},
an implicit scheme \citep{Dukowicz1994} or the addition of a filtering force in the momentum equation
\citep{Roullet_Madec_JGR00}. With the present release, \NEMO offers the choice between
an explicit free surface (see \S\ref{DYN_spg_exp}) and a split-explicit scheme strongly
inspired by the one proposed by \citet{Shchepetkin_McWilliams_OM05} (see \S\ref{DYN_spg_ts}).
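The magnitude of these EGW phase speeds follows from the shallow-water relation $c=\sqrt{g\,H}$: for a typical open-ocean depth of $H=4000\;{\rm m}$, one finds $c=\sqrt{9.81 \times 4000} \approx 200\ {\rm m\,s^{-1}}$, which illustrates why an explicit treatment of EGWs requires either a very small time step or a split-explicit scheme.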
%\newpage
%$\ $\newline    % force a new line

% ================================================================
% Curvilinear z-coordinate System
% ================================================================
\section{Curvilinear \textit{z-}coordinate System}
\label{PE_zco}

% -------------------------------------------------------------------------------------------------------------
% Tensorial Formalism
% -------------------------------------------------------------------------------------------------------------
\subsection{Tensorial Formalism}
\label{PE_tensorial}

In many ocean circulation problems, the flow field has regions of enhanced dynamics
($i.e.$ surface layers, western boundary currents, equatorial currents, or ocean fronts).
The representation of such dynamical processes can be improved by specifically increasing
the model resolution in these regions. As well, it may be convenient to use a lateral
boundary-following coordinate system to better represent coastal dynamics. Moreover,
the common geographical coordinate system has a singular point at the North Pole that
cannot be easily treated in a global model without filtering. A solution consists of introducing
an appropriate coordinate transformation that shifts the singular point onto land
\citep{Madec_Imbard_CD96, Murray_JCP96}. As a consequence, it is important to solve the primitive
equations in various curvilinear coordinate systems. An efficient way of introducing an
appropriate coordinate transform can be found when using a tensorial formalism.
This formalism is suited to any multidimensional curvilinear coordinate system. Ocean
modellers mainly use three-dimensional orthogonal grids on the sphere (spherical earth
approximation), with preservation of the local vertical. Here we give the simplified equations
for this particular case. The general case is detailed by \citet{Eiseman1980} in their survey
of the conservation laws of fluid dynamics.

Let (\textit{i},\textit{j},\textit{k}) be a set of orthogonal curvilinear coordinates on the sphere
associated with the positively oriented orthogonal set of unit vectors (\textbf{i},\textbf{j},\textbf{k})
linked to the earth such that \textbf{k} is the local upward vector and (\textbf{i},\textbf{j}) are
two vectors orthogonal to \textbf{k}, $i.e.$ along geopotential surfaces (Fig.\ref{Fig_referential}).
Let $(\lambda,\varphi,z)$ be the geographical coordinate system in which a position is defined
by the latitude $\varphi(i,j)$, the longitude $\lambda(i,j)$ and the distance from the centre of
the earth $a+z(k)$ where $a$ is the earth's radius and $z$ the altitude above a reference sea
level (Fig.\ref{Fig_referential}).
The local deformation of the curvilinear coordinate system is
given by $e_1$, $e_2$ and $e_3$, the three scale factors:
\begin{equation} \label{Eq_scale_factors}
\begin{aligned}
 e_1 &=\left( {a+z} \right)\;\left[ {\left( {\frac{\partial \lambda }{\partial i}\cos \varphi } \right)^2
 +\left( {\frac{\partial \varphi }{\partial i}} \right)^2} \right]^{1/2} \\
 e_2 &=\left( {a+z} \right)\;\left[ {\left( {\frac{\partial \lambda }{\partial j}\cos \varphi } \right)^2
 +\left( {\frac{\partial \varphi }{\partial j}} \right)^2} \right]^{1/2} \\
 e_3 &=\left( {\frac{\partial z}{\partial k}} \right) \\
\end{aligned}
\end{equation}

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!tb]   \begin{center}
\includegraphics[width=0.60\textwidth]{./TexFiles/Figures/Fig_I_earth_referential.pdf}
\caption{   \label{Fig_referential}
the geographical coordinate system $(\lambda,\varphi,z)$ and the curvilinear
coordinate system (\textbf{i},\textbf{j},\textbf{k}). }
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Since the ocean depth is far smaller than the earth's radius, $a+z$ can be replaced by
$a$ in (\ref{Eq_scale_factors}) (thin-shell approximation). The resulting horizontal scale
factors $e_1$, $e_2$ are independent of $k$ while the vertical scale factor is a single
function of $k$ as \textbf{k} is parallel to \textbf{z}. The scalar and vector operators that
appear in the primitive equations (Eqs. \eqref{Eq_PE_dyn} to \eqref{Eq_PE_eos}) can
be written in tensorial form, invariant under any orthogonal horizontal curvilinear coordinate
transformation:
\begin{subequations} \label{Eq_PE_discrete_operators}
\begin{equation} \label{Eq_PE_grad}
\nabla q=\frac{1}{e_1 }\frac{\partial q}{\partial i}\;{\rm {\bf i}}
+\frac{1}{e_2 }\frac{\partial q}{\partial j}\;{\rm {\bf j}}
+\frac{1}{e_3 }\frac{\partial q}{\partial k}\;{\rm {\bf k}}
\end{equation}
\begin{equation} \label{Eq_PE_div}
\nabla \cdot {\rm {\bf A}}
= \frac{1}{e_1 \; e_2} \left[
  \frac{\partial \left(e_2 \; a_1\right)}{\partial i }
+\frac{\partial \left(e_1 \; a_2\right)}{\partial j }       \right]
+ \frac{1}{e_3} \left[ \frac{\partial a_3}{\partial k }   \right]
\end{equation}
\begin{equation} \label{Eq_PE_curl}
   \begin{split}
\nabla \times \vect{A} =
    \left[ {\frac{1}{e_2 }\frac{\partial a_3}{\partial j}
            -\frac{1}{e_3 }\frac{\partial a_2 }{\partial k}} \right] \; \vect{i}
&+\left[ {\frac{1}{e_3 }\frac{\partial a_1 }{\partial k}
           -\frac{1}{e_1 }\frac{\partial a_3 }{\partial i}} \right] \; \vect{j}     \\
&+\frac{1}{e_1 e_2 } \left[ {\frac{\partial \left( {e_2 a_2 } \right)}{\partial i}
                                       -\frac{\partial \left( {e_1 a_1 } \right)}{\partial j}} \right] \; \vect{k}
   \end{split}
\end{equation}
\begin{equation} \label{Eq_PE_lap}
\Delta q = \nabla \cdot \left( \nabla q \right)
\end{equation}
\begin{equation} \label{Eq_PE_lap_vector}
\Delta {\rm {\bf A}} =
  \nabla \left( \nabla \cdot {\rm {\bf A}} \right)
- \nabla \times \left( \nabla \times {\rm {\bf A}} \right)
\end{equation}
\end{subequations}
where $q$ is a scalar quantity and ${\rm {\bf A}}=(a_1,a_2,a_3)$ a vector in the $(i,j,k)$ coordinate system.
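For illustration, in the simple case of geographical coordinates, $(i,j) \to (\lambda ,\varphi )$, one has
$\partial \lambda / \partial i = 1$, $\partial \varphi / \partial i = 0$, $\partial \lambda / \partial j = 0$
and $\partial \varphi / \partial j = 1$, so that \eqref{Eq_scale_factors} reduces, with the thin-shell
approximation, to the familiar spherical scale factors:
\begin{equation*}
e_1 = a \,\cos \varphi , \qquad e_2 = a , \qquad e_3 = \frac{\partial z}{\partial k}
\end{equation*}
This is the limit used at the end of \S\ref{PE_zco_Eq} to recover the standard metric correction
to the Coriolis parameter.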
% -------------------------------------------------------------------------------------------------------------
% Continuous Model Equations
% -------------------------------------------------------------------------------------------------------------
\subsection{Continuous Model Equations}
\label{PE_zco_Eq}

In order to express the Primitive Equations in tensorial formalism, it is necessary to compute
the horizontal component of the non-linear and viscous terms of the equation using
the operators given in \eqref{Eq_PE_discrete_operators}.
Let us set $\vect U=(u,v,w)={\vect{U}}_h +w\;\vect{k}$, the velocity in the $(i,j,k)$ coordinate
system, and define the relative vorticity $\zeta$ and the divergence of the horizontal velocity
field $\chi$ by:
\begin{equation} \label{Eq_PE_curl_Uh}
\zeta =\frac{1}{e_1 e_2 }\left[ {\frac{\partial \left( {e_2 \,v} \right)}{\partial i}
-\frac{\partial \left( {e_1 \,u} \right)}{\partial j}} \right]
\end{equation}
\begin{equation} \label{Eq_PE_div_Uh}
\chi =\frac{1}{e_1 e_2 }\left[ {\frac{\partial \left( {e_2 \,u} \right)}{\partial i}
+\frac{\partial \left( {e_1 \,v} \right)}{\partial j}} \right]
\end{equation}

Using the fact that the horizontal scale factors $e_1$ and $e_2$ are independent of $k$
and that $e_3$ is a function of the single variable $k$, the nonlinear term of
\eqref{Eq_PE_dyn} can be transformed as follows:
\begin{flalign*}
&\left[ {\left( { \nabla \times {\rm {\bf U}} } \right) \times {\rm {\bf U}}
+\frac{1}{2}   \nabla \left( {{\rm {\bf U}}^2} \right)}   \right]_h        &
\end{flalign*}
\begin{flalign*}
&\qquad = \left( {{\begin{array}{*{20}c}
 {\left[ { \frac{1}{e_3} \frac{\partial u}{\partial k}
          -\frac{1}{e_1} \frac{\partial w}{\partial i} } \right] w - \zeta \; v }  \\
 {\zeta \; u - \left[ { \frac{1}{e_2} \frac{\partial w}{\partial j}
          -\frac{1}{e_3} \frac{\partial v}{\partial k} } \right] \, w}  \\
\end{array} }} \right)
+\frac{1}{2} \left( {{\begin{array}{*{20}c}
 { \frac{1}{e_1}  \frac{\partial \left( u^2+v^2+w^2 \right)}{\partial i}}  \hfill  \\
 { \frac{1}{e_2}  \frac{\partial \left( u^2+v^2+w^2 \right)}{\partial j}}  \hfill  \\
\end{array} }} \right)       &
\end{flalign*}
\begin{flalign*}
&\qquad = \left( {{\begin{array}{*{20}c}
 {-\zeta \; v} \hfill \\
 { \zeta \; u} \hfill \\
\end{array} }} \right)
+\frac{1}{2}\left( {{\begin{array}{*{20}c}
 {\frac{1}{e_1 }\frac{\partial \left( {u^2+v^2} \right)}{\partial i}} \hfill  \\
 {\frac{1}{e_2 }\frac{\partial \left( {u^2+v^2} \right)}{\partial j}} \hfill  \\
\end{array} }} \right)
+\frac{1}{e_3 }\left( {{\begin{array}{*{20}c}
 { w \; \frac{\partial u}{\partial k}}    \\
 { w \; \frac{\partial v}{\partial k}}    \\
\end{array} }} \right)
-\left( {{\begin{array}{*{20}c}
 {\frac{w}{e_1}\frac{\partial w}{\partial i}
 -\frac{1}{2e_1}\frac{\partial w^2}{\partial i}} \hfill \\
 {\frac{w}{e_2}\frac{\partial w}{\partial j}
 -\frac{1}{2e_2}\frac{\partial w^2}{\partial j}} \hfill \\
\end{array} }} \right)        &
\end{flalign*}

The last term of the right hand side is obviously zero, and thus the nonlinear term of
\eqref{Eq_PE_dyn} is written in the $(i,j,k)$ coordinate system:
\begin{equation} \label{Eq_PE_vector_form}
\left[ {\left( {  \nabla \times {\rm {\bf U}} } \right) \times {\rm {\bf U}}
+\frac{1}{2}   \nabla \left( {{\rm {\bf U}}^2} \right)}   \right]_h
=\zeta \;{\rm {\bf k}}\times {\rm {\bf U}}_h
+\frac{1}{2}\nabla _h \left( {{\rm {\bf U}}_h^2 } \right)
+\frac{1}{e_3 }w\frac{\partial {\rm {\bf U}}_h }{\partial k}
\end{equation}
This is the so-called \textit{vector invariant form} of the momentum advection term.
For some purposes, it can be advantageous to write this term in the so-called flux form,
$i.e.$ to write it as the divergence of fluxes. For example, the first component of
\eqref{Eq_PE_vector_form} (the $i$-component) is transformed as follows:
\begin{flalign*}
&{ \begin{array}{*{20}l}
\left[ {\left( {\nabla \times \vect{U}} \right)\times \vect{U}
          +\frac{1}{2}\nabla \left( {\vect{U}}^2 \right)} \right]_i
     = - \zeta \;v
     + \frac{1}{2\;e_1 } \frac{\partial \left( {u^2+v^2} \right)}{\partial i}
     + \frac{1}{e_3}w \, \frac{\partial u}{\partial k}          \\
\\
\qquad =\frac{1}{e_1 \; e_2} \left( -v\frac{\partial \left( {e_2 \,v} \right)}{\partial i}
                     +v\frac{\partial \left( {e_1 \,u} \right)}{\partial j}    \right)
+\frac{1}{e_1 e_2 }\left(  e_2 \; u\frac{\partial u}{\partial i}
                     +e_2 \; v\frac{\partial v}{\partial i}              \right)
+\frac{1}{e_3}       \left(   w\;\frac{\partial u}{\partial k}       \right)   \\
\end{array} }        &
\end{flalign*}
\begin{flalign*}
&{ \begin{array}{*{20}l}
\qquad =\frac{1}{e_1\;e_2} \left\{
 -\left( v^2 \frac{\partial e_2 }{\partial i}
      +e_2 \,v \frac{\partial v }{\partial i}  \right)
+\left( \frac{\partial \left( {e_1 \,u\,v} \right)}{\partial j}
      -e_1 \,u \frac{\partial v }{\partial j}  \right)  \right. \\
\qquad \qquad \left.
+\left( \frac{\partial \left( {e_2 \,u\,u} \right)}{\partial i}
      -u \frac{\partial \left( {e_2 \,u} \right)}{\partial i}  \right)
+e_2 \,v \frac{\partial v }{\partial i}
\right\}
+\frac{1}{e_3} \left(
   \frac{\partial \left( {w\,u} \right)}{\partial k}
  -u \frac{\partial w}{\partial k}  \right) \\
\end{array} }     &
\end{flalign*}
\begin{flalign*}
&{ \begin{array}{*{20}l}
\qquad =\frac{1}{e_1\;e_2} \left(
   \frac{\partial \left( {e_2 \,u\,u} \right)}{\partial i}
  +\frac{\partial \left( {e_1 \,u\,v} \right)}{\partial j}  \right)
+\frac{1}{e_3 } \frac{\partial \left( {w\,u} \right)}{\partial k} \\
\qquad \qquad
+\frac{1}{e_1 e_2 } \left(
  -u \left( \frac{\partial \left( {e_1 \,v} \right)}{\partial j}
        -v\,\frac{\partial e_1 }{\partial j}  \right)
  -u \frac{\partial \left( {e_2 \,u} \right)}{\partial i}  \right)
 -\frac{1}{e_3 } \frac{\partial w}{\partial k}\,u
 +\frac{1}{e_1 e_2 }\left( -v^2 \frac{\partial e_2 }{\partial i} \right)
\end{array} }     &
\end{flalign*}
\begin{flalign*}
&{ \begin{array}{*{20}l}
\qquad = \nabla \cdot \left( {{\rm {\bf U}}\,u} \right)
-   \left( \nabla \cdot {\rm {\bf U}} \right) \ u
+\frac{1}{e_1 e_2 }\left(
      -v^2 \frac{\partial e_2 }{\partial i}
      +uv \, \frac{\partial e_1 }{\partial j}    \right) \\
\end{array} }     &
\end{flalign*}
As $\nabla \cdot {\rm {\bf U}}\;=0$ (incompressibility), we obtain:
\begin{flalign*}
&{ \begin{array}{*{20}l}
\qquad = \nabla \cdot \left( {{\rm {\bf U}}\,u} \right)
+\frac{1}{e_1 e_2 }   \left( v \; \frac{\partial e_2}{\partial i}
                         -u \; \frac{\partial e_1}{\partial j}    \right) \left( -v \right)
\end{array} }     &
\end{flalign*}
The flux form of the momentum advection term is therefore given by:
\begin{multline} \label{Eq_PE_flux_form}
      \left[
  \left( {\nabla \times {\rm {\bf U}}} \right) \times {\rm {\bf U}}
+\frac{1}{2}   \nabla \left( {{\rm {\bf U}}^2} \right)
      \right]_h
\\
= \nabla \cdot    \left( {{\begin{array}{*{20}c}   {\rm {\bf U}} \, u   \hfill \\
                                    {\rm {\bf U}} \, v   \hfill \\
                  \end{array} }}
            \right)
+\frac{1}{e_1 e_2 }     \left(
       v\frac{\partial e_2}{\partial i}
      -u\frac{\partial e_1}{\partial j}
                  \right) {\rm {\bf k}} \times {\rm {\bf U}}_h
\end{multline}

The flux form has two terms: the first is expressed as the divergence of momentum
fluxes (hence the name given to this formulation), and the second is due to
the curvilinear nature of the coordinate system used. The latter is called the \emph{metric}
term and can be viewed as a modification of the Coriolis parameter:
\begin{equation} \label{Eq_PE_cor+metric}
f \to f + \frac{1}{e_1\;e_2}  \left(  v \frac{\partial e_2}{\partial i}
                        -u \frac{\partial e_1}{\partial j}  \right)
\end{equation}

Note that in the case of geographical coordinates, $i.e.$ when $(i,j) \to (\lambda ,\varphi )$
and $(e_1 ,e_2) \to (a \,\cos \varphi ,a)$, we recover the commonly used modification of
the Coriolis parameter $f \to f+(u/a) \tan \varphi$.

$\ $\newline    % force a new line

To sum up, the curvilinear $z$-coordinate equations solved by the ocean model can be
written in the following tensorial formalism:

\vspace{+10pt}
$\bullet$ \textbf{Vector invariant form of the momentum equations}:

\begin{subequations} \label{Eq_PE_dyn_vect}
\begin{equation} \label{Eq_PE_dyn_vect_u} \begin{split}
\frac{\partial u}{\partial t}
= +   \left( {\zeta +f} \right)\,v
   -   \frac{1}{2\,e_1}           \frac{\partial}{\partial i} \left(  u^2+v^2   \right)
   -   \frac{1}{e_3    }  w     \frac{\partial u}{\partial k}      &      \\
   -   \frac{1}{e_1    }            \frac{\partial}{\partial i} \left( \frac{p_s+p_h }{\rho _o}    \right)
   &+   D_u^{\vect{U}}  +   F_u^{\vect{U}}      \\
\\
\frac{\partial v}{\partial t} =
       -   \left( {\zeta +f} \right)\,u
       -   \frac{1}{2\,e_2 }        \frac{\partial }{\partial j}\left(  u^2+v^2 \right)
       -   \frac{1}{e_3     }   w  \frac{\partial v}{\partial k}     &      \\
       -   \frac{1}{e_2     }        \frac{\partial }{\partial j}\left( \frac{p_s+p_h }{\rho _o}  \right)
    &+  D_v^{\vect{U}}  +   F_v^{\vect{U}}
\end{split} \end{equation}
\end{subequations}

\vspace{+10pt}
$\bullet$ \textbf{Flux form of the momentum equations}:
\begin{subequations} \label{Eq_PE_dyn_flux}
\begin{multline} \label{Eq_PE_dyn_flux_u}
\frac{\partial u}{\partial t}=
+   \left( { f + \frac{1}{e_1 \; e_2}
               \left(    v \frac{\partial e_2}{\partial i}
                  -u \frac{\partial e_1}{\partial j}  \right)}    \right) \, v    \\
- \frac{1}{e_1 \; e_2}  \left(
               \frac{\partial \left( {e_2 \,u\,u} \right)}{\partial i}
      +        \frac{\partial \left( {e_1 \,v\,u} \right)}{\partial j}  \right)
                 - \frac{1}{e_3 }\frac{\partial \left( {w\,u} \right)}{\partial k}    \\
-   \frac{1}{e_1 }\frac{\partial}{\partial i}\left( \frac{p_s+p_h }{\rho _o}   \right)
+   D_u^{\vect{U}} +   F_u^{\vect{U}}
\end{multline}
\begin{multline} \label{Eq_PE_dyn_flux_v}
\frac{\partial v}{\partial t}=
-   \left( { f + \frac{1}{e_1 \; e_2}
               \left(    v \frac{\partial e_2}{\partial i}
                  -u \frac{\partial e_1}{\partial j}  \right)}    \right) \, u   \\
- \frac{1}{e_1 \; e_2}   \left(
               \frac{\partial \left( {e_2 \,u\,v} \right)}{\partial i}
      +        \frac{\partial \left( {e_1 \,v\,v} \right)}{\partial j}  \right)
                 - \frac{1}{e_3 } \frac{\partial \left( {w\,v} \right)}{\partial k}    \\
-   \frac{1}{e_2 }\frac{\partial }{\partial j}\left( \frac{p_s+p_h }{\rho _o}    \right)
+  D_v^{\vect{U}} +  F_v^{\vect{U}}
\end{multline}
\end{subequations}
where $\zeta$, the relative vorticity, is given by \eqref{Eq_PE_curl_Uh} and $p_s$,
the surface pressure, is given by:
\begin{equation} \label{Eq_PE_spg}
p_s =  \rho \,g \,\eta
\end{equation}
where $\eta$ is the solution of \eqref{Eq_PE_ssh}.

The vertical velocity and the hydrostatic pressure are diagnosed from the following equations:
\begin{equation} \label{Eq_w_diag}
\frac{\partial w}{\partial k}=-\chi \;e_3
\end{equation}
\begin{equation} \label{Eq_hp_diag}
\frac{\partial p_h }{\partial k}=-\rho \;g\;e_3
\end{equation}
where the divergence of the horizontal velocity, $\chi$, is given by \eqref{Eq_PE_div_Uh}.

\vspace{+10pt}
$\bullet$ \textit{tracer equations}:
\begin{equation} \label{Eq_S}
\frac{\partial T}{\partial t} =
-\frac{1}{e_1 e_2 }\left[ {      \frac{\partial \left( {e_2 T\,u} \right)}{\partial i}
                  +\frac{\partial \left( {e_1 T\,v} \right)}{\partial j}} \right]
-\frac{1}{e_3 }\frac{\partial \left( {T\,w} \right)}{\partial k} + D^T + F^T
\end{equation}
\begin{equation} \label{Eq_T}
\frac{\partial S}{\partial t} =
-\frac{1}{e_1 e_2 }\left[    {\frac{\partial \left( {e_2 S\,u} \right)}{\partial i}
                  +\frac{\partial \left( {e_1 S\,v} \right)}{\partial j}} \right]
-\frac{1}{e_3 }\frac{\partial \left( {S\,w} \right)}{\partial k} + D^S + F^S
\end{equation}
\begin{equation} \label{Eq_rho}
\rho =\rho \left( {T,S,z(k)} \right)
\end{equation}

The expressions of ${\rm {\bf D}}^{\rm {\bf U}}$, $D^{S}$ and $D^{T}$ depend on the subgrid scale
parameterisation used. They will be defined in \S\ref{PE_zdf}. The nature and formulation of
${\rm {\bf F}}^{\rm {\bf U}}$, $F^T$ and $F^S$, the surface forcing terms, are discussed
in Chapter~\ref{SBC}.

\newpage
$\ $\newline    % force a new line
% ================================================================
% Curvilinear generalised vertical coordinate System
% ================================================================
\section{Curvilinear generalised vertical coordinate System}
\label{PE_gco}

The ocean domain presents a huge diversity of situations in the vertical. First, the ocean surface is a time-dependent (moving) surface. Second, the ocean floor depends on the geographical position, varying from more than 6,000 meters in abyssal trenches to zero at the coast. Last but not least, the ocean stratification exerts a strong barrier to vertical motions and mixing.
Therefore, in order to represent the ocean adequately, the first point calls for a space and time dependent vertical coordinate that follows the variation of the sea surface height, $e.g.$ a \textit{z*}-coordinate; the second point calls for a space dependent coordinate that fits the change of bottom topography, $e.g.$ a terrain-following or $\sigma$-coordinate; and for the third point, one will be tempted to use a space and time dependent coordinate that follows the isopycnal surfaces, $e.g.$ an isopycnic coordinate.

In order to satisfy two or more of these constraints, one can even be tempted to mix these coordinate systems, as in HYCOM (mixture of $z$-coordinate at the surface, isopycnic coordinate in the ocean interior and $\sigma$-coordinate at the ocean bottom) \citep{Chassignet_al_JPO03} or OPA (mixture of $z$-coordinate in the vicinity of the surface and steep topography areas and $\sigma$-coordinate elsewhere) \citep{Madec_al_JPO96}, among others.

In fact one is totally free to choose any space and time dependent vertical coordinate by introducing an arbitrary vertical coordinate:
\begin{equation} \label{Eq_s}
s=s(i,j,k,t)
\end{equation}
with the restriction that the above equation gives a single-valued monotonic relationship between $s$ and $k$ when $i$, $j$ and $t$ are held fixed. \eqref{Eq_s} is a transformation from the $(i,j,k,t)$ coordinate system with independent variables into the $(i,j,s,t)$ generalised coordinate system with $s$ depending on the other three variables through \eqref{Eq_s}.
This so-called \textit{generalised vertical coordinate} \citep{Kasahara_MWR74} is in fact an Arbitrary Lagrangian--Eulerian (ALE) coordinate. Indeed, choosing an expression for $s$ is an arbitrary choice that determines which part of the vertical velocity (defined from a fixed referential) will cross the levels (Eulerian part) and which part will be used to move them (Lagrangian part).
The coordinate is also sometimes referred to as an adaptive coordinate \citep{Hofmeister_al_OM09}, since the coordinate system is adapted in the course of the simulation. Its most often used implementation is via an ALE algorithm, in which a pure Lagrangian step is followed by regridding and remapping steps, the latter step implicitly embedding the vertical advection \citep{Hirt_al_JCP74, Chassignet_al_JPO03, White_al_JCP09}. Here we follow the \citet{Kasahara_MWR74} strategy: a regridding step (an update of the vertical coordinate) followed by an Eulerian step with an explicit computation of vertical advection relative to the moving $s$-surfaces.

%\gmcomment{

%A key point here is that the $s$-coordinate depends on $(i,j)$ ==> horizontal pressure gradient...

The generalised vertical coordinates used in ocean modelling are not orthogonal,
which contrasts with many other applications in mathematical physics.
Hence, it is useful to keep in mind the following properties that may seem
odd on initial encounter.

The horizontal velocity in ocean models measures motions in the horizontal plane,
perpendicular to the local gravitational field. That is, horizontal velocity is mathematically
the same regardless of the vertical coordinate, be it geopotential, isopycnal, pressure,
or terrain following. The key motivation for maintaining the same horizontal velocity
component is that the hydrostatic and geostrophic balances are dominant in the large-scale ocean.
Use of an alternative quasi-horizontal velocity, for example one oriented parallel
to the generalized surface, would lead to unacceptable numerical errors.
Correspondingly, the vertical direction is anti-parallel to the gravitational force in all
of the coordinate systems. We do not choose the alternative of a quasi-vertical
direction oriented normal to the surface of a constant generalized vertical coordinate.

It is the method used to measure transport across the generalized vertical coordinate
surfaces which differs between the vertical coordinate choices. That is, computation
of the dia-surface velocity component represents the fundamental distinction between
the various coordinates. In some models, such as geopotential, pressure, and
terrain following models, this transport is typically diagnosed from volume or mass conservation.
In other models, such as isopycnal layered models, this transport is prescribed based
on assumptions about the physical processes producing a flux across the layer interfaces.


In this section we first establish the PE in the generalised vertical $s$-coordinate,
then we discuss the particular cases available in \NEMO, namely $z$, \textit{z*}, $s$, and $\tilde z$.
%}

% -------------------------------------------------------------------------------------------------------------
% The s-coordinate Formulation
% -------------------------------------------------------------------------------------------------------------
\subsection{The \textit{s-}coordinate Formulation}

Starting from the set of equations established in \S\ref{PE_zco} for the special case $k=z$
and thus $e_3=1$, we introduce an arbitrary vertical coordinate $s=s(i,j,k,t)$, which includes
$z$-, \textit{z*}- and $\sigma-$coordinates as special cases ($s=z$, $s=\textit{z*}$, and
$s=\sigma=z/H$ or $=z/\left(H+\eta \right)$, resp.). A formal derivation of the transformed
equations is given in Appendix~\ref{Apdx_A}. Let us define the vertical scale factor by
$e_3=\partial_s z$  ($e_3$ is now a function of $(i,j,k,t)$ ), and the slopes in the
(\textbf{i},\textbf{j}) directions between $s-$ and $z-$surfaces by:
\begin{equation} \label{Eq_PE_sco_slope}
\sigma _1 =\frac{1}{e_1 }\;\left. {\frac{\partial z}{\partial i}} \right|_s
\quad \text{and} \quad
\sigma _2 =\frac{1}{e_2 }\;\left.
{\frac{\partial z}{\partial j}} \right|_s
\end{equation}
We also introduce  $\omega$, a dia-surface velocity component, defined as the velocity
relative to the moving $s$-surfaces and normal to them:
\begin{equation} \label{Eq_PE_sco_w}
\omega  = w - e_3 \, \frac{\partial s}{\partial t} - \sigma _1 \,u - \sigma _2 \,v
\end{equation}

The equations solved by the ocean model \eqref{Eq_PE} in $s-$coordinate can be written as follows (see Appendix~\ref{Apdx_A_momentum}):

 \vspace{0.5cm}
$\bullet$ Vector invariant form of the momentum equation:
\begin{multline} \label{Eq_PE_sco_u}
\frac{\partial  u   }{\partial t}=
   +   \left( {\zeta +f} \right)\,v
   -   \frac{1}{2\,e_1} \frac{\partial}{\partial i} \left(  u^2+v^2   \right)
   -   \frac{1}{e_3} \omega \frac{\partial u}{\partial k}       \\
   -   \frac{1}{e_1} \frac{\partial}{\partial i} \left( \frac{p_s + p_h}{\rho _o}    \right)
   +  g\frac{\rho }{\rho _o}\sigma _1
   +   D_u^{\vect{U}}  +   F_u^{\vect{U}} \quad
\end{multline}
\begin{multline} \label{Eq_PE_sco_v}
\frac{\partial v }{\partial t}=
   -   \left( {\zeta +f} \right)\,u
   -   \frac{1}{2\,e_2 }\frac{\partial }{\partial j}\left(  u^2+v^2  \right)
   -   \frac{1}{e_3 } \omega \frac{\partial v}{\partial k}         \\
   -   \frac{1}{e_2 }\frac{\partial }{\partial j}\left( \frac{p_s+p_h }{\rho _o}  \right)
    +  g\frac{\rho }{\rho _o }\sigma _2
   +  D_v^{\vect{U}}  +   F_v^{\vect{U}} \quad
\end{multline}

 \vspace{0.5cm}
$\bullet$ Flux form of the momentum equation:
\begin{multline} \label{Eq_PE_sco_u_flux}
\frac{1}{e_3} \frac{\partial \left(  e_3\,u \right) }{\partial t}=
   +   \left( { f + \frac{1}{e_1 \; e_2 }
               \left(    v \frac{\partial e_2}{\partial i}
                  -u \frac{\partial e_1}{\partial j}  \right)}    \right) \, v    \\
   - \frac{1}{e_1 \; e_2 \; e_3 }   \left(
               \frac{\partial \left( {e_2 \, e_3 \, u\,u} \right)}{\partial i}
      +        \frac{\partial \left( {e_1 \, e_3 \, v\,u} \right)}{\partial j}   \right)
   - \frac{1}{e_3 }\frac{\partial \left( { \omega\,u} \right)}{\partial k}    \\
   - \frac{1}{e_1} \frac{\partial}{\partial i} \left( \frac{p_s + p_h}{\rho _o}    \right)
   +  g\frac{\rho }{\rho _o}\sigma _1
   +   D_u^{\vect{U}}  +   F_u^{\vect{U}} \quad
\end{multline}
\begin{multline} \label{Eq_PE_sco_v_flux}
\frac{1}{e_3} \frac{\partial \left(  e_3\,v \right) }{\partial t}=
   -   \left( { f + \frac{1}{e_1 \; e_2}
               \left(    v \frac{\partial e_2}{\partial i}
                  -u \frac{\partial e_1}{\partial j}  \right)}    \right) \, u   \\
   - \frac{1}{e_1 \; e_2 \; e_3 }   \left(
               \frac{\partial \left( {e_2 \; e_3 \,u\,v} \right)}{\partial i}
      +        \frac{\partial \left( {e_1 \; e_3 \,v\,v} \right)}{\partial j}   \right)
                 - \frac{1}{e_3 } \frac{\partial \left( { \omega\,v} \right)}{\partial k}    \\
   -   \frac{1}{e_2 }\frac{\partial }{\partial j}\left( \frac{p_s+p_h }{\rho _o}  \right)
    +  g\frac{\rho }{\rho _o }\sigma _2
   +  D_v^{\vect{U}}  +   F_v^{\vect{U}} \quad
\end{multline}

where the relative vorticity, \textit{$\zeta$}, the surface pressure gradient, and the hydrostatic
pressure have the same expressions as in $z$-coordinates although they do not represent
exactly the same quantities.
$\omega$ is provided by the continuity equation
(see Appendix~\ref{Apdx_A}):
\begin{equation} \label{Eq_PE_sco_continuity}
\frac{\partial e_3}{\partial t} + e_3 \; \chi + \frac{\partial \omega }{\partial s} = 0
\qquad \text{with} \quad
\chi =\frac{1}{e_1 e_2 e_3 }\left[ {\frac{\partial \left( {e_2 e_3 \,u}
\right)}{\partial i}+\frac{\partial \left( {e_1 e_3 \,v} \right)}{\partial
j}} \right]
\end{equation}

 \vspace{0.5cm}
$\bullet$ Tracer equations:
\begin{multline} \label{Eq_PE_sco_t}
\frac{1}{e_3} \frac{\partial \left(  e_3\,T \right) }{\partial t}=
-\frac{1}{e_1 e_2 e_3 }\left[ {\frac{\partial \left( {e_2 e_3\,u\,T} \right)}{\partial i}
                                           +\frac{\partial \left( {e_1 e_3\,v\,T} \right)}{\partial j}} \right]   \\
-\frac{1}{e_3 }\frac{\partial \left( {T\,\omega } \right)}{\partial k}   + D^T + F^T   \qquad
\end{multline}

\begin{multline} \label{Eq_PE_sco_s}
\frac{1}{e_3} \frac{\partial \left(  e_3\,S \right) }{\partial t}=
-\frac{1}{e_1 e_2 e_3 }\left[ {\frac{\partial \left( {e_2 e_3\,u\,S} \right)}{\partial i}
                                           +\frac{\partial \left( {e_1 e_3\,v\,S} \right)}{\partial j}} \right]    \\
-\frac{1}{e_3 }\frac{\partial \left( {S\,\omega } \right)}{\partial k}     + D^S + F^S   \qquad
\end{multline}

The equation of state has the same expression as in $z$-coordinate, and similar expressions
are used for mixing and forcing terms.

\gmcomment{
\colorbox{yellow}{ to be updated $= = >$}
Add a few words on z and zps and s and underline the differences between all of them
\colorbox{yellow}{ $< = =$ end update}  }



% -------------------------------------------------------------------------------------------------------------
% Curvilinear z*-coordinate System
% -------------------------------------------------------------------------------------------------------------
\subsection{Curvilinear \textit{z*}--coordinate System}
\label{PE_zco_star}

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!b]    \begin{center}
\includegraphics[width=1.0\textwidth]{./TexFiles/Figures/Fig_z_zstar.pdf}
\caption{   \label{Fig_z_zstar}
(a) $z$-coordinate in the linear free-surface case;
(b) $z$-coordinate in the non-linear free surface case;
(c) re-scaled height coordinate (which has become popular as the \textit{z*}-coordinate).  }
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>


In the non-linear free surface case, the free surface equation is nonlinear, and the variations of volume are fully
taken into account. This coordinate system is presented in a report \citep{Levier2007}
available on the \NEMO web site.

%\gmcomment{
The \textit{z*} coordinate approach is an unapproximated, non-linear free surface implementation
which allows one to deal with large amplitude free-surface
variations relative to the vertical resolution \citep{Adcroft_Campin_OM04}. In
the  \textit{z*} formulation, the variation of the column thickness due to sea-surface
undulations is not concentrated in the surface level, as in the $z$-coordinate formulation,
but is equally distributed over the full water column. Thus vertical
levels naturally follow sea-surface variations, with a linear attenuation with
depth, as illustrated in Fig.~\ref{Fig_z_zstar}c. Note that with a flat bottom, such as in
Fig.~\ref{Fig_z_zstar}c, the bottom-following  $z$ coordinate and  \textit{z*} are equivalent.
The definition and modified oceanic equations for the rescaled vertical coordinate
 \textit{z*}, including the treatment of fresh-water flux at the surface, are
detailed in Adcroft and Campin (2004). The major points are summarized
here. The position (\textit{z*}) and vertical discretization ($\delta \textit{z*}$) are expressed as:
\begin{equation} \label{Eq_z-star}
H +  \textit{z*} = (H + z) / r \quad \text{and} \ \delta \textit{z*} = \delta z / r \quad \text{with} \ r = \frac{H+\eta} {H}
\end{equation}
Since the vertical displacement of the free surface is incorporated in the vertical
coordinate  \textit{z*}, the upper and lower boundaries are at fixed  \textit{z*} positions,
$\textit{z*} = 0$ and  $\textit{z*} = -H$ respectively. Also the divergence of the flow field
is no longer zero, as shown by the continuity equation:
\begin{equation*}
\frac{\partial r}{\partial t} + \nabla_{\textit{z*}} \cdot \left( r \; {\rm {\bf U}}_h \right)
      + \frac{\partial \left( r \; w\textit{*} \right)}{\partial \textit{z*}} = 0
\end{equation*}
%}


% from MOM4p1 documentation

To overcome problems with vanishing surface and/or bottom cells, we consider the
$z^\star$ coordinate
\begin{equation} \label{Eq_PE_zstar}
   z^\star = H \left( \frac{z-\eta}{H+\eta} \right)
\end{equation}

This coordinate is closely related to the "eta" coordinate used in many atmospheric
models (see Black (1994) for a review of eta coordinate atmospheric models). It
was originally used in ocean models by Stacey et al. (1995) for studies of tides
next to shelves, and it has been recently promoted by Adcroft and Campin (2004)
for global climate modelling.

The surfaces of constant $z^\star$ are quasi-horizontal. Indeed, the $z^\star$ coordinate reduces to $z$ when $\eta$ is zero. In general, when noting the large differences between
undulations of the bottom topography versus undulations in the surface height, it
is clear that surfaces of constant $z^\star$ are very similar to depth surfaces. These properties greatly reduce the difficulty of computing the horizontal pressure gradient relative to the terrain following sigma models discussed in \S\ref{PE_sco}.
Additionally, since $z^\star = z$ when $\eta = 0$, no flow is spontaneously generated in an
unforced ocean starting from rest, regardless of the bottom topography. This behaviour is in contrast to the case with $s$-models, where pressure gradient errors in
the presence of nontrivial topographic variations can generate nontrivial spontaneous flow from a resting state, depending on the sophistication of the pressure
gradient solver. The quasi-horizontal nature of the coordinate surfaces also facilitates the implementation of neutral physics parameterizations in $z^\star$ models using
the same techniques as in $z$-models (see Chapters 13-16 of \cite{Griffies_Bk04} for a
discussion of neutral physics in $z$-models, as well as Section \S\ref{LDF_slp}
in this document for the treatment in \NEMO).

The range over which $z^\star$ varies is time independent: $-H \leq z^\star \leq 0$. Hence, all
cells remain nonvanishing, so long as the surface height maintains $\eta > -H$. This
is a minor constraint relative to that encountered on the surface height when using
$s = z$ or $s = z - \eta$.

Because $z^\star$ has a time independent range, all grid cells have static increments
$\delta s$, and the sum of the vertical increments yields the time independent ocean
depth $\sum_k \delta s = H$.
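As a quick illustration of the rescaling (a minimal Python sketch, not part of the \NEMO code;
the values of $H$ and $\eta$ below are arbitrary), one can compute the $z^\star$ position of a
set of geopotential levels and check that the $z^\star$ range is $[-H,\,0]$ whatever $\eta$:
\begin{verbatim}
import numpy as np

H   = 4000.0                       # local ocean depth (m), illustrative
eta = 2.0                          # sea surface height (m), illustrative
z   = np.linspace(eta, -H, 11)     # geopotential positions, surface to bottom

zstar = H * (z - eta) / (H + eta)  # z* = H (z - eta) / (H + eta)

print(zstar[0], zstar[-1])         # -> 0.0 and -4000.0: the range is [-H, 0]
\end{verbatim}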
The $z^\star$ coordinate is therefore invisible to undulations of the
free surface, since it moves along with the free surface. This property means that
no spurious vertical transport is induced across surfaces of constant $z^\star$ by the
motion of external gravity waves. Such spurious transport can be a problem in
$z$-models, especially those with tidal forcing. Quite generally, the time independent
range of the $z^\star$ coordinate is a very convenient property that allows for a nearly
arbitrary vertical resolution even in the presence of large amplitude fluctuations of
the surface height, again so long as $\eta > -H$.

%end MOM doc %%%



\newpage
% -------------------------------------------------------------------------------------------------------------
% Terrain following  coordinate System
% -------------------------------------------------------------------------------------------------------------
\subsection{Curvilinear Terrain-following \textit{s}--coordinate}
\label{PE_sco}

% -------------------------------------------------------------------------------------------------------------
% Introduction
% -------------------------------------------------------------------------------------------------------------
\subsubsection{Introduction}

Several important aspects of the ocean circulation are influenced by bottom topography.
Of course, the most important is that bottom topography determines deep ocean sub-basins,
barriers, sills and channels that strongly constrain the path of water masses, but more subtle
effects exist. For example, the topographic $\beta$-effect is usually larger than the planetary
one along continental slopes. Topographic Rossby waves can be excited and can interact
with the mean current. In the $z-$coordinate system presented in the previous section
(\S\ref{PE_zco}), $z-$surfaces are geopotential surfaces. The bottom topography is
discretised by steps. This often leads to a misrepresentation of a gradually sloping bottom
and to large localized depth gradients associated with large localized vertical velocities.
The response to such a velocity field often leads to numerical dispersion effects.
One solution to strongly reduce this error is to use a partial step representation of bottom
topography. Another solution is to introduce a terrain-following coordinate system
(hereafter $s-$coordinate).

The $s$-coordinate avoids the discretisation error in the depth field since the layers of
computation are gradually adjusted with depth to the ocean bottom. Relatively small
topographic features as well as  gentle, large-scale slopes of the sea floor in the deep
ocean, which would be ignored in typical $z$-model applications with the largest grid
spacing at greatest depths, can easily be represented (with relatively low vertical resolution).
A terrain-following model (hereafter $s-$model) also facilitates the modelling of the
boundary layer flows over a large depth range, which in the framework of the $z$-model
would require high vertical resolution over the whole depth range. Moreover, with a
$s$-coordinate it is possible, at least in principle, to have the bottom and the sea surface
as the only boundaries of the domain (no more lateral boundary conditions to specify).
Nevertheless, a $s$-coordinate also has its drawbacks.
Perfectly adapted to a
homogeneous ocean, it has strong limitations as soon as stratification is introduced.
The two main problems come from the truncation error in the horizontal pressure
gradient and a possibly increased diapycnal diffusion. The horizontal pressure force
in $s$-coordinate consists of two terms (see Appendix~\ref{Apdx_A}),

\begin{equation} \label{Eq_PE_p_sco}
\left. {\nabla p} \right|_z =\left. {\nabla p} \right|_s -\frac{\partial
p}{\partial s}\left. {\nabla z} \right|_s
\end{equation}

The second term in \eqref{Eq_PE_p_sco} depends on the tilt of the coordinate surface
and introduces a truncation error that is not present in a $z$-model. In the special case
of a $\sigma-$coordinate (i.e. a depth-normalised coordinate system $\sigma = z/H$),
\citet{Haney1991} and \citet{Beckmann1993} have given estimates of the magnitude
of this truncation error. It depends on topographic slope, stratification, horizontal and
vertical resolution, the equation of state, and the finite difference scheme. This error
limits the possible topographic slopes that a model can handle at a given horizontal
and vertical resolution. This is a severe restriction for large-scale applications using
realistic bottom topography. The large-scale slopes require high horizontal resolution,
and the computational cost becomes prohibitive. This problem can be at least partially
overcome by mixing the $s$-coordinate and a step-like representation of bottom topography \citep{Gerdes1993a,Gerdes1993b,Madec_al_JPO96}. However, the definition of the model
domain vertical coordinate then becomes a non-trivial matter for a realistic bottom
topography: an envelope topography is defined in $s$-coordinate, on which a full or
partial step bottom topography is then applied in order to adjust the model depth to
the observed one (see \S\ref{DOM_zgr}).

For numerical reasons a minimum of diffusion is required along the coordinate surfaces
of any finite difference model. It causes spurious diapycnal mixing when coordinate
surfaces do not coincide with isoneutral surfaces. This is the case for a $z$-model as
well as for a $s$-model. However, density varies more strongly on $s-$surfaces than
on horizontal surfaces in regions of large topographic slopes, implying larger diapycnal
diffusion in a $s$-model than in a $z$-model. Whereas such a diapycnal diffusion in a
$z$-model tends to weaken horizontal density (pressure) gradients and thus the horizontal
circulation, it usually reinforces these gradients in a $s$-model, creating spurious circulation.
For example, imagine an isolated bump of topography in an ocean at rest with a horizontally
uniform stratification. Spurious diffusion along $s$-surfaces will induce a bump of isoneutral
surfaces over the topography, and thus will generate there a baroclinic eddy. In contrast,
the ocean will stay at rest in a $z$-model. As for the truncation error, the problem can be reduced by introducing the terrain-following coordinate below the strongly stratified portion of the water column
($i.e.$ the main thermocline) \citep{Madec_al_JPO96}. An alternate solution consists of rotating
the lateral diffusive tensor to geopotential or to isoneutral surfaces (see \S\ref{PE_ldf}).
Unfortunately, the slope of isoneutral surfaces relative to the $s$-surfaces can be very large,
strongly exceeding the stability limit of such an operator when it is discretized (see Chapter~\ref{LDF}).
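Coming back to the pressure gradient issue, here is a toy illustration of the two-term
structure of \eqref{Eq_PE_p_sco} (a minimal sketch with made-up numbers, not \NEMO code):
for an unstratified ocean with $p=-\rho_o\,g\,z$, the true horizontal pressure gradient
vanishes, yet along a sloping $\sigma$-surface the two terms are individually large and must
cancel, so that any discretization error in either term survives as a spurious force.
\begin{verbatim}
g, rho0 = 9.81, 1026.0
H1, H2, dx = 4000.0, 3000.0, 10.0e3  # depths at adjacent points, spacing (m)
sigma = -0.5                         # mid-depth sigma level

z1, z2 = sigma * H1, sigma * H2      # z positions of the same sigma level
p1, p2 = -rho0 * g * z1, -rho0 * g * z2

term1 = (p2 - p1) / dx               # grad p along the sigma surface
term2 = rho0 * g * (z2 - z1) / dx    # -(dp/dz) grad z|_sigma, dp/dz = -rho0 g
print(term1, term2, term1 + term2)   # ~ -503, +503, ~0 (Pa/m)
\end{verbatim}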
The $s-$coordinate introduced here \citep{Lott_al_OM90,Madec_al_JPO96} differs mainly in two
aspects from similar models: it allows a representation of bottom topography with mixed
full or partial step-like/terrain following topography; it also offers a completely general
transformation, $s=s(i,j,z)$, for the vertical coordinate.


\newpage
% -------------------------------------------------------------------------------------------------------------
% Curvilinear z-tilde coordinate System
% -------------------------------------------------------------------------------------------------------------
\subsection{Curvilinear $\tilde{z}$--coordinate}
\label{PE_zco_tilde}

The $\tilde{z}$-coordinate has been developed by \citet{Leclair_Madec_OM10s}.
It is available in \NEMO since version 3.4. Nevertheless, it is currently not robust enough
to be used in all possible configurations. Its use is therefore not recommended.

\newpage
% ================================================================
% Subgrid Scale Physics
% ================================================================
\section{Subgrid Scale Physics}
\label{PE_zdf_ldf}

The primitive equations describe the behaviour of a geophysical fluid at
space and time scales larger than a few kilometres in the horizontal, a few
meters in the vertical and a few minutes in time. They are usually solved at larger
scales: the specified grid spacing and time step of the numerical model. The
effects of smaller scale motions (coming from the advective terms in the
Navier-Stokes equations) must be represented entirely in terms of
large-scale patterns to close the equations. These effects appear in the
equations as the divergence of turbulent fluxes ($i.e.$ fluxes associated with
the mean correlation of small scale perturbations). Assuming a turbulent
closure hypothesis is equivalent to choosing a formulation for these fluxes.
This is usually called the subgrid scale physics. It must be emphasized that
this is the weakest part of the primitive equations, but also one of the
most important for long-term simulations, as small scale processes \textit{in fine}
balance the surface input of kinetic energy and heat.

The control exerted by gravity on the flow induces a strong anisotropy
between the lateral and vertical motions. Therefore subgrid-scale physics
\textbf{D}$^{\vect{U}}$, $D^{S}$ and $D^{T}$  in \eqref{Eq_PE_dyn},
\eqref{Eq_PE_tra_T} and \eqref{Eq_PE_tra_S} are divided into a lateral part
\textbf{D}$^{l \vect{U}}$, $D^{lS}$ and $D^{lT}$ and a vertical part
\textbf{D}$^{v \vect{U}}$, $D^{vS}$ and $D^{vT}$. The formulation of these terms
and their underlying physics are briefly discussed in the next two subsections.

% -------------------------------------------------------------------------------------------------------------
% Vertical Subgrid Scale Physics
% -------------------------------------------------------------------------------------------------------------
\subsection{Vertical Subgrid Scale Physics}
\label{PE_zdf}

The model resolution is always larger than the scale at which the major
sources of vertical turbulence occur (shear instability, internal wave
breaking...). Turbulent motions are thus never explicitly solved, even
partially, but always parameterized.
The vertical turbulent fluxes are
assumed to depend linearly on the gradients of large-scale quantities (for
example, the turbulent heat flux is given by $\overline{T'w'}=-A^{vT} \partial_z \overline T$,
where $A^{vT}$ is an eddy coefficient). This formulation is
analogous to that of molecular diffusion and dissipation. This is quite
clearly a necessary compromise: considering only the molecular viscosity
acting on large scales severely underestimates the role of turbulent
diffusion and dissipation, while an accurate consideration of the details of
turbulent motions is simply impractical. The resulting vertical momentum and
tracer diffusive operators are of second order:
\begin{equation} \label{Eq_PE_zdf}
   \begin{split}
{\vect{D}}^{v \vect{U}} &=\frac{\partial }{\partial z}\left( {A^{vm}\frac{\partial {\vect{U}}_h }{\partial z}} \right) \ , \\
D^{vT}                         &= \frac{\partial }{\partial z}\left( {A^{vT}\frac{\partial T}{\partial z}} \right) \ , \\
D^{vS}                         &= \frac{\partial }{\partial z}\left( {A^{vT}\frac{\partial S}{\partial z}} \right)
   \end{split}
\end{equation}
where $A^{vm}$ and $A^{vT}$ are the vertical eddy viscosity and diffusivity coefficients,
respectively. At the sea surface and at the bottom, turbulent fluxes of momentum, heat
and salt must be specified (see Chap.~\ref{SBC} and \ref{ZDF} and \S\ref{TRA_bbl}).
All the vertical physics is embedded in the specification of the eddy coefficients.
They can be assumed to be either constant, or a function of the local fluid properties
($e.g.$ Richardson number, Brunt-V\"{a}is\"{a}l\"{a} frequency...), or computed from a
turbulent closure model. The choices available in \NEMO are discussed in \S\ref{ZDF}.

% -------------------------------------------------------------------------------------------------------------
% Lateral Diffusive and Viscous Operators Formulation
% -------------------------------------------------------------------------------------------------------------
\subsection{Formulation of the Lateral Diffusive and Viscous Operators}
\label{PE_ldf}

Lateral turbulence can be roughly divided into a mesoscale turbulence
associated with eddies (which can be solved explicitly if the resolution is
sufficient, since their underlying physics are included in the primitive
equations), and a submesoscale turbulence which is never explicitly solved,
even partially, but always parameterized. The formulation of lateral eddy
fluxes depends on whether the mesoscale is below or above the grid-spacing
($i.e.$ whether the model is eddy-resolving or not).

In non-eddy-resolving configurations, the closure is similar to that used
for the vertical physics. The lateral turbulent fluxes are assumed to depend
linearly on the lateral gradients of large-scale quantities. The resulting
lateral diffusive and dissipative operators are of second order.
Observations show that lateral mixing induced by mesoscale turbulence tends
to be along isopycnal surfaces (or more precisely neutral surfaces \cite{McDougall1987})
rather than across them.
As the slope of neutral surfaces is small in the ocean, a common
approximation is to assume that the `lateral' direction is the horizontal,
$i.e.$ the lateral mixing is performed along geopotential surfaces. This leads
to a geopotential second order operator for lateral subgrid scale physics.
This assumption can be relaxed: the eddy-induced turbulent fluxes can be
better approached by assuming that they depend linearly on the gradients of
large-scale quantities computed along neutral surfaces. In such a case,
the diffusive operator is an isoneutral second order operator and it has
components in the three space directions. However, both horizontal and
isoneutral operators have no effect on mean ($i.e.$ large scale) potential
energy, whereas potential energy is a main source of turbulence (through
baroclinic instabilities). \citet{Gent1990} have proposed a
parameterisation of mesoscale eddy-induced turbulence which associates an
eddy-induced velocity to the isoneutral diffusion. Its mean effect is to
reduce the mean potential energy of the ocean. This leads to a formulation
of lateral subgrid-scale physics made up of an isoneutral second order
operator and an eddy induced advective part. In all these lateral diffusive
formulations, the specification of the lateral eddy coefficients remains the
problematic point, as there is no really satisfactory formulation of these
coefficients as a function of large-scale features.

In eddy-resolving configurations, a second order operator can be used, but
usually the more scale selective biharmonic operator is preferred, as the
grid-spacing is usually not small enough compared to the scale of the
eddies. The role devoted to the subgrid-scale physics is to dissipate the
energy that cascades toward the grid scale and thus to ensure the stability of
the model while not interfering with the resolved mesoscale activity. Another approach
is becoming more and more popular: instead of specifying explicitly a sub-grid scale
term in the momentum and tracer time evolution equations, one uses an advective
scheme which is diffusive enough to maintain the model stability. It must be emphasised
that in that case, all the sub-grid scale physics is included in the formulation of the
advection scheme.

All these parameterisations of subgrid scale physics have advantages and
drawbacks. They are not all available in \NEMO. In the $z$-coordinate
formulation, five options are offered for active tracers (temperature and
salinity): second order geopotential operator, second order isoneutral
operator, \citet{Gent1990} parameterisation, fourth order
geopotential operator, and various slightly diffusive advection schemes.
The same options are available for momentum, except the
\citet{Gent1990} parameterisation, which only involves tracers. In the
$s$-coordinate formulation, additional options are offered for tracers: a second
order operator acting along $s-$surfaces; and for momentum: a fourth order
operator acting along $s-$surfaces (see \S\ref{LDF}).

\subsubsection{Lateral second order tracer diffusive operator}

The lateral second order tracer diffusive operator is defined by (see Appendix~\ref{Apdx_B}):
\begin{equation} \label{Eq_PE_iso_tensor}
D^{lT}=\nabla {\rm {\bf .}}\left( {A^{lT}\;\Re \;\nabla T} \right) \qquad
\text{with}\quad \Re =\left( {{\begin{array}{*{20}c}
 1 \hfill & 0 \hfill & {-r_1 } \hfill \\
 0 \hfill & 1 \hfill & {-r_2 } \hfill \\
 {-r_1 } \hfill & {-r_2 } \hfill & {r_1 ^2+r_2 ^2} \hfill \\
\end{array} }} \right)
\end{equation}
where $r_1 \;\mbox{and}\;r_2$ are the slopes between the surface along
which the diffusive operator acts and the model level ($e.g.$ $z$- or
$s$-surfaces).
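For concreteness, writing $\nabla T = \left( \frac{1}{e_1}\frac{\partial T}{\partial i},\;
\frac{1}{e_2}\frac{\partial T}{\partial j},\; \frac{1}{e_3}\frac{\partial T}{\partial k} \right)$
and expanding the product $\Re\,\nabla T$ row by row gives the three components of the
rotated diffusive flux:
\begin{equation*}
\Re \,\nabla T=
\left( {\begin{array}{*{20}c}
 {\dfrac{1}{e_1}\dfrac{\partial T}{\partial i} - r_1 \,\dfrac{1}{e_3}\dfrac{\partial T}{\partial k}} \\[8pt]
 {\dfrac{1}{e_2}\dfrac{\partial T}{\partial j} - r_2 \,\dfrac{1}{e_3}\dfrac{\partial T}{\partial k}} \\[8pt]
 {-r_1 \,\dfrac{1}{e_1}\dfrac{\partial T}{\partial i} - r_2 \,\dfrac{1}{e_2}\dfrac{\partial T}{\partial j}
  + \left( r_1^2+r_2^2 \right) \dfrac{1}{e_3}\dfrac{\partial T}{\partial k}} \\
\end{array}} \right)
\end{equation*}
so that for $r_1=r_2=0$ the first two components reduce to the horizontal gradient and the
third vanishes, consistent with the iso-level limit discussed below.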
Note that the formulation \eqref{Eq_PE_iso_tensor} is exact for the
rotation between geopotential and $s$-surfaces, while it is only an approximation
for the rotation between isoneutral and $z$- or $s$-surfaces. Indeed, in the latter
case, two assumptions are made to simplify  \eqref{Eq_PE_iso_tensor} \citep{Cox1987}.
First, the horizontal contribution of the dianeutral mixing is neglected since the ratio
between iso- and dia-neutral diffusive coefficients is known to be several orders of
magnitude smaller than unity. Second, the two isoneutral directions of diffusion are
assumed to be independent since the slopes are generally less than $10^{-2}$ in the
ocean (see Appendix~\ref{Apdx_B}).

For \textit{iso-level} diffusion, $r_1$ and $r_2$ are zero: $\Re$ reduces to the identity
in the horizontal directions and no rotation is applied.

For \textit{geopotential} diffusion, $r_1$ and $r_2$ are the slopes between the
geopotential and computational surfaces: they are equal to $\sigma _1$ and $\sigma _2$,
respectively (see \eqref{Eq_PE_sco_slope} ).

For \textit{isoneutral} diffusion $r_1$ and $r_2$ are the slopes between the isoneutral
and computational surfaces. Therefore, they are different quantities,
but have similar expressions in $z$- and $s$-coordinates. In $z$-coordinates:
\begin{equation} \label{Eq_PE_iso_slopes}
r_1 =\frac{e_3 }{e_1 }  \left( {\frac{\partial \rho }{\partial i}} \right)
                  \left( {\frac{\partial \rho }{\partial k}} \right)^{-1} \ , \quad
r_2 =\frac{e_3 }{e_2 }  \left( {\frac{\partial \rho }{\partial j}} \right)
                  \left( {\frac{\partial \rho }{\partial k}} \right)^{-1},
\end{equation}
while in $s$-coordinates $\partial/\partial k$ is replaced by
$\partial/\partial s$.

\subsubsection{Eddy induced velocity}

 When the \textit{eddy induced velocity} parametrisation (eiv) \citep{Gent1990} is used,
an additional tracer advection is introduced in combination with the isoneutral diffusion of tracers:
\begin{equation} \label{Eq_PE_iso+eiv}
D^{lT}=\nabla \cdot \left( {A^{lT}\;\Re \;\nabla T} \right)
           +\nabla \cdot \left( {{\vect{U}}^\ast \,T} \right)
\end{equation}
where ${\vect{U}}^\ast =\left( {u^\ast ,v^\ast ,w^\ast } \right)$ is a non-divergent,
eddy-induced transport velocity. This velocity field is defined by:
\begin{equation} \label{Eq_PE_eiv}
   \begin{split}
 u^\ast  &= +\frac{1}{e_3       }\frac{\partial }{\partial k}\left[ {A^{eiv}\;\tilde{r}_1 } \right] \\
 v^\ast  &= +\frac{1}{e_3       }\frac{\partial }{\partial k}\left[ {A^{eiv}\;\tilde{r}_2 } \right] \\
 w^\ast &=  -\frac{1}{e_1 e_2 }\left[
                      \frac{\partial }{\partial i}\left( {A^{eiv}\;e_2\,\tilde{r}_1 } \right)
                    +\frac{\partial }{\partial j}\left( {A^{eiv}\;e_1\,\tilde{r}_2 } \right)      \right]
   \end{split}
\end{equation}
where $A^{eiv}$ is the eddy induced velocity coefficient (or equivalently the isoneutral
thickness diffusivity coefficient), and $\tilde{r}_1$ and $\tilde{r}_2$ are the slopes
between isoneutral and \emph{geopotential} surfaces.
Their values are
thus independent of the vertical coordinate, but their expression depends on the coordinate:
\begin{align} \label{Eq_PE_slopes_eiv}
\tilde{r}_n = \begin{cases}
   r_n                  &      \text{in $z$-coordinate}    \\
   r_n + \sigma_n &      \text{in \textit{z*} and $s$-coordinates}
                   \end{cases}
\end{align}

The normal component of the eddy induced velocity is zero at all the boundaries.
This can be achieved in a model by tapering either the eddy coefficient or the slopes
to zero in the vicinity of the boundaries. The latter strategy is used in \NEMO (cf. Chap.~\ref{LDF}).

\subsubsection{Lateral fourth order tracer diffusive operator}

The lateral fourth order tracer diffusive operator is defined by:
\begin{equation} \label{Eq_PE_bilapT}
D^{lT}=\Delta \left( \;\Delta T \right)
\qquad \text{where} \;\; \Delta \bullet = \nabla \cdot \left( {\sqrt{B^{lT}\,}\;\Re \;\nabla \bullet} \right)
 \end{equation}
It is the second order operator given by \eqref{Eq_PE_iso_tensor} applied twice with
the harmonic eddy diffusion coefficient set to the square root of the biharmonic one.


\subsubsection{Lateral second order momentum diffusive operator}

The second order momentum diffusive operator along $z$- or $s$-surfaces is found by
applying \eqref{Eq_PE_lap_vector} to the horizontal velocity vector (see Appendix~\ref{Apdx_B}):
\begin{equation} \label{Eq_PE_lapU}
\begin{split}
{\rm {\bf D}}^{l{\rm {\bf U}}}
&= \quad \  \nabla _h \left( {A^{lm}\chi } \right)
   \ - \ \nabla _h \times \left( {A^{lm}\,\zeta \;{\rm {\bf k}}} \right)     \\
&=   \left(      \begin{aligned}
             \frac{1}{e_1      } \frac{\partial \left( A^{lm} \chi          \right)}{\partial i}
         &-\frac{1}{e_2 e_3}\frac{\partial \left( {A^{lm} \;e_3 \zeta} \right)}{\partial j}  \\
             \frac{1}{e_2      }\frac{\partial \left( {A^{lm} \chi         } \right)}{\partial j}
         &+\frac{1}{e_1 e_3}\frac{\partial \left( {A^{lm} \;e_3 \zeta} \right)}{\partial i}
        \end{aligned}    \right)
\end{split}
\end{equation}

Such a formulation ensures a complete separation between the vorticity and
horizontal divergence fields (see Appendix~\ref{Apdx_C}).
Unfortunately, it is only available in the \textit{iso-level} direction.
When a rotation is required ($i.e.$ geopotential diffusion in $s-$coordinates
or isoneutral diffusion in both $z$- and $s$-coordinates), the $u$ and $v-$fields
are considered as independent scalar fields, so that the diffusive operator is given by:
\begin{equation} \label{Eq_PE_lapU_iso}
\begin{split}
 D_u^{l{\rm {\bf U}}} &= \nabla .\left( {A^{lm} \;\Re \;\nabla u} \right) \\
 D_v^{l{\rm {\bf U}}} &= \nabla .\left( {A^{lm} \;\Re \;\nabla v} \right)
 \end{split}
 \end{equation}
where $\Re$ is given by  \eqref{Eq_PE_iso_tensor}. It is the same expression as
that used for the diffusive operator on tracers. It must be emphasised that such a
formulation is only exact in a Cartesian coordinate system, $i.e.$ on an $f-$ or
$\beta-$plane, not on the sphere. It is also a very good approximation in the vicinity
of the Equator in a geographical coordinate system \citep{Lengaigne_al_JGR03}.
\subsubsection{Lateral fourth order momentum diffusive operator}

As for tracers, the fourth order momentum diffusive operator along $z$- or $s$-surfaces
is the second order operator \eqref{Eq_PE_lapU} or \eqref{Eq_PE_lapU_iso} applied twice,
with the harmonic eddy diffusion coefficient set to the square root of the biharmonic one.
# Atmospheric Refraction

Fundamentals

- Title: Atmospheric Refraction
- Author(s): J. Sanz Subirana, J.M. Juan Zornoza and M. Hernández-Pajares, Technical University of Catalonia, Spain.
- Level: Basic
- Year of Publication: 2011

Electromagnetic signals experience changes in velocity (speed and direction) when passing through the atmosphere due to refraction. According to Fermat's principle, the measured range $l$ is given by the integral of the refractive index $n$ along the ray path from the satellite to the receiver:

$l= \int_{_{\mbox{ray path}}}{n\,dl}\qquad\mbox{(1)}$

Hence, the signal delay can be written as:

$\Delta= \int_{_{\mbox{ray path}}}{n\,dl}-\int_{_{\mbox{straight line}}}{dl}\qquad\mbox{(2)}$

where the second integral is the Euclidean distance between the satellite and the receiver. Notice that this definition includes both the signal bending and the propagation delay.

A simplification of the previous expression is to approximate the first integral along the straight line between the satellite and the receiver:

$\Delta= \int_{_{\mbox{straight line}}}{(n-1)\,dl}\qquad\mbox{(3)}$

From the point of view of signal delay, the atmosphere can be divided into two main components: the neutral atmosphere (i.e., the non-ionised part), which is a non-dispersive medium, and the ionosphere, where the delay experienced by the signals depends on their frequency. It must be pointed out that the neutral atmosphere includes the troposphere and the stratosphere, but the dominant component is the troposphere; hence, this delay is usually referred to as the tropospheric delay.
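As a rough numerical illustration of equation (3) (not part of the original article: the exponential refractivity profile and the constants below are assumed, illustrative values), the tropospheric zenith delay can be obtained by integrating $(n-1)$ along a vertical path:

import numpy as np

# Assumed exponential refractivity profile N(h) = N0 * exp(-h / H0),
# with n - 1 = 1e-6 * N (N in "N-units"); N0 and H0 are illustrative only.
N0 = 320.0     # surface refractivity (N-units)
H0 = 7000.0    # scale height (m)

h = np.linspace(0.0, 60e3, 6001)         # heights from ground to 60 km (m)
n_minus_1 = 1e-6 * N0 * np.exp(-h / H0)

zenith_delay = np.trapz(n_minus_1, h)    # equation (3) along a vertical ray
print(f"zenith tropospheric delay ~ {zenith_delay:.2f} m")   # about 2.2 m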
## Indifference Curves

### 6. Indifference Curves with Labor-Leisure and Intertemporal Choices

The concept of an indifference curve applies to tradeoffs in any household choice, including the labor-leisure choice or the intertemporal choice between present and future consumption. In the labor-leisure choice, each indifference curve shows the combinations of leisure and income that provide a certain level of utility. In an intertemporal choice, each indifference curve shows the combinations of present and future consumption that provide a certain level of utility. The general shapes of the indifference curves - downward sloping, steeper on the left and flatter on the right - also remain the same.

Petunia is working at a job that pays $12 per hour but she gets a raise to $20 per hour. After family responsibilities and sleep, she has 80 hours per week available for work or leisure. As shown in Figure B5, the highest level of utility for Petunia, on her original budget constraint, is at choice A, where it is tangent to the lower indifference curve (Ul). Point A has 30 hours of leisure and thus 50 hours per week of work, with income of $600 per week (that is, 50 hours of work at $12 per hour). Petunia then gets a raise to $20 per hour, which shifts her budget constraint to the right. Her new utility-maximizing choice occurs where the new budget constraint is tangent to the higher indifference curve Uh. At B, Petunia has 40 hours of leisure per week and works 40 hours, with income of $800 per week (that is, 40 hours of work at $20 per hour).

Figure B5: Effects of a Change in Petunia's Wage. Petunia starts at choice A, the tangency between her original budget constraint and the lower indifference curve Ul. The wage increase shifts her budget constraint to the right, so that she can now choose B on indifference curve Uh. The substitution effect is the movement from A to C. In this case, the substitution effect would lead Petunia to choose less leisure, which is relatively more expensive, and more income, which is relatively cheaper to earn. The income effect is the movement from C to B. The income effect in this example leads to greater consumption of both goods. Overall, in this example, income rises because of both substitution and income effects. However, leisure declines because of the substitution effect but increases because of the income effect - leading, in Petunia's case, to an overall increase in the quantity of leisure consumed.

Substitution and income effects provide a vocabulary for discussing how Petunia reacts to a higher hourly wage. The dashed line serves as the tool for separating the two effects on the graph. The substitution effect tells how Petunia would have changed her hours of work if her wage had risen, so that income was relatively cheaper to earn and leisure was relatively more expensive, but if she had remained at the same level of utility. The slope of the budget constraint in a labor-leisure diagram is determined by the wage rate. Thus, the dashed line is carefully inserted with the slope of the new opportunity set, reflecting the labor-leisure tradeoff of the new wage rate, but tangent to the original indifference curve, showing the same level of utility or "buying power". The shift from the original choice A to point C, the point of tangency between the original indifference curve and the dashed line, shows that because of the higher wage, Petunia will want to consume less leisure and more income.
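A quick arithmetic check of the two budget points, as a short Python sketch (illustrative only; the hours and wages are the ones given above):

hours_available = 80

for wage, leisure in [(12, 30), (20, 40)]:   # choices A and B
    work = hours_available - leisure
    income = work * wage
    print(f"wage ${wage}/hr: {leisure}h leisure, {work}h work -> ${income}/week")

# wage $12/hr: 30h leisure, 50h work -> $600/week   (choice A)
# wage $20/hr: 40h leisure, 40h work -> $800/week   (choice B)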
The "s" arrows on the horizontal and vertical axes of Figure B5 show the substitution effect on leisure and on income. The income effect is that the higher wage, by shifting the labor-leisure budget constraint to the right, makes it possible for Petunia to reach a higher level of utility. The income effect is the movement from point C to point B; that is, it shows how Petunia's behavior would change in response to a higher level of utility or "buying power," with the wage rate remaining the same (as shown by the dashed line being parallel to the new budget constraint). The income effect, encouraging Petunia to consume both more leisure and more income, is drawn with arrows on the horizontal and vertical axis of Figure B5. Putting these effects together, Petunia responds to the higher wage by moving from choice A to choice B. This movement involves choosing more income, both because the substitution effect of higher wages has made income relatively cheaper or easier to earn, and because the income effect of higher wages has made it possible to have more income and more leisure. Her movement from A to B also involves choosing more leisure because, according to Petunia's preferences, the income effect that encourages choosing more leisure is stronger than the substitution effect that encourages choosing less leisure. Figure B5 represents only Petunia's preferences. Other people might make other choices. For example, a person whose substitution and income effects on leisure exactly counterbalanced each other might react to a higher wage with a choice like D, exactly above the original choice A, which means taking all of the benefit of the higher wages in the form of income while working the same number of hours. Yet another person, whose substitution effect on leisure outweighed the income effect, might react to a higher wage by making a choice like F, where the response to higher wages is to work more hours and earn much more income. To represent these different preferences, you could easily draw the indifference curve Uh to be tangent to the new budget constraint at D or F, rather than at B. ##### An Intertemporal Choice Example Quentin has saved up$10,000. He is thinking about spending some or all of it on a vacation in the present, and then will save the rest for another big vacation five years from now. Over those five years, he expects to earn a total 80% rate of return. Figure B6 shows Quentin's budget constraint and his indifference curves between present consumption and future consumption. The highest level of utility that Quentin can achieve at his original intertemporal budget constraint occurs at point A, where he is consuming $6,000, saving$4,000 for the future, and expecting with the accumulated interest to have $7,200 for future consumption (that is,$4,000 in current financial savings plus the 80% rate of return). However, Quentin has just realized that his expected rate of return was unrealistically high. A more realistic expectation is that over five years he can earn a total return of 30%. In effect, his intertemporal budget constraint has pivoted to the left, so that his original utility-maximizing choice is no longer available. Will Quentin react to the lower rate of return by saving more, or less, or the same amount? Again, the language of substitution and income effects provides a framework for thinking about the motivations behind various choices. 
The dashed line, which is a graphical tool to separate the substitution and income effect, is carefully inserted with the same slope as the new opportunity set, so that it reflects the changed rate of return, but it is tangent to the original indifference curve, so that it shows no change in utility or "buying power".

The substitution effect tells how Quentin would have altered his consumption because the lower rate of return makes future consumption relatively more expensive and present consumption relatively cheaper. The movement from the original choice A to point C shows how Quentin substitutes toward more present consumption and less future consumption in response to the lower interest rate, with no change in utility. The substitution arrows on the horizontal and vertical axes of Figure B6 show the direction of the substitution effect motivation. The substitution effect suggests that, because of the lower interest rate, Quentin should consume more in the present and less in the future.

Quentin also has an income effect motivation. The lower rate of return shifts the budget constraint to the left, which means that Quentin's utility or "buying power" is reduced. The income effect (assuming normal goods) encourages less of both present and future consumption. The impact of the income effect on reducing present and future consumption in this example is shown with "i" arrows on the horizontal and vertical axes of Figure B6.

Figure B6: Indifference Curve and an Intertemporal Budget Constraint. The original choice is A, at the tangency between the original budget constraint and the original indifference curve Uh. The dashed line is drawn parallel to the new budget set, so that its slope reflects the lower rate of return, but is tangent to the original indifference curve. The movement from A to C is the substitution effect: in this case, future consumption has become relatively more expensive, and present consumption has become relatively cheaper. The income effect is the shift from C to B; that is, the reduction in utility or "buying power" that causes a move to a lower indifference curve Ul, but with the relative price the same. It means less present and less future consumption. In the move from A to B, the substitution effect on present consumption is greater than the income effect, so the overall result is more present consumption. Notice that the lower indifference curve could have been drawn tangent to the lower budget constraint at point D or point F, depending on personal preferences.

Taking both effects together, the substitution effect is encouraging Quentin toward more present and less future consumption, because present consumption is relatively cheaper, while the income effect is encouraging him to less present and less future consumption, because the lower interest rate is pushing him to a lower level of utility. For Quentin's personal preferences, the substitution effect is stronger so that, overall, he reacts to the lower rate of return with more present consumption and less savings at choice B. However, other people might have different preferences. They might react to a lower rate of return by choosing the same level of present consumption and savings at choice D, or by choosing less present consumption and more savings at a point like F. For these other sets of preferences, the income effect of a lower rate of return on present consumption would be relatively stronger, while the substitution effect would be relatively weaker.
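A small arithmetic check of Quentin's numbers, as a Python sketch (only the $6,000 of present consumption at point A is taken from the example; everything else follows from it):

wealth = 10_000.0
present_at_A = 6_000.0   # present consumption at the original choice A

for R, label in [(0.80, "original 80% return"), (0.30, "revised 30% return")]:
    savings = wealth - present_at_A
    future = savings * (1 + R)
    print(f"{label}: save {savings:.0f} -> future consumption {future:.0f}")

# original 80% return: save 4000 -> future consumption 7200  (matches point A)
# revised 30% return:  save 4000 -> future consumption 5200  (same savings buys less)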
# Generalization of the curvature endomorphism

Dear colleague,

I'm wondering what the generalization of the curvature endomorphism for vector fields, $w \to R(u,v)w$, looks like for tensor fields of higher rank, e.g. $W \to T(U_1,U_2,...)W$ for some tensor $T$. For a single vector field $w$ this map is a linear transformation of the tangent bundle. What does it become for tensor fields of higher rank?

-

I am not sure that I understand the question. It would help if you could elaborate. At any given point $p$ in the manifold $M$, the curvature defines a linear map $\Lambda^2 T_pM \to \mathrm{End}(T_pM)$. Is your question about other linear maps $\mathcal{T}(T_pM) \to \mathrm{End}(T_pM)$, where $\mathcal{T}$ is some other space of tensors? Which sort of maps do you have in mind? – José Figueroa-O'Farrill Apr 22 2010 at 19:25

Roughly, there is an $R(u,v)$ term for every $TM$ factor of a homogeneous section of $\bigotimes TM$; the naturality of this comes from choosing the connections $\nabla$ on $\bigotimes TM$ to satisfy a Leibniz rule w.r.t. $\otimes$: $\nabla_X (W\otimes V) = (\nabla_X W)\otimes V + W\otimes(\nabla_X V)$. It is easy to check that the terms in $[\nabla_X,\nabla_Y]$ with $\nabla_X$ and $\nabla_Y$ on distinct factors will cancel. A similar Leibniz condition describes how to deal with dual factors. – some guy on the street Apr 22 2010 at 20:01

I'm not sure that this is what the OP means, though. You're describing the action of the curvature operator on tensor fields, whereas the OP explicitly talks about an endomorphism of $TM$ of the form $T(U_1,U_2,\dots)$. – José Figueroa-O'Farrill Apr 22 2010 at 21:20

A map of the form $\Lambda^n T_p M\to \mathrm{End}(\otimes_1^{n-2} TM)$ which acts as $W \to T(U_1,\dots,U_n)W$. In this case $W$ should be a tensor of rank $n-1$, I suppose (so for the Riemann tensor it becomes a vector field). – Peter Apr 22 2010 at 21:53

The action of $R$ via the Leibniz rule w.r.t. $\otimes$ seems to be one of the possibilities, but it involves only two vector fields $X$ and $Y$, as noted above. – Peter Apr 22 2010 at 22:00
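To make the Leibniz-rule prescription from the comments concrete (a standard formula, spelled out here for illustration): for a decomposable rank-2 tensor field $W = u \otimes v$,

$$R(X,Y)(u \otimes v) = \big(R(X,Y)u\big) \otimes v + u \otimes \big(R(X,Y)v\big),$$

and, by duality, for a covariant 2-tensor $T$,

$$\big(R(X,Y)\,T\big)(u,v) = -\,T\big(R(X,Y)u,\,v\big) - T\big(u,\,R(X,Y)v\big).$$

Extending $\otimes$-linearly gives the curvature action on any mixed tensor bundle: one $R(X,Y)$ term for each contravariant factor, and one with a minus sign for each covariant factor.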
# coend in a derivator (nForum discussion)

1. Mike Shulman (Nov 9th 2011): Created coend in a derivator, with a stub at homotopy coend.

2. Urs (Nov 9th 2011, edited): At homotopy coend I have added a pointer to the section at Quillen bifunctor on Quillen bifunctor properties of coends over tensors here.

3. Urs (Nov 9th 2011, edited): Re "Created coend in a derivator": Interesting. I keep saying to Moritz (who is here) that we should go through lists of applications of derivators such as this, in order to fill the theory with life. But for the time being we seem to be both too busy and travelling too much to get around to it.

4. Mike Shulman (Mar 6th 2012): I added to coend in a derivator a fourth construction using the twisted arrow category, which I learned from Moritz.
# Secondary Axis

Sometimes we want a secondary axis on a plot, for instance to convert radians to degrees on the same plot. We can do this by making a child axes with only one axis visible via matplotlib.axes.Axes.secondary_xaxis and matplotlib.axes.Axes.secondary_yaxis. This secondary axis can have a different scale than the main axis by providing both a forward and an inverse conversion function in a tuple to the functions kwarg:

import matplotlib.pyplot as plt
import numpy as np
import datetime
import matplotlib.dates as mdates
from matplotlib.transforms import Transform
from matplotlib.ticker import (
    AutoLocator, AutoMinorLocator)

fig, ax = plt.subplots(constrained_layout=True)
x = np.arange(0, 360, 1)
y = np.sin(2 * x * np.pi / 180)
ax.plot(x, y)
ax.set_xlabel('angle [degrees]')
ax.set_ylabel('signal')
ax.set_title('Sine wave')


def deg2rad(x):
    return x * np.pi / 180


def rad2deg(x):
    return x * 180 / np.pi


secax = ax.secondary_xaxis('top', functions=(deg2rad, rad2deg))
secax.set_xlabel('angle [rad]')
plt.show()

Here is the case of converting from wavenumber to wavelength in a log-log scale.

Note: In this case, the xscale of the parent is logarithmic, so the child is made logarithmic as well.

fig, ax = plt.subplots(constrained_layout=True)
x = np.arange(0.02, 1, 0.02)
np.random.seed(19680801)
y = np.random.randn(len(x)) ** 2
ax.loglog(x, y)
ax.set_xlabel('f [Hz]')
ax.set_ylabel('PSD')
ax.set_title('Random spectrum')


def forward(x):
    return 1 / x


def inverse(x):
    return 1 / x


secax = ax.secondary_xaxis('top', functions=(forward, inverse))
secax.set_xlabel('period [s]')
plt.show()

Sometimes we want to relate the axes through a transform that is ad-hoc from the data and is derived empirically. In that case we can set the forward and inverse transform functions to be linear interpolations from the one data set to the other.

fig, ax = plt.subplots(constrained_layout=True)
xdata = np.arange(1, 11, 0.4)
ydata = np.random.randn(len(xdata))
ax.plot(xdata, ydata, label='Plotted data')

xold = np.arange(0, 11, 0.2)
# fake data set relating x co-ordinate to another data-derived co-ordinate.
# xnew must be monotonic, so we sort...
xnew = np.sort(10 * np.exp(-xold / 4) + np.random.randn(len(xold)) / 3)

ax.plot(xold[3:], xnew[3:], label='Transform data')
ax.set_xlabel('X [m]')
ax.legend()


def forward(x):
    return np.interp(x, xold, xnew)


def inverse(x):
    return np.interp(x, xnew, xold)


secax = ax.secondary_xaxis('top', functions=(forward, inverse))
secax.xaxis.set_minor_locator(AutoMinorLocator())
secax.set_xlabel('$X_{other}$')
plt.show()

A final example translates np.datetime64 to yearday on the x axis and from Celsius to Fahrenheit on the y axis:

dates = [datetime.datetime(2018, 1, 1) + datetime.timedelta(hours=k * 6)
         for k in range(240)]
temperature = np.random.randn(len(dates))
fig, ax = plt.subplots(constrained_layout=True)

ax.plot(dates, temperature)
ax.set_ylabel(r'$T\ [^oC]$')
plt.xticks(rotation=70)


def date2yday(x):
    """
    x is in matplotlib datenums, so they are floats.
""" y = x - mdates.date2num(datetime.datetime(2018, 1, 1)) return y def yday2date(x): """ return a matplotlib datenum (x is days since start of year) """ y = x + mdates.date2num(datetime.datetime(2018, 1, 1)) return y secaxx = ax.secondary_xaxis('top', functions=(date2yday, yday2date)) secaxx.set_xlabel('yday [2018]') def CtoF(x): return x * 1.8 + 32 def FtoC(x): return (x - 32) / 1.8 secaxy = ax.secondary_yaxis('right', functions=(CtoF, FtoC)) secaxy.set_ylabel(r'$T\ [^oF]$') plt.show() ## References¶ The use of the following functions and methods is shown in this example: import matplotlib matplotlib.axes.Axes.secondary_xaxis matplotlib.axes.Axes.secondary_yaxis Total running time of the script: ( 0 minutes 1.273 seconds) Keywords: matplotlib code example, codex, python plot, pyplot Gallery generated by Sphinx-Gallery
# aliquot

## < a quantity that can be divided into another a whole number of times />

I guess I just found another org-powered user! #emacs
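A quick sketch of the masthead's definition (the aliquot_parts helper below is just an illustrative name, not a standard function): an aliquot part of n divides n a whole number of times.

def aliquot_parts(n):
    """Proper divisors of n; each one divides n a whole number of times."""
    return [d for d in range(1, n) if n % d == 0]

print(aliquot_parts(12))  # [1, 2, 3, 4, 6]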
# Is it possible to make Mathematica reformulate an expression in a more numerically stable way?

I'm writing a numerical optimization, and I'm having a problem with an expression of the form $$e^{-t} (1+\mathrm{erf}(t))$$ The overall shape of the function looks correct, but when $t$ is large and negative, $e^{-t}$ is huge while $(1+\mathrm{erf}(t))$ is very small, and their product is also small. This leads to horrible floating point inaccuracies. I know, of course, that there are multiple things I can do to remedy this, including scaling my problem so that the values are more reasonably sized. Another is to reformulate the expression to avoid ever computing the huge intermediate values. In the particular example I've given, this is simple: $$e^{-t} (1+\mathrm{erf}(t)) = \exp{\left[ \log(e^{-t}) + \log{(1+\mathrm{erf}(t))} \right]} = \exp{\left[ -t + \log{(1+\mathrm{erf}(t))} \right]}$$ ...which is well behaved for all reasonable values of $t$. However, in my actual expression, there are various parameters with respect to which I take the derivative. The resulting expressions are hideous and reformulating them by hand is daunting (although tractable). Is there a way to make Mathematica reformulate an expression while attempting to avoid expressions that will be numerically unstable? I don't expect Mathematica to be automatically aware of which expressions will be problematic, but if I could, for example, simply instruct it to avoid using Exp[] unless it absolutely must, this would be a very useful tool for me (and I suspect for other people working on numerical optimization!). Note: I am not doing the optimization work in Mathematica. I'm only using Mathematica to help derive analytical gradients for my merit function. Therefore, any features of Mathematica that would eliminate the numerical inaccuracy only within Mathematica don't really help me.

- Eep. Is $t$ positive or negative? Here is a related question. – J. M. May 25 '12 at 14:42
- BTW: your function is equivalent to $\exp(-t)\,\mathrm{erfc}(-t)$; maybe that reformulation might be more useful to you. – J. M. May 25 '12 at 14:47
- @J.M.: $t$ can go to negative values, yes. I've actually written my code using $\mathrm{erfc}$, but the values resulting from the $\mathrm{erf}$ term are the same; it doesn't really help. The issue is the huge values from $\exp$. They don't survive to the output anyway, so my goal is to reformulate things such that huge numbers are never generated in the first place. – Colin K May 25 '12 at 14:56
- @J.M.: Matlab, however, does have a nice implementation of the scaled complementary error function (I'm assuming you mean $e^{t^2} \mathrm{erfc}(t)$), so a reformulation involving this function would be great. I actually hadn't thought of that yet. Thank you! For this particular problem I think I've got enough suggestions to make a solution, but for future use I'd still be interested in how to use MMA for this more effectively. – Colin K May 25 '12 at 15:25
- @Jens: I'm sorry, I should have been more clear about this: The function I've described is a fit function, not the merit function itself. – Colin K May 25 '12 at 15:43

If you at least know in advance the range in which you will later evaluate, you might consider Taylor series or related approximate forms, using Mathematica to derive such forms and to approximate an error bound.
In your example:

In[5]:= func = Exp[-t]*(1 + Erf[t]);
ser = Series[Exp[-t]*(1 + Erf[t]), {t, 0, 4}];
approxpoly = Normal[ser]

Out[7]= 1 + (-1 + 2/Sqrt[Pi])*t + (1/2 - 2/Sqrt[Pi])*t^2 + (-(1/6) + 1/(3*Sqrt[Pi]))*t^3 + (1/24 + 1/(3*Sqrt[Pi]))*t^4

That is a Taylor polynomial approximation near the origin. We can find the coefficient of the next term to give a first order approximation of the error term.

In[4]:= SeriesCoefficient[Exp[-t] (1 + Erf[t]), {t, 0, 5}] // N

Out[4]= -0.0365428

So the error is around 0.04*t^5 in magnitude near t = zero. This of course assumes the series converges in that region, but in this case at least we know it does (if it did not, your problems would go beyond numerical (in)stability). At t=1, this error is

In[13]:= func - approxpoly /. t -> 1.

Out[13]= -0.0732347

If you anticipate values of t in that range you might want to use more terms in the polynomial, or else switch before t=1 to a different approximation (this is assuming an error of 7% is more than you want to allow). The table below will give a better idea of the error sizes for t in the range (-1, 1).

In[16]:= Table[func - approxpoly, {t, -1., 1., .1}]

Out[16]= {-0.0239914, -0.0110741, -0.00431147, -0.0012192, -0.0000860751, 0.000162876, 0.000119029, 0.0000438515, 7.806*10^-6, 3.05852*10^-7, 0., -4.21962*10^-7, -0.0000151951, -0.000127231, -0.000581433, -0.00189768, -0.00499075, -0.0112854, -0.0228166, -0.0423096, -0.0732347}

As you would be evaluating outside of Mathematica, you would be substituting the approximations for the function of interest. What I indicate above is an idea of how Mathematica might be utilized to derive and assess them for quality.

- I would even go further and suggest that OP use PadeApproximant[] to derive approximations, as these can usually give more accurate approximations even slightly far away from the expansion point. – J. M. May 25 '12 at 15:14
- For instance, Table[Exp[-t]*(1 + Erf[t]) - g[t], {t, -1., 1., .1}], where g[t_] = PadeApproximant[Exp[-t] Erfc[-t], {t, 0, 3}], gives {0.0175211, 0.0105228, 0.00583002, 0.0029265, 0.00129813, 0.000490888, 0.000150096, 0.0000345791, 6.66391*10^-6, -3.98605*10^-8, 0., 8.10952*10^-9, 7.02389*10^-7, 8.68913*10^-6, 0.0000488437, 0.000178342, 0.000495227, 0.00113726, 0.00227074, 0.00407224, 0.00670781}. – J. M. May 25 '12 at 15:24
- @J.M. Thanks. Yes, Pade rational functions are probably better than Taylor polynomials for the stated purpose. – Daniel Lichtblau May 25 '12 at 15:40
The expression rapidly becomes extremely accurate even for small numbers of terms and modest values of $z$: ListLogPlot[ Table[Abs[Log[Erfc[N[i, 20]]] - logErfc[i, n]], {n, 2, 8}, {i, 3, 30}], DataRange -> {3, 30}, PlotRange -> {All, All}, Joined -> True, AxesLabel -> {z, "Absolute log error"}] To get more coefficients (which you would scarcely need, but this shows how the first eight coefficients were obtained), compute suitable limits of the difference. For instance, to obtain the ninth coefficient, evaluate Limit[9 (-2 z^2)^9 (Log[Erfc[z]] - logErfc[z, 8]), z -> Infinity] - Here is a less complicated way of obtaining a nice asymptotic expansion: Series[Log[Erfc[z]], {z, Infinity, 20}]. One might consider constructing a Padé approximant from the asymptotic series as well, but thanks to a bug within the internal implementation of PadeApproximant[], some trickery is required: Log[1/z] - Log[Pi]/2 - PadeApproximant[Log[1/z] - Log[Pi]/2 - Series[Log[Erfc[z]], {z, Infinity, 10}], {z, Infinity, 6}]. –  J. M. May 26 '12 at 2:25 ...and if all you want is the c list: CoefficientList[Series[Log[Erfc[Sqrt[z]]], {z, Infinity, 20}] + z - Log[1/Sqrt[z]] + Log[Pi]/2, 1/z] (# 2^#) &@Range[0, 20] –  J. M. May 26 '12 at 2:30
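Following up on the scaled complementary error function $e^{t^2}\mathrm{erfc}(t)$ mentioned in the comments on the question, here is a minimal numerical sketch (my addition, assuming NumPy and SciPy's erfcx; not part of the original answers) of a stable evaluation outside Mathematica. It uses $e^{-t}(1+\mathrm{erf}(t)) = e^{-t}\,\mathrm{erfc}(-t) = e^{-t-t^2}\,\mathrm{erfcx}(-t)$ on the troublesome branch:

import numpy as np
from scipy.special import erf, erfcx

def f_stable(t):
    """exp(-t) * (1 + erf(t)) without forming the huge factor exp(-t).

    For t <= 0, use 1 + erf(t) = erfc(-t) = exp(-t**2) * erfcx(-t),
    so the product collapses to exp(-t - t**2) * erfcx(-t), where the
    scaled complementary error function erfcx stays modest in size.
    For t > 0 the naive formula is already well behaved.
    """
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.empty_like(t)
    neg = t <= 0
    tn = t[neg]
    out[neg] = np.exp(-tn - tn * tn) * erfcx(-tn)
    tp = t[~neg]
    out[~neg] = np.exp(-tp) * (1.0 + erf(tp))
    return out

# Naive evaluation at t = -6 returns 0.0, because 1 + erf(-6) rounds
# to zero in double precision, while the stable form gives ~8.7e-15:
print(np.exp(6.0) * (1.0 + erf(-6.0)))  # 0.0
print(f_stable(-6.0))                   # about 8.7e-15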
# Iterated Pieri's rule, Schur functors and intersection of subrepresentations

Let $\lambda$ and $\mu$ be two Young diagrams, such that $\lambda$ can be obtained from $\mu$ by extending a single column by $b$ additional boxes. Let $\Sigma^\lambda U$ and $\Sigma^\mu U$ denote the corresponding Schur functors (which we consider as representations of $GL(U)$) and let $a \geq 0$ be an integer. By Pieri's rule, $\Sigma^\lambda U$ can be considered as an irreducible subrepresentation of $\Sigma^\mu U\otimes\Lambda^b U$. Thus, one may think of $\Sigma^\lambda U\otimes\Lambda^a U$ as a subrepresentation of $\Sigma^\mu U\otimes\Lambda^b U\otimes\Lambda^a U$. At the very same time, $\Lambda^{b+a}U$ is naturally an irreducible subrepresentation of $\Lambda^b U\otimes\Lambda^a U$. Thus, one may consider $\Sigma^\mu U\otimes\Lambda^{b+a} U$ as a subrepresentation of $\Sigma^\mu U\otimes\Lambda^b U\otimes\Lambda^a U$. I'm interested in computing the intersection of these two subrepresentations.

Let us decompose $\Sigma^\mu U\otimes\Lambda^{b+a} U = \oplus_{\nu\in P} \Sigma^\nu U$ into irreducibles. Here $\nu$ runs over the set of Young diagrams such that $\nu/\mu$ consists of $a+b$ boxes and is of width $1$ (every row of $\mu$ can be extended by at most one box). Define the submodule $W=\oplus_{\nu\in Q}\Sigma^\nu U$ consisting of those $\nu\in P$ such that $\nu\supset\lambda$ (recall that $\lambda$ can be obtained from $\mu$ by extending $b$ consecutive rows by a single box each). It's easy to see that every such $\nu\in Q$ appears in the decomposition of $\Sigma^\lambda U\otimes\Lambda^a U$ into irreducibles, and these are the only ones that can. Thus, the intersection should be contained in $W$.

Conjecture: the intersection coincides with $W$.

The main problem is the following one: despite every irreducible factor $\Sigma^\nu U$ in $W$ being distinct and appearing in both $\Sigma^\lambda U\otimes\Lambda^a U$ and $\Sigma^\mu U\otimes\Lambda^{b+a} U$ with multiplicity one, most of the time its multiplicity in the ambient representation $\Sigma^\mu U\otimes\Lambda^b U\otimes\Lambda^a U$ is quite big. Thus, one should somehow use the actual form of Pieri's embedding, as counting multiplicities is not enough.
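For concreteness, here is one small instance of the vertical-strip Pieri rule invoked above (an illustrative example of mine, not part of the question): take $\mu=(2,1)$ and $b=2$. The admissible $\nu$ are those with $\nu/\mu$ a width-$1$ skew shape of two boxes, giving
$$\Sigma^{(2,1)}U\otimes\Lambda^{2}U \;=\; \Sigma^{(3,2)}U\,\oplus\,\Sigma^{(3,1,1)}U\,\oplus\,\Sigma^{(2,2,1)}U\,\oplus\,\Sigma^{(2,1,1,1)}U,$$
each summand appearing with multiplicity one.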
The recommended virtual machine platform for the AMD64 Apertis system images is VirtualBox. It is typical for the Apertis SDK to be run in a virtual machine, though other image types can also be used. This enables development to be performed on computers running Windows, Mac OS, or different Linux distributions.

# System requirements

You will need a PC with the following configuration to install and run the SDK:

## Hardware

• Dual core CPU at 2 GHz or higher
• 8 GB RAM or more
• 12 GB or more free space on the hard disk

## Software

• A supported host OS (Windows, Mac OS, or Linux)
• Oracle VirtualBox. See supported version and installation instructions below.

### VirtualBox supported version

The following table contains the supported version of VirtualBox and VirtualBox Guest Additions for each release of Apertis:

| Apertis release | VirtualBox version | VirtualBox Guest Additions version |
| --- | --- | --- |
| v2019 | 6.1.12 r139181 (Qt5.6.2) | 6.1.12 |
| v2020 | 6.1.12 r139181 (Qt5.6.2) | 6.1.12 |
| v2021 | 6.1.12 r139181 (Qt5.6.2) | 6.1.12 |
| v2022 | 6.1.12 r139181 (Qt5.6.2) | 6.1.12 |

# Installing VirtualBox

If you have not yet installed Oracle VM VirtualBox, please follow these steps:

• Download the required version of the VirtualBox installation file for your host platform. Check the table of supported versions above to determine which version of VirtualBox is supported for the Apertis release you want to use.
• Follow the installation procedure provided in the VirtualBox installation guide for your host platform.

# VirtualBox Setup

VirtualBox can be configured either from its GUI or via the command line.

## From the VirtualBox GUI

• If you have not already downloaded an Apertis SDK image, the images page contains information regarding the options available. SDK images provided explicitly for use with VirtualBox have the file extension .vdi.gz.
• Extract the gzipped VDI image file to a local folder on your PC. The image for the virtual machine is a single file.
• Start the VirtualBox application (“Oracle VM VirtualBox” in the Start menu).
• Go to Machine → New or click the New icon. This launches the Create New Virtual Machine screen.
• Enter a name and select the following from the menus:
  • Type: Linux
  • Version: Debian (64 bit)
• Select the RAM size. Change the values manually according to your requirements. Assign at least 50% of RAM to the virtual machine if your total RAM is more than 2 GB, with 2048 MB recommended as the minimum for the SDK.
• Select Use an existing virtual hard drive file and browse to the location of your unzipped file (which should have the extension .vdi).
• Click Create to create the virtual machine.
• A few more settings need to be modified to ensure that the Apertis images boot. Select your new virtual machine and select Settings....
• Ensure that the following settings are set as required:
  • Check the System → Motherboard → Enable EFI (special OSes only) option
  • Check the System → Processor → Enable PAE/NX option
  • Set the video memory, Display → Screen → Video Memory; 64 MB is recommended
  • Be sure 3D acceleration is disabled. Ensure Display → Screen → Enable 3D Acceleration is unchecked
• If you want to start your virtual machine from the desktop without having to open VirtualBox every time, you can create a desktop icon. Right-click the entry of your virtual machine on the left and choose Create Shortcut on Desktop from the menu.
## From the Command Line

• Run the following commands:

$ RELEASE=v2020
$ REVISION=0
$ wget https://images.apertis.org/release/$RELEASE/$RELEASE.$REVISION/amd64/sdk/apertis_$RELEASE-sdk-amd64-sdk_$RELEASE.$REVISION.vdi.gz
--2020-06-09 16:20:04--  https://images.apertis.org/release/v2020/v2020.0/amd64/sdk/apertis_v2020-sdk-amd64-sdk_v2020.0.vdi.gz
Resolving images.apertis.org (images.apertis.org)... 2a00:1098:0:82:1000:25:2eeb:e3bc, 46.235.227.188
Connecting to images.apertis.org (images.apertis.org)|2a00:1098:0:82:1000:25:2eeb:e3bc|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2363044547 (2.2G) [application/octet-stream]
Saving to: ‘apertis_v2020-sdk-amd64-sdk_v2020.0.vdi.gz’
apertis_v2020-sdk-a 100%[===================>] 2.20G 6.10MB/s in 5m 57s
2020-06-09 16:26:01 (6.32 MB/s) - ‘apertis_v2020-sdk-amd64-sdk_v2020.0.vdi.gz’ saved [2363044547/2363044547]
$ gunzip apertis_$RELEASE-sdk-amd64-sdk_$RELEASE.$REVISION.vdi.gz
$ VDIFILE=./apertis_$RELEASE-sdk-amd64-sdk_$RELEASE.$REVISION.vdi
$ VMNAME="Apertis $RELEASE.$REVISION SDK"
$ vboxmanage createvm --register --name "$VMNAME" --ostype Debian_64
Virtual machine 'Apertis v2020.0 SDK' is created and registered.
UUID: 6370548c-2a11-4fb8-9411-5dc2ae686a8f
Settings file: '/home/user/VirtualBox VMs/Apertis v2020.0 SDK/Apertis v2020.0 SDK.vbox'
$ vboxmanage modifyvm "$VMNAME" --memory 2048 --apic on --pae on --largepages off --firmware efi --accelerate3d off --vram 64
$ vboxmanage modifyvm "$VMNAME" --nictype1 virtio
$ vboxmanage storagectl "$VMNAME" --name SATA --add sata
$ vboxmanage storageattach "$VMNAME" --storagectl SATA --port 0 --type hdd --medium "$VDIFILE"
$ vboxmanage storageattach "$VMNAME" --storagectl SATA --port 1 --type dvddrive --medium emptydrive
$ vboxsdl --startvm "$VMNAME"
Oracle VM VirtualBox SDL GUI version 6.0.22
(C) 2005-2020 Oracle Corporation
All rights reserved.

# Start the virtual machine for the first time

Use your just-created desktop shortcut, or click Start in VirtualBox, to start the virtual machine. The boot-up process might take a few seconds. On starting the virtual machine, VirtualBox might display some popup windows informing you about mouse, keyboard and color settings which might be different on the VM. Please read through the messages and click OK for all of them. If you check the Do not show this message again checkbox, you can permanently disable these popup messages for this virtual machine.

# Guest Additions under VirtualBox SDK images

Guest additions consist of device drivers and system applications that optimize the guest operating system for better performance and usability. They are designed to be installed inside a virtual machine after the guest operating system has been installed. For more information on the features provided by guest additions, see the VirtualBox manual. In the context of the Apertis project, guest additions allow developers to enable full screen rendering within VirtualBox SDK images. Full screen is not the only reason to install guest additions, though. Shared folders are another very handy feature.

## Installation

• Start your VirtualBox Apertis SDK image
• Go to Devices → Insert Guest Additions CD Image… on the VirtualBox menu bar. A virtual device will appear on the desktop.
• Double-click on the VBOXADDITIONS CD icon which should appear on your guest desktop. This will launch a file browser.
• Open a terminal via right click → Open Terminal in the guest additions folder
• Run the Linux Guest Additions installation script:

$ sudo ./VBoxLinuxAdditions.run

• Verify that a new directory with the guest additions has been created under /opt, or verify that the vboxguest module is loaded

You can now enjoy guest additions' enhanced features. Once the Guest Additions are installed successfully (the process might take a few minutes), restart the virtual machine; see here if you need help with that.

## Setting up shared folders

The VirtualBox VM settings can only be edited when the VM is closed, so please close all VMs, if any are running, before setting up a shared folder.

Go to Settings, click Shared Folders and select Add Shared Folders (use the icon on the left hand side). Browse to the path you created your share folder in (e.g. C:\SHARE). Click OK and close the Settings window.

Once you start the virtual machine, go to Applications → Terminal Emulator and run this command in the terminal:

$ sudo mount -t vboxsf HOST_DIR_NAME GUEST_DIR_NAME

E.g.:

$ sudo mount -t vboxsf SHARE /mnt

This command will mount the shared folder at /mnt.

## Adjusting virtual machine window size

Once the guest additions are installed, the window size can be changed. Select the option Adjust Window Size, then resize the window to make it appear as a normal working size.

## Put the virtual machine in fullscreen mode

You can switch the display of the SDK to fullscreen mode by selecting View in the VirtualBox menu and choosing Switch to Fullscreen. You can still access the most important options of VirtualBox in the menu that appears at the bottom of the screen when you get close to it with your cursor.

# Uninstall the virtual machine

To uninstall the virtual machine, open VirtualBox and right click the machine you want to remove. Choose Remove from the menu. In the following dialog you can decide if you want to remove the virtual machine from VirtualBox only, or if you want to delete the files containing the virtual machine from your hard drive as well. Deleting all files will remove the hard drive of the virtual machine and all files saved there, so please make sure to make backups of the files you still need before deleting them. Remove only will just delete the virtual machine from VirtualBox but leave the files containing the virtual machine intact.

# Non-SDK images

We recommend running minimal, target and development images on real hardware, but VirtualBox can run our amd64 images.

• Download the .img.gz and .img.bmap files for the image you need from the image repository
• Expand the downloaded .img.gz file:
  • If you have bmaptool (recommended), use it to create a sparse file:

$ bmaptool copy filename.img.gz apertis.img

  • If not, expand the downloaded .img.gz file (this will be slower):

$ gunzip -c filename.img.gz > apertis.img

• Convert the image to VirtualBox format:

$ vboxmanage convertfromraw apertis.img apertis.vdi --format VDI

• You can delete the temporary .img file now
• Create a new VM in the VirtualBox Manager
• Select Use an existing virtual hard disk file and choose the .vdi file
• Modify Settings, mostly the same as for the SDK (see above)
• Base Memory can be smaller for these images: 1024 MB is recommended
Awesome!

# In Specific:

## Presentation of Pairing Games

The meeting started with a good presentation by Moss & Laura from Cyrus Innovation[1]. The presentation was their “Pairing Games As Intentional Practice” talk, which they will be giving at Agile2010[2]. The talk was good and had interesting ideas about playing games with pairing like one might play games with coding katas, e.g., do the pairing with some specific restrictive rules such that lessons can be learned, activities practiced, etc. They presented several games (Socrates, Silent Programming) they have thought up, along with the classic Ping Pong, and guided us in the creation of our own game (One Minute Switching). We put some of these games into practice when we went on to do a coding kata. I think it would be very beneficial to have the game playing in the session, but their slot does not afford this. I suggested that they use an Open Jam[3] slot; they could announce it during their talk and then guide the games during the Open Jam. New games could be developed as well. I’d love to hear about any new games that are developed.

## Coding Kata with Pairing Games

After the presentation, and pizza, we worked on the Tic-Tac-Toe kata. We had five pairs and decided we’d try out some of the pairing games.

### The One Minute Switching Game

We started with the One Minute Switching game we had designed in the session. The intention is to keep momentum going by setting a fast pace of switching pairs every minute. The pace was relentless and unforgiving! Every time we’d start something we’d switch. I thought it was a great game; it gave an urgency to all decisions and actions, but it is definitely not something to do for too long. The retrospective brought out similar feelings from the other people.

### The Socrates Game

We switched and I got to pair with Abby[4]. I am so happy to get to hang out with Abby again even if for a short time; she is so full of energy and great ideas. This probably helped make this pairing session really ‘click’. Our game was the Socrates game, which involves the Navigator asking questions of the Driver, who must answer them after researching in the code, or by changing the code to answer the question. The group decided that the new person to the pair would be the Driver as it would facilitate them learning the ‘new’ code. It worked well, but all pairing groups eventually moved away from this game and went to a more standard pairing style after an initial use of the game. Our retrospective on this game brought out the thought that perhaps with so little code to learn/research the game was not well suited.

### The Silent Programming Game

This was the ‘hardest’ game. There are two rules:

1. No talking
2. Switch after three minutes if you haven’t already switched.

It was very hard for me to not want to just rip the keyboard away from my pairing partner every second. I felt I monopolized the keyboard too much in this case. It did not help that my partner was not familiar with C#/VS2010, which we were using on this machine. The retrospective brought up the point that perhaps this game works best when the knowledge levels of the pair partners are roughly equal and they both have a good understanding of what needs to get done in the next roughly 30 minutes.

### My Quick Retrospective on the Code Katas

I think for the future we should spend a few minutes to set up the initial code needed so we have a failing first (generically named) test.
Basically just something that says

public void ATest()
{
    Assert.Fail("test something here");
}

One or two ‘iterations’ of the One Minute Switching game were spent just doing this, as several of us were not entirely familiar with the IDE, etc. On the other hand, the One Minute Switching game was great for this; all choices of the setup needed to be made quickly as the clock was ticking!

I also liked the quick retrospectives after each pair switch. This helped me learn a little from each pairing session.

# In Conclusion

This was a good session. The presentation was good and I want to hear more about it. If you are going to Agile2010, consider going to this session.

This was especially great for me because I have limited/no opportunity to pair program in my day job. I find the practice engaging, challenging, and even fun. I’m glad I got a chance to do some tonight.

Footnotes:
[1] @moss & @lgdean on twitter; www.cyrusinnovation.com.
[4] http://thehackerchickblog.com/ and @HackerChick on twitter.

Tags: @boston_sc, codekata, pairprogramming, softwarecraftsmanship, tdd
What kind of realistic ranged weapons would be effective in spaceship combat?

So my indomitable army of bunnies have developed space travel and built their first spaceship for the exploration of the great universe. However, they have a problem! After some consultation with the great god, Google, they have come to the conclusion that laser and plasma weaponry are most likely not feasible, leaving them with magnetic-based weaponry and missiles (assume that they don't have an innumerable amount of nuclear missiles).

However, it seems to me that firing missiles in space might not be an effective weapon against other spaceships. They would probably be unable to maneuver well enough to hit a fast moving spaceship, and any civilization advanced enough to build advanced spaceships would have good anti-ballistic missiles and close-in weapon systems.

Magnetic-based weaponry is feasible: you can have the spaceship's engine power a railgun and fire off kinetic projectiles at high speeds to hit enemy ships. However, I'm worried about the potential recoil from a railgun knocking the spaceship around (I don't think anybody wants to get knocked out of their precise orbit around the planet when they are engaging enemies), and I think spaceships can also avoid the railgun projectile, given some distance and anticipation of the projectile (a book said that some ship system could detect the massive buildup of energy needed to fire the railgun, and they dodged it).

Nukes, as asked in this question, seem to be highly effective, but I would assume that most ships won't carry a ridiculous number of nukes to use in minor skirmishes (can you imagine if an accident involving a spaceship with a few hundred nukes on it happened when it came in for a landing on the planet?).

Do we have any effective ranged weaponry for use in space combat which is feasible and able to be uniformly supplied to all ships? I can't imagine going into space, only to use scaled-up rifles in space combat. Someone correct me if my assumption about railguns and magnetic-based weaponry is wrong and they are in fact the most effective weapons for space combat.

• ITT: EvE online – user23110 Sep 19 '16 at 14:31
• Long range weapons are going to need some sort of self guidance, and really the only feasible thing is going to be missiles (current missile tech already checks off all the need-to-dos). They can be countered in a variety of ways, but they will still be the first strike option. As range starts to shorten you need less and less self guidance until direct fire kinetic weapons become optimal. The question about what you're going to use more of revolves around how effective your anti-missile systems are. – Marky Sep 19 '16 at 14:51
• In a vacuum, anything you can out-maneuver, you can probably just out-run. – chepner Sep 19 '16 at 16:41
• One of the big questions here is: what kind of travel method do these ships use? Is FTL travel (wormholes, hyperspace, flicker-jumping, etc.) during combat going to be a Thing, or are fights going to take place at relativistic or even sub-relativistic speeds? How far apart are ships going to be? All of these details affect the answer to your question. – Draco18s Sep 19 '16 at 18:58
• "they have come to the conclusion that laser and plasma weaponry are most likely not feasible" Why? If you want something better, tell us exactly why those 2 are not feasible in your opinion.
– Mast Sep 20 '16 at 13:51

For something that's relatively small:

Pulse laser ablation

Basically, a laser with a high enough energy, focused on a small enough spot, will instantly turn any surface into a gas. This gas, in a vacuum, will immediately disperse, exposing a hole that was drilled by the package of photons. However, the real damage comes when the laser excites the surface into a plasma, which has the potential to damage its surroundings. As the laser repeatedly hits a target, the material heats up, making each successive hit more damaging than the last, making the laser a weapon that will win a war of attrition.

As things get bigger, the way that a laser 'turret' tracks its target gets more funky, thus a space station could effectively use

Missiles

No tracking required, super long range; the missiles actively seek a target to destroy, as opposed to a turret. With a sufficient launching system, missiles could have an infinite range. Though, they could be 'intercepted' by those pesky lasers. This could be solved by launching a higher number of lower-damage missiles, effectively overwhelming any sort of defense.

There is a slight problem with missiles: things could outrun them. You don't see this often, but speedy spaceships could be built with light offensive systems for the sole purpose of outrunning missiles. After burning for a few minutes, the missile will run out of fuel and become a projectile, at which point the ships would move out of the way. However, to eliminate the problem of heat signature tracking, there would probably be another ranged weapon:

Bombs

Bombs are easy to use: just give them a push in the general direction that they should detonate in, and watch them sail off majestically. The problem with countering bombs is that they have no heat signature to lock on to. They'd be invisible to a non-optical tracking system. Bombs would be effective at eliminating things like hordes of smaller adversaries, and in some cases, a single larger one.

• How to avoid targets outrunning your missiles: instead of an explosive payload, mount ablative lasers on the missiles. They keep firing at the rear/engine of the target while matching vector and acceleration to the best of their ability. Laser guided missiles with lasers FTW. @Jammin4CO – Mindwin Sep 19 '16 at 17:53
• @Mindwin better yet, have the missile launch LAsERs (Light Assault and Engagement Rangers) which then fire ablative lasers at the target. That way, the LAsERs' speed will temporarily match the target, and you can land more shots! Then you have LASER guided missiles firing LAsERs that fire LASERs. – user23110 Sep 19 '16 at 17:59
• Outrun a missile? The bigger the target, the more unlikely it becomes. It costs loads of energy to compete with the acceleration of a small missile. And once you outran a missile, the next one will be fired already adapted to your new trajectory, so it costs again loads of energy to change it fast enough to outrun the new missile. And if the missiles were designed by me, they would sometimes shut down their engines, looking like they were out of fuel, but actually waiting for the target to change its path to something convenient (trying to outrun the next missile), to suddenly accelerate again… – Holger Sep 19 '16 at 18:33
• Outrunning missiles doesn't make much sense (in general; in specific cases, maybe).
I'm assuming there is some other purpose to this ship, beyond outrunning missiles (transporting cargo, perhaps, or squishy sacks of meat that can't handle many Gs?); if not, why waste resources on something that will soon be ignored? Trivially: make a copy of your speedy ship, but strip out the cargo or passengers and everything needed to support them. Now you have a missile that is lighter, faster, has greater endurance, and is more disposable than your ship. A purpose-built missile could be even better. – 8bittree Sep 19 '16 at 19:33

• That F/A-18 vs LGM-30 Minuteman ICBM comparison basically demonstrates exactly what I'm talking about. About 1250 miles at up to Mach 1.8 for the F/A-18, vs about 8100 miles at up to Mach 23 for the LGM-30. And that's for something designed to hit mile-wide stationary targets. – 8bittree Sep 19 '16 at 19:57

Relativistic bag of sand

At the speeds spaceships fly, anything can cause great damage, especially if it flies very fast. Just take a look at how much damage a small fleck of paint can do to current day spacecraft. A bunch of sand fired at a significant fraction of light speed will be close to impossible to detect in time, and impossible to defend against with point defenses even if detected. By giving it some spread, you can even compensate for small errors in accuracy. Imagine it like a huge space shotgun. Even if the enemy spaceship somehow survives a hit, it will be stripped of sensors, weapons and engines.

It would be a very potent weapon especially at the few light-seconds range (Earth-Moon distance), but very dangerous even at much longer ranges, where the target must be constantly moving in random patterns to avoid it. Within the few light-seconds range, not even that would save the target, as its mass would prevent it from moving enough to avoid getting hit.

• It might be interesting to calculate the energy required to accelerate a bag of sand to a suitable speed and how hard it would push your ship back. My guess is that if we took our best rocket technology today and did that, then even with our best engines/thrusters compensating for the thrust, you'd be shooting out of the solar system backwards and unable to stop for a few generations. – Bill K Sep 19 '16 at 18:59
• At any reasonable speed, your chaff countermeasures are a weapon in their own right! – Graham Sep 19 '16 at 21:58
• @BillK Why not just calculate it (en.wikipedia.org/wiki/Momentum#Conservation): for a bag of sand of, say, 50 kg, the momentum is roughly 50*c = X*v; say v = 1 m/s, then X = 1.5e10 kg (just to illustrate the idea). Also, to solve the recoil problem, shoot 2 bags: one at the enemy, one as a recoil bag. – MolbOrg Sep 19 '16 at 22:39
• Yeah, I was visualizing that, a ship that always shoots a copy of everything out both sides :) Kind of wasteful though--it means you need to carry around 2x ammo and expend 2x the energy to accelerate it. Perhaps the best solution is to land your ship on an asteroid and start breaking it down/shooting pieces of it at the enemy. Hmm, this is actually a really good answer--it's not so much what you shoot as where you get it and how you cancel the inertia of shooting it; an external body solves both problems. – Bill K Sep 19 '16 at 22:46
• You don't really need to shoot anything on the other side, you just need an equal and opposing force. Use a small thruster controlled by the weapons system.
– Drunken Code Monkey Sep 20 '16 at 3:24

Bullets (Unless the Ships are Armored - Not Stated in Question)

Note: OP states that ships don't want to get moved out of their orbits during battle, so I'm not assuming a particularly high-velocity fight.

Basic, standard bullets would be pretty devastating to any ship in space and easy to carry/fire. No energy buildup, no drain on your power, and as long as you could hide a muzzle flash, your enemy wouldn't even know you were firing on them until the holes started showing up in their hull. You could also have many, many turrets able to target different ship trajectories or areas of the ship. Sure, they won't blow up the enemy with a great fireball, but how many holes to the great void of space do you think a ship could have before you consider it a big problem? It's not exactly equipped for warfare, but just a few shots could probably wreak major havoc on something like the ISS.

Bullets also have a few advantages over larger weapons - you can carry a LOT of them and they're so small that tracking them seems infeasible. That creates a situation where maneuvering is very difficult for your enemies (where to go?) and ensures they also won't be prematurely blown up by antimissile systems. Plus, explosions from larger ordnance in space would create a lot of random debris that could come back to haunt you. Since there's not a lot to get in their way in space, despite some considerable distances, they won't be slowing down either.

Then ensure YOUR ship is full of anti-missile systems, of course...

• Very good choice. You can accelerate bullets quickly to a much higher speed and you can fire a spread to compensate for maneuverability--a moving ship only has a small area of possible positions it can be in when your bullets get to it (assuming the ships aren't too far apart)--all you have to do is keep firing a spread that covers the entire area and compensate immediately for course changes. Notice, however, that the fact that you have to fire thrusters to compensate for the mass you are ejecting limits YOUR maneuverability. – Bill K Sep 19 '16 at 18:51
• And if you're using bullets, might as well accelerate them electromagnetically (rail- or coil-gun), because you don't need to carry any propellant mass, there is no propellant to explode, and you're not limited by the speed of sound of your propellant deflagration, so you can achieve speeds faster than any traditional artillery piece. – Nick T Sep 19 '16 at 23:47
• I like this idea, because the two big problems with bullets on Earth are air resistance and gravity. Without them, you wouldn't even need rifling -- erratic tumbling wouldn't alter the course, thanks to a lack of air to push on -- and you could use much less energy, since you don't need a bullet to continue traveling faster than sound after several hundred meters through air. – Nic Hartley Sep 20 '16 at 5:36
• A space shuttle moved at up to 26,000 km/h (7.2 km/s), while the fastest ballistic chemical-propelled projectile is 8.5 km/s. (Source: en.wikipedia.org/wiki/Muzzle_velocity). While I do agree that the idea of spraying bullets at something is generally good, chemically propelled bullets become utterly useless in space, where everything is so fast and so far away from everything else. Railguns reach up to 35 km/s. Much better, but at an estimated combat distance of
100,000 km and above (basically 1/4 of the Earth-Moon distance, and still orbital combat) still useless (the projectile travels for 45 minutes). – Andreas Heese Sep 20 '16 at 6:23
• It's going to be really annoying getting hit by one of your stray bullets a hundred thousand years from now. – William Robertson Sep 20 '16 at 15:52

Having your spaceship adjust for recoil is almost trivially easy. If the guns are small enough, then a short burst of thruster applied in the opposite direction will cancel it out. If the railgun is very large, or even the main weapon, then it is probably best to build it into a spinal mount (i.e., the rest of the ship is built around it). The mass of the ship absorbs most of the recoil force, and a blip of main engine power cancels out the rest. This is probably the most plausible solution, since hypervelocity railguns or coilguns need to be very long to generate the velocities required for space combat.

[Figure: HAVE STING space railgun concept, to scale with the Space Shuttle. Illustration by Scott Lowther.]

Moving to alternatives, the use of a laser allows you to build very lightweight missiles without needing a lot of rocket fuel. The laser can be focused on the back of the missile to ablate ice, plastic or other lightweight materials, which then expand and provide thrust to the missile. Laser-launched missiles can be much smaller and cheaper (no expensive rocket booster stage), and the ship itself can be much safer, since there is no need to store rocket propellant or solid rocket fuel aboard for missiles. As well, since the laser plasma can be heated to an almost arbitrary degree, such a missile will have a higher Isp than a conventional rocket, so it can be smaller for the same amount of delta-V. Adding a homing system and a small terminal engine to account for evasive action by the target is optional; even a box of kitty litter moving at orbital velocity can have a huge amount of energy (often the energy released by high speed impacts is calculated in "Ricks":

In fact, there is Rick Robinson's First Law of Space Combat, which states that, "An object impacting at 3 km/sec delivers kinetic energy equal to its mass in TNT". Put it another way: put one kilogram of anything in your gun, fire it at a target, have it impact at 3 kilometers a second, voilà! You've got yourself the equivalent of 1 kilo of TNT going off. (If you need a visual of how much TNT this is, one stick is about 200 grams, so 5 sticks of TNT.)

)

Finally, nuclear weapons are very compact sources of energy, and can be used for all kinds of exciting effects. The Conventional Weapons page at Atomic Rockets has the details, but the short version is this: Nuclear explosions can be used to drive "shotgun" charges of pellets at speeds of up to 100 km/s. Nuclear shaped charges can drive streams of liquid metal at speeds of nearly 3% of the speed of light. Casaba Howitzers, a special form of nuclear shaped charge, can accelerate a star-hot spindle of plasma at about 10% of the speed of light, and deliver energy comparable to a super high energy laser (Ravening Beam of Death or RBoD) on target without all the heavy and expensive laser machinery.

So there are lots of interesting options for space combat even if you only want to limit things to kinetics.

Some ideas:

• Why no missiles? They can be like small ships, with a full drive, ECM, anti-counter-missile lasers and so on. Well, I guess technically that would be drones already...
• Ballistic weapons are IMHO completely useless.
If you need to cover distances of several light-seconds, dodging all these projectiles should be a breeze for any sufficiently agile ship. You might try flooding space with projectiles so they can't dodge, but we are talking about a LOT of open space here, and a GIGANTIC volume of space to cover... let's say combat takes place at 1/100c and you are one light-second away from your target... then the volume where your target might be is roughly 27,000,000 km³... and while your projectile travels all the way, the enemy gets the information about it at light speed, so they can easily calculate how to dodge it.
• What about mines? They could have a small but very powerful drive, be dormant, and be painted with something that absorbs almost all light, making them impossible to detect. Together with a medium-sized nuke, they'd obliterate anything coming too close.
• Depending on how your rabbits managed to solve problems with micro-meteorites, firing an AA shell in the path of a spaceship might prove useful. If the ship is sufficiently fast, hitting a cloud of metal scraps will shred the ship. Again, you face the problem of not knowing where the enemy ship will be, and of your projectile being slow, but it might make the "flood space with stuff you don't want to hit" thing easier.
• Generally speaking, I think your projectiles need their own drive and maneuverability to make up for the other ship changing course. So I really think missiles are the way to go.
• Laser weapons, whatever the reason Google said they wouldn't work, move at the speed of light, making them MUCH harder to dodge and the possible timeframe you need to predict MUCH shorter. So they should hit much more often; give them a try, maybe? :)
• If fighting an enemy in a stationary orbit... go to your handy asteroid belt, gather 2,000 smaller asteroids, tow them, accelerate to 1/10c or above, and fire them at the stationary target in a small cluster. If you are 100% sure the target won't move, just use a single one, to make them harder to detect. This can also be used to annihilate space stations, moons, planets... even from outside their solar system, if you can wait long enough. (The thought that 1,000 years ago someone fired an asteroid at Earth from Alpha Centauri or any other neighbouring star system is quite eerie... our world is so fragile.)
• What about building a giant microwave death ray? Just point it at the enemy ship long enough. If close to the sun, it might have problems dissipating heat already; if you add additional heat... uh oh.

• You have no time to dodge a laser, even if it is fired from light-years away, because you can't see it coming until it hits you. – Devsman Sep 19 '16 at 19:06
• @Devsman turn it around: you can't aim a laser at anything because you only know where the ship was, let alone is or will be. – Nick T Sep 19 '16 at 23:49
• One thing about dodging ballistic weapons: It would require lots of fuel, which could make ballistic weapons quite effective. – Michael Sep 20 '16 at 5:21
• @NickT This is very true, although at smaller distances (my saying light-years was an intentionally impractically large figure) the difficulty of aiming is reduced while the impossibility of dodging is constant. – Devsman Sep 20 '16 at 12:33
• @Devsman: Actually, you can dodge a laser. Maybe not from a surprise attack, but once a battle has started, I KNOW the other guy will shoot at me, and chances are I will know how long his lasers take between shots.
So I can keep my course constant, and around the time that I think he is firing, I quickly change course. To make it better, I also randomly change my course a bit earlier and later. Even if the laser only takes 1/10 of a second to travel to me, if I am changing my course during that time, he probably won't hit me, assuming I move at relevant speeds. All about distance, though. – Andreas Heese Sep 20 '16 at 12:36

Space is big.

If you are low-tech, travel anywhere takes a long, long time. You use chemical rockets. Getting to orbit is the hard part. You can move around the solar system, but you only get to go places, you don't get to come back: you don't have the fuel to go and stop, then go again. Your ships are tiny, fragile, and no living beings are on them past orbit around your planet.

At the next tier, you are using solar sails and high velocity ion propulsion. We are at the cusp of this tier -- we have sent ion drive unmanned satellites to do some grand tours of the solar system. Unlike our previous probes, these can stop off at a planet, enter orbit, explore using sensors, then fly off somewhere else.

Going beyond that, you are using either something exotic (reactionless drives) or something brutal (Orion-based nuclear drives).

The next stage I can describe is that of a K1 civilization, where you can do things like build launcher lasers to send a small probe to do a flyby of a nearby star. As a large K2 project, you could take an asteroid (like Ceres) and laser-launch it up to speed to colonize another star, with flight time in the 1000s of years. The asteroid would use exotic physics to brake somehow, as coming to a stop without the laser-launcher is going to be difficult.

At any of these stages, the kinetic energy of the ship itself is going to be absurd. Orbital velocity around a planet, all by itself, makes a pebble orbiting in a significantly different orbit go faster than any bullet we have fired in war. It just gets worse as our ability to travel goes up. Basically, spaceships are so ridiculously fragile compared to their speed that there is no practical armor unless you invent force shields. Any weapon (pebbles, sand, etc.) that makes contact will be very, very destructive, exploding into plasma. While you may think that storing nuclear weapons would somehow add danger, the KE of an interstellar ship with any decent speed is going to make a few dozen nuclear weapons irrelevant. At 0.03c (1000s of years to the nearest star), a 100 kg dumb rock carries about 10^15.6 J, or a megatonne of TNT.

So a weapon will consist of a mass moving in a different orbit/track. If any amount hits, the target is destroyed (chemical bonds aren't strong enough). Dodging consists of seeing the weapon and moving out of its way. Weapons track by thrusting efficiently (like a ship does). Delta-V scales, so weapons are limited by how small you can make the engine technology more than anything else (small engines mean more things to dodge). If the attacked ship has a better engine, it can "out-run" (sideways) the defending weapons/dodge them. Static defences are hard, due to the square law (there are lots of ways to approach a target, and space is empty).

With science fiction, you'll end up wanting to think about the possibility of crazy propulsion technology and even energy shields. Because given current science, interstellar war isn't ships going pew pew. In short, interstellar travel of biological beings involves entire-civilization efforts of K2 level civilizations (capable of consuming an entire star's energy output).
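As a quick back-of-envelope check of the kinetic-energy figures quoted in this thread (a sketch added for illustration, assuming the standard 4.184 MJ per kg of TNT equivalence; the scenario numbers come from the answers above):

J_PER_KG_TNT = 4.184e6   # 1 kg of TNT releases about 4.184e6 J
C = 2.998e8              # speed of light, m/s

def ke(mass_kg, v_m_s):
    # Newtonian kinetic energy; at 0.03c the relativistic correction is ~0.1%
    return 0.5 * mass_kg * v_m_s ** 2

# Rick Robinson's First Law: 1 kg at 3 km/s carries about its mass in TNT.
print(ke(1.0, 3.0e3) / J_PER_KG_TNT)                 # ~1.08 kg TNT equivalent

# The 100 kg rock at 0.03c: about a megatonne (1 Mt = 1e9 kg of TNT).
print(ke(100.0, 0.03 * C) / (J_PER_KG_TNT * 1.0e9))  # ~0.97 Mt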
Interstellar travel of post-biological civilizations could be somewhat easier, but it mostly consists of sending replicators to the target system and building a new civilization. The weapons of a K2 civilization might involve stellar manipulation to generate controlled solar flares, which are then lazed to launch relativistic smart missiles at a hostile star. Or maybe poisoning their star to make it somehow go nova.

• It is possible to defend against relativistic projectiles by placing a foil shield, say at a distance of 10-20 light seconds, against 10-100 kg projectiles (I guess; it depends on the projectiles and their mass, just to illustrate the idea). A collision with the foil will produce a cone of plasma, and the distance will determine the percentage that will hit the target. Also, plasma can be affected by magnetic fields, even at relativistic speeds - so a combination of those 2 might make a pretty decent defense system. – MolbOrg Sep 19 '16 at 22:51
• @MolbOrg Math time! 1 cm cross section iron bar weighing 100 kg. Foil shield at 10 light seconds. Fe innermost electron binding energy is 14k electron volts. So to turn 100 kg of iron to (perfect) plasma, you need 2E12 joules. The foil it runs into needs to be about 45 mg. The surface area of a 10 light second sphere is 1E20 m^2. At 45 mg/cm^2 that is 5E19 kg. This is only 1/10000th of Earth's mass. And you can probably make this work without completely ionizing the iron. But still, this is far from easy. – Yakk Sep 20 '16 at 1:58
• lol, nice, you have potential. That was the principle of the defense: most people see the funny devastating results the projectile may cause, and very few (I saw one, actually) look into what a small amount of matter might do to that projectile; many see it as the ultimate weapon (which it is not), and that is the point. For most passive projectiles people think about, the principle will work; there are ways to counteract that defense - the usual shield vs sword. Although the point is not in making plasma, but in dispersing the projectile in the first place. If you do not see the enemy, and he sees you so well, it is probably game over. – MolbOrg Sep 20 '16 at 5:06
• But if you do see, or can at least guess, which direction the projectile might come from, we are talking about much less shield mass. The example is not perfect, and was not intended to be, and much depends on details, including details about the projectiles; with 0.9999999c projectiles it might not work so well (just guessing). But even if we are talking about a full shell, if it is something like Earth to defend, it is worth spending hundreds of times Earth's mass if needed; sources of matter are available in our system. – MolbOrg Sep 20 '16 at 5:06
• An easy solution for ship defense, though quite the challenge - destroy the source of the projectiles, take cover behind a moon/planet/star, divide the ship into 1 t chunks and scatter them, stretch the ship out and maneuver the vital components (people), or surround the ship with chunks creating covered zones (line of sight) and maneuver the vital components randomly from zone to zone. Defense alone never wins, so destruction of the source is the primary goal; detecting the direction of the source is the first step, then the number of sources. – MolbOrg Sep 20 '16 at 5:07

Drone Ships

You can have some semi-autonomous drone ships that themselves carry ballistic weapons sufficient to damage another ship. This neatly gets around the problem of recoil bumping you out of orbit. Pump enough of these out and some of them have to get through the other ship's defences. You might also like to arm them with some flares to help stop any anti-drone fire.
• I forgot about that; RC drones sound fun, though they are not strictly ranged weaponry (what are they even categorized as? Autonomous weaponry?) – Skye Sep 19 '16 at 13:32
• Well, there's a range from the mothership to the drones, and the ballistic range from the drone to the enemy. I guess. – Snow Sep 19 '16 at 13:34
• The problem with drones is you still have to carry the delta-V they use to maneuver in the first place; unless the drone body itself takes an ultra-small amount of mass, this starts to greatly cut into the amount of ordnance you can bring with you; ten pounds of fuel and ten pounds of drones carrying ten pounds of weapons is inherently less destructive than ten pounds of fuel and twenty pounds of weapons. – Marky Sep 19 '16 at 14:46
• This is the question that Worldbuilding has always needed! – Caleb Woodman Sep 19 '16 at 15:22

OK, so I have been biding my time and waiting for this question for a long time, so this answer might be long. The first order of business is to decide what the target looks like. Solar panels? Living crew? Reserve fuel? Every weapon needs a target, or it cannot be efficient.

With a living crew, the best tactic is multiple hull breaches. In space, bullets fly as fast as they were fired, indefinitely. Depleted uranium slugs are commonly used today for taking down tanks. A well trained gun turret like on modern combat helicopters might do the trick on its own.

Laser ablation of a hull or primary target area is feasible, but I believe targeting and usefulness would be improved if the entire mechanism were a self-contained drone with its own nuclear battery, allowing for closer shots, multiple reloads, and flanking.

A railgun is a fun idea, but it needs a spot to hit. That's millions of dollars of aiming equipment to make sure it hits the target, and the magnetic and kinetic backlash on the owning ship would mean a no-crew environment.

A nuclear bomb is absolutely overkill, and radiation storms would result on the planet below if one were fired. Instead, an anti-aircraft flak cloud gun would be a safer option. These weapons go a given distance, then detonate into a large burst area of shrapnel, like a fragmentation grenade. A bonus is that the shell can start very small, and is thus hard to counter.

Another spacefaring weapon is a nano drone strike. Release a few dozen drones; each has a jet, a guidance system, fuel, and a single bullet. Only one needs to succeed to make a hull breach, and they can get as tactical as they need.

What about no crew, though? Well, an EMP, or electromagnetic pulse, can shut down electricity in the entire ship unless it's fully insulated, and all you need is a battery on a spike, overloading and frying the systems. Dead in orbit.

A missile may be a good option, but it needs to be small. I suggest firing it long before the jet system activates, so that it seems like a non-primary target until it is too late to stop. The kinetic force of even a 5-kilo bomb (like dynamite) has enough yield to cripple every ship humanity has ever made.

Now, space is usually way too big for a mine, but a fight in orbit may allow a payload of small bombs to be carpet-spread over the predicted area, disguised as trash or dead satellites. They might also be leveraged in a fight involving a chase.

If you want to use a missile with bigger ordnance, just send some cheap decoys with it, and they won't know what to hit. The decoys may have a small payload, just in case, for maximum grief factor. In space, anything you can't counter is your demise.
I think the easiest way to create a devastating space weapon with no extraordinary technology is to make the ship itself the projectile. Shape the ship's hull into a cutting arrow point, and heavily armor the thing with materials capable of withstanding the impact. Just build up velocity and ram the opposing ships. Your fleet of small fighters will tear the opposing armada to shreds without firing a shot. Complementary weaponry could be basic fragmentation anti-vehicular mines dropped behind, as the whole combat strategy hinges on piercing right through enemy lines, and mines could be dropped inside the larger enemy ships. Maybe a gatling-type laser array in the tip, to soften the impact point on armored targets. Or you could design remotely guided missiles / drones in a similar fashion, and fill them up with high explosives.

• Your crew will thank you when they hit their heads on the windows of your ship XD – GameDeveloper Sep 20 '16 at 9:09

(Wow, this has attracted a lot of answers.)

I'd recommend something like the combat wasp system in Peter F. Hamilton's Night's Dawn trilogy; there's a nice description of the basics right at the start of The Neutronium Alchemist (the second book), and there are good descriptions of battles using this system throughout the trilogy.

Basically it boils down to dogfights-by-proxy. If beam weapons are relatively ineffective at a distance and/or difficult to aim, you need to get close to your opponent and/or use warheads or collision to inflict serious damage. Manned ships are typically large and hence difficult to manoeuvre, and contain squishy components that don't tolerate rapid directional changes and high acceleration particularly well. You also want automatic systems in charge of the individual 'wasps', given the speed at which decisions and manoeuvring need to be made.

So combat gets dominated by small unmanned space-capable vehicles ('combat wasps', in the series) that are as light, and hence manoeuvrable, as possible. Each ship carries a payload of them, and they use a range of payloads ('submunitions', I think, in the books) ranging from nothing (damage is purely kinetic) to beam weapons, explosives, nuclear warheads and antimatter. Ships carry as many wasps as they can and vary strategies in terms of release rate and payload diversity. There are a couple of ship-based countermeasures like chaff for use as a last resort, but basically that's it.

It's a nice system since:

• it works fairly well from a physics point of view (and is fairly hard-sf, apart from the antimatter);
• it's an easy concept for readers to grasp;
• it's a plausible explanation for dramatic space battles with lots of explosions.

Humans get to make high-level strategic decisions and preprogram tactics. It's worth remembering that targets at the bottom of a gravity well are generally highly vulnerable to any sort of spaceborne attack, simply because of added kinetic energy.

Alternatively, you could go for something like the system in Ken MacLeod's Fall Revolution series (particularly The Cassini Division): assume that beam weapons are difficult to avoid but can't realistically cause physical damage, and fight battles as long-distance infowars that use lasers etc. purely for hacking attempts.

Space, in general, is a place lacking most material substances that are abundant on a planet's surface. So, having a space weapon that is not easily rechargeable in open space looks like a very impractical idea.
Space travel may take ages (even when traveling near the speed of light), and having a gun with no ammo for most of the trip is really dumb. So, in most cases, a spaceship must be able to produce or refill ammo in open space. The most available type of energy in these circumstances is solar energy, so weapons using that kind of energy (like EMP cannons and lasers) could be reloaded on the way, and are therefore more realistic in a place with a tiny percentage of matter but full of light. Reloading a rocket launcher in open space is indeed possible, if the spaceship is really huge (either holding a significant stock of rockets or able to manufacture them on the fly). Space shrapnel sounds more realistic - its source could be a random asteroid passing by. Smart drones, cutting the enemy's armor in close combat, also seem to solve the problem of recharging, if their dodge and return rates are consistently high. It's worth noting that more than 2/3 of the matter in the universe is antimatter (according to current scientific knowledge), so statistically that kind of matter would be used in open space more widely than the traditional kind. P.S. Speaking of open space, it seems that proper camouflage combined with a speed burst is the most effective way of combat. P.P.S. With proper armor and something like an energy shield, the ship itself can be like a bullet.
• Antimatter is not dark matter, which I think you're confusing. – Bobson Sep 19 '16 at 20:32
• more than 2/3 of matter in universe is antimatter - wow, unexpected – MolbOrg Sep 19 '16 at 22:32
• "the ship itself can be like [a] bullet." True - but people who want to survive combat action usually consider it a bad idea to ride inside kinetic munitions. – brichins Sep 20 '16 at 18:43

About the railgun: since you know the railgun's characteristics, it's possible to calculate the intensity and direction of the recoil. Consequently, you can compensate for the recoil with your engine. Moreover, you can also shield the railgun to hide the massive buildup of energy emitted when firing, thus making dodging more difficult.

I think we need to look at mixing two ideas in order to build a viable low-tech weapons system. First, let's look at missiles. You're wrong about evading them--the missile has the advantage here, as it's much cheaper to move a missile than a ship. You'll burn up the target's fuel trying to evade your missiles; eventually evasion isn't going to work. However, a countermissile is going to be a lot smaller than a missile; given roughly comparable ships, I would expect the countermissiles to win. (Let's look at the closest equivalent we have: anti-ship missiles vs SAMs. The anti-ship missiles are a lot bigger and more expensive, and the only way to get them through good defenses is to swarm the defenses with more rounds than can be shot down.) Lacking the ability to saturate defenses by some means (note that this depends on tracking range: if missiles can only be detected at short range, you might be able to get them through based on a lack of reaction time), they're pretty useless. Various ballistic projectiles have been suggested, but those are going to need some awfully accurate gunnery. If you could aim them adequately they would be very nasty, as they're much smaller and lighter than a countermissile; simply keep firing and you'll get through when their magazine runs dry. The accuracy of shooting is a serious issue, though.
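A minimal worked example of the railgun recoil bookkeeping mentioned a couple of paragraphs up (all figures assumed for illustration): by conservation of momentum, a ship of mass M firing a slug of mass m at muzzle speed v picks up

$$\Delta v_{\mathrm{ship}}=\frac{mv}{M}=\frac{2\,\mathrm{kg}\times3000\,\mathrm{m/s}}{10\,000\,\mathrm{kg}}=0.6\,\mathrm{m/s}$$

directed opposite the shot, so the engine must supply an equal and opposite impulse (mv = 6,000 kg·m/s) to hold station - and the same again for every follow-up shot.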
Also, nukes have been suggested--there is no blast wave in space; you have to get close enough for a thermal or radiation kill. That's pretty darn close. Thus I suggest two variations on a theme: fragmentation rounds.

Version A: This is based on a missile. It does not attempt to hit its target, though; a miss distance of a km or two is fine. Thus it doesn't need to use its engine much, if at all, on approach, and it's going to be much harder to find. Put a stealthy coating on it and it's going to be harder still to find. In time I expect the defenders to pick it up and shoot it down--but too late. The thing is simply trying to get close. Its warhead fires and a whole bunch of high-speed fragments head for the target. Since they are fired from nearby, the accurate-gunnery problem is avoided. Being little fragments, they're very hard to shoot down. It also has a salvage-fuse mode: when it detects an incoming interceptor (a thermal source with a zero bearing rate and parallax detectable to a pair of cameras), it fires anyway, albeit with a lower chance of a hit. Version B is a shorter-range version of the same thing, fired from a big gun or similar system rather than carried on a missile. The upgraded versions use a nuke to propel the fragments.

Let's look at an ideal weapon for space and see how close we can come with the tech we know of now. The perfect space weapon has to be accurate and devastating, and should not have downsides for the party firing it. Lasers and plasma weapons are out, as per the original request. So what does that leave us with? Kinetic weapons and missiles, if we exclude sci-fi tech. Kinetic weapons are (generally) too slow and have the downside of pushing back on the ship, costing heaps of fuel to compensate for rounds that are large enough to cause damage. So missiles are the way to go. However, explosive warheads as we know them now aren't terribly effective. There's no air to propagate the shockwave (no matter what Star Wars tries to tell you), so we can't really use those. I'd suggest a missile as the delivery device with a kinetic-kill weapon as the payload. Build a missile that's capable of adjusting course and getting to within ~5 km of the target. That's half of your weapon: thrusters, engines, rudimentary AI for targeting, etc. The second half is stolen directly from the A-10 Thunderbolt: GUNS. A spinal-mounted weapon that's essentially a better version of the old Metal Storm concept: barrels pre-loaded with ammo, an electronic firing mechanism or, if possible, a rail- or coil-based firing mechanism. Once the missile gets close enough, the weapon kicks in and barfs a massive load of bullets in the general direction of the target. Ideally, we can get our projectiles to go at a significant fraction of the speed of light, but a couple thousand km/s is good enough at that distance. The advantage of a weapon like this is that you don't have to consider recoil in any way. It's fine if firing destroys the weapon platform (the missile), as that just creates more shrapnel flying towards the enemy. Alternatively, mount a single railgun on the spine of the missile and have it fire a chunk of depleted uranium or tungsten when it's close enough. It requires a bit more aiming, but it's likely equally spectacular.

Time to go with the Battlestar Galactica answer. Watch their space combat scenes. First there is a "flack shield": a bunch of projectiles that generate a huge amount of space junk and effectively reduce incoming damage by blowing it up.
This "flack shield" basically renders missiles useless. Then there are the fighter wings. Your flak shield is all explody, so the enemy tries to fly smaller craft "below" it, where they can shoot missiles and make a mess of things. The answer to that is your own fighters defending that area. They shoot bullets, and some small missiles, but their targets are the smaller craft. Then you need bombers. Let's say you get your small ships under the flak shield. Now you can focus on doing some real damage, but you're going to need bigger bombs and heavier missiles. To be honest, I always thought they had a good model for space combat. It's "simple", mirrors present-day navy warfare, and doesn't rely on a suspension of disbelief (other than wondering where they got the material to make so many bullets). Your main big ship is a sitting duck, and it's up to the little ships to try and defend it. In fact, this is basically true today.
• It's "flak" by the way. – Caleb Woodman Sep 21 '16 at 19:47

The rockets we use today are simply a way to eject matter out the back of the ship at the quickest possible rate, pushing the ship in the opposite direction. Anything you shoot out of one end has to be matched by something you shoot out the other (I believe mass x speed must be equal at both ends if you don't want to move, but it might be more complex than that--still, the concept holds). The faster you shoot and the more matter you shoot, the stronger the push. So anything you shoot (with a cannon/railgun) at the other ship costs double if you want to hold your orbit. Whatever you use should either be self-motivated (a light drone full of fuel that acts both as thrust and payload, perhaps), or extremely light (bullets/pellets), or slow.

Self-motivated drone: The drone can be self-correcting to a degree, but every bit of velocity it gains between you and your target decreases its maneuverability and increases your target's chance of evading. If it accelerates the entire way to your target and your target dodges, the drone would have to fire for the same amount of time just to cancel the velocity it built up. Then it would have to start accelerating back toward the target (assuming you and the target were originally not moving relative to each other). Basically, if you miss, you're done.

Slow stealth drone: This leads me to say that the best bet might be a stealthy and extremely light drone. Fire it slowly towards where the enemy will probably be; then it just needs to float, dead, until it is near your enemy. At that point it should light up, quickly orient itself at the enemy, and fire full engines. This would minimize the time the enemy has to dodge, and since you don't need it to get there quickly, you won't have to compensate much (assuming your ship launches it) or at all. But the enemy detecting your drone would completely nullify/waste the attack: all it would have to do is not be where the drone expects it to be when the drone gets there. A spread of bearings would be better if detection is possible--but that requires closer range--the further away you are, the bigger the spread would have to be to guarantee a hit (and the more bearings; the number of bearings is probably geometrically related to the distance and linearly related to the target's thrust capability, but my physics is way too out of date to do more than guess about that).

Neutron Cannon: Fast-moving neutrons are easy enough to create, can be accelerated to high speeds, and in sufficient density will be able to degrade ships and cause damage.
Gamma rays might be good too. Since both are uncharged, EM force fields will be ineffective against them.
• a reference for neutron acceleration could improve this answer – MolbOrg Sep 22 '16 at 0:19

An RPG (with modifications) is the most effective weapon. Of course, a regular RPG has no maneuvering capability; instead I would use attack drones, each capable of deploying short-range RPG missiles, so that the missiles themselves do not need to maneuver. The rationale is this: carrying stuff in space is expensive, so the ammunition has to weigh as little as possible. If the projectile itself includes just a little fuel and the explosive, without the extra hardware needed to turn in space (additional thrusters), it will have the minimum possible weight. The concave metal part of the RPG would be some metal like gallium, which melts at a low temperature and is able to weaken the enemy ship's hull. The use of drones allows shots to be placed BEHIND enemy ships, which helps you avoid being hit by fragments of the explosion. Drones could also be sacrificed to shield against one hit. The ship could also be equipped with magnetic bombs that grip onto the enemy ship and wait until it is turned at a proper angle before detonating. An effective strategy would be to "board" the enemy ship; that way both ships have to stop using explosives and start using alternative strategies (using explosives at very close range is dangerous in space). NOTE: RPGs work by detonating a small quantity of explosive in order to project molten metal into a "ball" that pierces most armored surfaces. The shape of the metal is concave; the explosion just melts it and projects it to the focal point of the concave shape. Gallium is a real threat to aircraft and metal structures; a few drops of gallium can weaken a wide area of surface, which would then simply break under the ship's internal pressure.
• While gallium would likely react with the hull of an enemy vessel, the reaction is not particularly fast, and (given the probable thickness of the hull) very possibly wouldn't penetrate enough to cause the instant (or at least rapid) catastrophic damage needed during a firefight. See this Chemistry.SO answer about gallium's reactivity (especially the links in the comments). Melting a hole through their hull over a couple of hours or days would be a great way to avenge your own death though. – brichins Sep 20 '16 at 18:58
• Once you're hit by gallium you have to surrender (even if you destroy the enemy ship, your ship is going to break apart in a few hours anyway); also, I think very hot gallium reacts much faster (maybe it still requires minutes, but not hours). So a well-placed shot does heavy damage regardless. – GameDeveloper Sep 21 '16 at 16:19
• It's hard to keep anything warm in the vacuum of space - even an insulated molten-gallium round has to rupture to do anything. Agreed that it would cause major damage, though likely just to a (sealable) section of the ship. I don't think the reaction is fast enough to affect the outcome of a shootout, though over a long fight it'd be a great way to weaken the hull for your next wave. But unless you find a gallium asteroid field to mine, it's just not cost effective. Why bother refining gallium when there are so many cheaper options with a quicker result? For cloak-and-dagger sabotage though... – brichins Sep 21 '16 at 20:46
# Neptune - One orbit and counting

Neptune and Triton one orbit after discovery. Credit: Dr Robert Smith and the LT

Last night Neptune passed a very special milestone - exactly one orbit since it was discovered! Neptune was discovered by Johann Galle on the 23rd September 1846, but since it takes 164.79 Earth years to orbit the Sun, this is the first time we have seen it return to that place in its orbit. This is particularly important for Neptune, as it was the first planet discovered not by lucky searching of the heavens but by prediction. By studying the orbits of the other planets, the mathematicians Alexis Bouvard, Urbain Le Verrier and John Couch Adams were able to work out that there must be another, unknown planet slightly distorting the orbit of Uranus with its gravity, and they were even able to predict where in the sky to look. Indeed, Galle found Neptune within a degree of the position predicted by Le Verrier - astonishingly accurate! To celebrate this very special occasion, the observation above was taken by the Liverpool Telescope last night as Neptune returned to its "starting point". You can also see the moon Triton - one of the few moons in the solar system known to have an atmosphere.
# Bullet-Time, Matrix Rain, Slow-Mo and Spiral FX by Multiplexing WS2811 with a Microcontroller!

#### EvilGenius ##### Member

Hello. Here is another clever circuit that will create cool effects for a large array of smart pixels.

Objective:
- Low-cost circuitry
- Easy to build and implement
- No fancy bit-banging or worrying about timing constraints
- Utilize any WS2811-based controller for RGB lighting (single-wire SPI)
- Be able to keep daisy-chaining different string lengths without additional programming
- Avoid messing with re-programming the WS controller; avoid bit counting or slowing the system down
- Create cool FX on 15 strings of 50 nodes (horizontally and vertically)
- Avoid zig-zag connection end to end (save on wiring)
- Be stand-alone (no need for fancy computer software)
- Run on a 5V or 12V 3-wire system

Circuitry: The heart of the system is four 4-bit 3-state active-high non-inverting buffers, allowing a microcontroller to be multiplexed with the data output from a WS2811 controller. Each buffer has individual enable lines connected to the microcontroller, while all the data inputs of the buffers are tied together and connected to the WS2811 controller's data output. The uC controls the individual isolated horizontal lines, while the WS controller provides the vertical movement and color FX. The outputs of the buffers are connected to 15 strings of 50 smart pixels (750 nodes).

BOM:
- 4 x 4-bit CMOS buffer, active-high non-inverting (SN74AHC126N) $1.20
- 1 x microcontroller (PIC16F628A) $1.20
- 1 x WS2811 SPI controller with remote $6.50
- 6 x 0.1 uF ceramic decoupling capacitor $0.12
- 1 x 220 uF electrolytic capacitor $0.05
- 1 x 5V voltage regulator $0.66
- 1 x 51 ohm resistor $0.02
- 2 x 4.7K resistor (pull-up) $0.04
- 16 x 100 ohm resistor $0.32

Total cost, not including the PCB: $10

The circuit should be able to create amazing FX by varying the sequence of which string(s) come on (PIC outputs), at what time and for what duration: pause, slow down, speed up, rotate, do bullet-time, Matrix-rain FX, slow-mo FX, spiral FX, random FX, move action while doing a spiral, freeze the vertical and do horizontal or circular movement, freeze the horizontal and move all colors synchronized up or down, and much more.

#### Pommie ##### Well-Known Member

Why do you need this extra circuitry? Why can't a PIC drive the strings directly? I just checked, and a 20-pin PIC could drive 16 strings and only use about 0.3% (3 ms) of its processor time. Or am I missing something? Mike.

#### EvilGenius ##### Member

Hi Mike. I am sure that by direct driving you mean the data signal itself (not the volts and amps). The objective of this project was to avoid tedious programming of the sensitive and tight timing requirements of the WS2811. The WS2811 controller handles all of that (an inexpensive version with a remote, or something similar, can produce 300 color FX). All we are doing with the PIC is turning specific strings on (as a switch). Direct drive: let's say you want to drive the data directly and multiplex the whole thing with your PIC (50 pixels by 15 strings). 1- You don't have enough output ports. You need 15 pins for the horizontal lines plus one pin for data driving. I suppose you can cut back to 14 outputs plus data.
2- The PIC16F628A's internal clock is not fast enough to send all the addresses (750 pixels x 24 bits per pixel x the time needed for each on/off, plus the reset signal for refreshing, plus your overhead). You can add an external clock to speed up the process, which will burn up two more of your outputs. 3- Then you really need to spend some time configuring the data transfer to the strings for a specific number of pixels per string and the individual FX. If I want to change from 50 pixels to 40 or 60 pixels per string, you have to completely overhaul the program. It is doable, but as I said, I wanted to simplify the project. Buy a WS2811 controller, and programming the PIC is as simple as turning one or more outputs on or off, without any worries about PWM, timing, interrupts, and so on.

#### EvilGenius ##### Member

Look up geekmychristmastree. It is similar to what you are speaking of. He did it (I believe) with a 10x6 matrix, which is a lot more manageable. He also does this with end-to-end wiring (zigzag). Height of 6 feet. A 50-pixel smart string is about 12 feet long (3.7 m). With 15 strings of 50 pixels, you can light up a half circle with a radius of 12 feet. One heck of a light show!

#### Pommie ##### Well-Known Member

I've obviously misread the datasheet. I read it that you shift in 24 bits and the LEDs cascade the data - i.e., scroll along. I'm now assuming that you shift in 24 bits per LED and then do a reset to latch them. Is that correct? If so, I don't think a PIC has enough RAM to hold that amount of data. Mike.

#### EvilGenius ##### Member

You almost got it. You shift in 24 bits (each bit has a high and low timing); once its 24 bits are read, the WS2811 automatically latches, and any subsequent data received is passed on to the next pixel. Once ready for the next set of color bits, you reset it by holding the data line low for a certain time (50 us, I believe). To hold the color, you hold off the reset until you are ready. It is hurry up and wait, then repeat. By the way, the datasheet has so many errors and is poorly written, as you have noticed. The sequence is 8 green, 8 red, 8 blue, from MSB to LSB. Once the WS2811 has all its 24 bits, it PWMs its appropriate outputs (R, G, B), and continues to do so, at 18.5 mA (I measured 16.5 mA). The output (OutR) is toggled between 0.63 V and Vcc (on a 12 V system, Vcc = 12 V).

#### Pommie ##### Well-Known Member

Completely different from how I read the datasheet. Thanks for the explanation. Mike.

#### EvilGenius ##### Member

1- It holds the colors indefinitely once it has its 24 bits (until you reset it or power down). 2- It is a low-power driver and is mobile (for stand-alone projects). The outputs (OutR, OutG, OutB) are constant-current sinks (remember, they don't go down to ground; they toggle between 0.63 V and Vcc). I utilized this feature in several projects to convert dumb RGB to smart pixels. In one project I drove a constant current of 300 mA/channel for high-power LED driving. 3- It is low cost. Example: you can add this chip to drive water solenoids for a smart water show! Or light up the entire house with WS2811 drivers and one cheap controller.
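To make the thread's division of labor concrete, here is a minimal firmware sketch of the PIC's side of the scheme: the WS2811 controller keeps generating the color stream, and the PIC merely gates which strings see it through the 74AHC126 enables. The toolchain (XC8), clock, pin mapping, and 40 ms hold time are illustrative assumptions, not the original poster's code.

```c
/*
 * Sketch of the PIC's role in the buffer-multiplexing scheme:
 * the WS2811 controller generates the pixel data continuously,
 * and the PIC only chooses which of the 15 strings receive it
 * by gating the 74AHC126 enable lines. Pin mapping, clock and
 * delay values are assumptions; config bits omitted for brevity.
 */
#include <xc.h>

#define _XTAL_FREQ 4000000UL    /* assumed 4 MHz internal oscillator */

/* One enable bit per string: PORTB drives strings 0-7, PORTA 8-14. */
static void select_strings(unsigned int mask)
{
    PORTB = mask & 0xFF;
    PORTA = (mask >> 8) & 0x7F;
}

void main(void)
{
    CMCON = 0x07;               /* comparators off: PORTA digital I/O */
    TRISA = 0;                  /* all enable pins are outputs */
    TRISB = 0;

    unsigned char row = 0;
    for (;;) {
        /* "Matrix rain": pass the controller's effect to one string
         * at a time, stepping down the array. Enables should change
         * only while the data line is idle (the >50 us reset gap),
         * so the hold time is kept much longer than one frame. */
        select_strings(1u << row);
        __delay_ms(40);         /* assumed hold time; sets sweep speed */
        row = (row + 1) % 15;
    }
}
```

Driving all enables high lights every string with the same pattern, while other enable sequences give the spiral, slow-mo and bullet-time variations listed in the first post.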
With a dongle and a DMX-to-SPI converter (using the term SPI loosely to refer to the WS2811 protocol), you can connect your PC (using Light-O-Rama software) to run all your smart pixels. They even have wireless and Ethernet controller versions to communicate with these pixels.

#### Mosaic ##### Well-Known Member

This is interesting: I just developed some string driving modules...

#### EvilGenius ##### Member

That looks cool. What pixel driver did you use (i.e., is it a WS2811 or another chip)?

#### EvilGenius ##### Member

First automated Xmas and Halloween light show. The wall washers are 12V 10W RGB cans (300 mA/ch) and the path lights are 12V rectangular tri-RGBs. Each pixel has a WS2811 chip for communications. The data input can be connected to any controller out there that communicates with the WS2811, WS2812, or WS2812B. I can simply expand to 2048 pixels (12 DMX universes) with a $7 controller! But I don't need that many pixels. For a PC-dongle-controller setup you can push one universe (170 RGB pixels, 510 DMX channels - a universe has 512 channels and each RGB pixel uses three). You can upgrade to the Ethernet version (no dongle) and push more DMX universes. P.S.: The whole project cost me less than $80 USD. The power consumption of the project is about 80W - a cheap price and high efficiency for lighting up the entire front of the house. I calculated that my electric bill would go up by $2.50 for one month if the system runs 6 hours a day for 30 days.
# Cubic Base Change for $\GL(2)$

Published: 2000-02-01 Printed: Feb 2000

• Zhengyu Mao • Stephen Rallis

## Abstract

We prove a relative trace formula that establishes the cubic base change for $\GL(2)$. One also gets a classification of the image of base change. The case when the field extension is nonnormal gives an example where a trace formula is used to prove lifting which is not endoscopic.

MSC Classifications: 11F70 - Representation-theoretic methods; automorphic representations over local and global fields; 11F72 - Spectral theory; Selberg trace formula
## Article

Curr. Opt. Photon. 2023; 7(1): 15-20

Published online February 25, 2023 https://doi.org/10.3807/COPP.2023.7.1.15

## Coherent Optical Receiver for Real-time CO-ORMDM Systems

Jae Seung Lee

Department of Electronic Engineering, Kwangwoon University, Seoul 01897, Korea

Corresponding author: *jslee@kw.ac.kr, ORCID 0000-0002-3927-9200

Received: August 19, 2022; Revised: November 22, 2022; Accepted: November 29, 2022

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

### Abstract

We propose a new coherent optical receiver (COR) to detect optical receiver mode (ORM) subchannels selectively in coherent optical (CO) ORM division multiplexing (ORMDM) systems. In CO-ORMDM systems, each optical channel is a linear sum of ORM subchannels, to obtain high spectral efficiencies (SEs). The COR uses an ORM subcarrier as its local oscillator (LO) and reads the transmitted data at the origin times of ORM signals. For example, if the mth ORM subcarrier is used as the LO, then the COR reads the data of the mth ORM subchannel. The proposed COR is fast and can make CO-ORMDM systems useful for real-time optical communication with high SE.

Keywords: Coherent optical communication, Optical fiber communication, Optical receivers, Wavelength division multiplexing (WDM)

OCIS codes: (060.0060) Fiber optics and optical communications; (060.2330) Fiber optics communications; (060.2360) Fiber optics links and subsystems; (060.4510) Optical communications

### I. INTRODUCTION

As data-traffic demands grow in areas such as the Internet of Things and artificial intelligence, it is important to increase the spectral efficiencies (SEs) of optical-fiber transmission systems [1, 2]. With the advent of coherent optical (CO) communication systems [3, 4], the SEs of optical communication systems have grown remarkably, to greater than 10 bit s−1 Hz−1. CO communication systems can discriminate the differences in polarizations, amplitudes, frequencies, and phases of received optical signals. The SE can be enhanced when CO communication systems use the orthogonal frequency-division multiplexing (OFDM) technique [5]; these are called CO-OFDM systems. OFDM uses many closely spaced subcarriers that are modulated independently. CO-OFDM systems have produced high SE records [6, 7], because the CO-OFDM subcarriers are orthogonal to each other and form a complete set. Usually the modulation speed for subcarriers is low, which helps to overcome the optical fiber's dispersion. However, CO-OFDM suffers from a high peak-to-average power ratio (PAPR) and is sensitive to the phase noises of laser diodes. Most of all, it requires heavy use of digital signal processing (DSP) circuits, which makes its real-time operation difficult [6, 7]. Recently it has been suggested to use a linear sum of optical receiver modes (ORMs) as an optical signal, called an ORM signal [8, 9]. We refer to this kind of multiplexing as ORM division multiplexing (ORMDM). Let's assume that ORMDM is used in CO-ORMDM systems. Then the ORMs are inherent modes of the coherent optical receiver (COR) in the CO-ORMDM system [10-13]. They are orthogonal to each other and form a complete set as well. Thus the CO-ORMDM can yield high SE values.
In a CO-ORMDM system the optical channel is a linear sum of ORM subchannels, as will be explained in Section II. The ORM subchannels have wider spectra and higher baud rates than those of CO-OFDM systems. Thus the foregoing PAPR and phase-noise problems can be mitigated, as in wavelet OFDM [14, 15]. In this case, how is one to detect each ORM subchannel separately, in real time? To this end, we propose a new COR that uses an ORM subcarrier as its local oscillator (LO).

### II. ORMDM

Let us consider the ith optical channel in a CO-ORMDM system. The electric field of this optical channel can be written as e(t) = Re{E(t)exp(jωit)}, where ωi is the center angular frequency of the ith optical channel, and E(t) is the complex electric field amplitude (CEFA). For a single ORM signal localized at t = 0, E(t) is a linear sum of ORM mode functions ψn(t) {n = 0, 1, 2, …} [8]

$$E(t)=\sum_{n=0}^{M-1}a_{n}\psi_{n}(t)\tag{1}$$

where M is the number of ORMs used for the ORMDM. The complex mode coefficient an includes the data to be transmitted. We call the t = 0 time the origin of this ORM signal [8, 9]. The mode functions are real and complete, satisfying the orthogonality relation [8]

$$\int_{-\infty}^{\infty}dt\,\psi_{m}(t)\psi_{n}(t)=\delta_{mn}\tag{2}$$

where δmn is the Kronecker delta function. Taking the Fourier transform of both sides of Eq. (1), we have an alternative form of the ORM signal that is more convenient than Eq. (1) in many cases: $\varepsilon(\omega)=\sum_{n=0}^{M-1}a_{n}\phi_{n}(\omega)$, where ε(ω) and ϕn(ω) are the CEFA and the nth ORM mode function in the optical frequency domain, respectively [8]. The mode functions ϕn(ω) {n = 0, 1, 2, …} are also complete and satisfy the orthogonality relation

$$\int_{-\infty}^{\infty}d\omega\,\phi_{m}^{*}(\omega)\phi_{n}(\omega)=2\pi\delta_{mn}\tag{3}$$

With all ORM signals present, the full expression for E(t) can be written as

$$E(t)=\sum_{n=0}^{M-1}\sum_{l=-\infty}^{\infty}a_{n,l}\psi_{n}(t-lT)$$

where T is the period of the ORM signal, and we use the simplification an,0 = an. The optical channel is a linear sum of ORM subchannels. The CEFA of the mth ORM subchannel is $\sum_{l=-\infty}^{\infty}a_{m,l}\psi_{m}(t-lT)$. The mth ORM subchannel before its modulation is the mth ORM subcarrier, the CEFA of which is given by $\sum_{l=-\infty}^{\infty}\psi_{m}(t-lT)$. The CEFAs of the ORM subcarriers and the origin times are shown in Fig. 1, for the three lowest-order ORMs.

Figure 1. Complex electric field amplitudes of the three lowest-order optical receiver mode subcarriers.

### III. PROPOSED COR

In Fig. 2, we illustrate the proposed COR that detects the mth ORM subchannel of the ith optical channel. This COR will be denoted as CORi,m. Let us first discuss the characteristics of the direct-detection unit (DDU) within the dotted box in Fig. 2. The DDU defines the ORM set, and is the key element of the COR. The optical filter (OF) demultiplexes the received optical wavelength division multiplexing (WDM) channels. The photodetector (PD) is an ideal one, and its frequency response is absorbed by the electrical filter (EF). The EF filters out beat noises generated by the amplified spontaneous emission (ASE) optical noise. The ORMs and related quantities depend on the optical channel index i, but we will not use the optical channel index here explicitly, except in CORi,m.

Figure 2. The CORi,m that detects the mth ORM subchannel within the ith optical channel. COR, coherent optical receiver; ORM, optical receiver mode; WDM, wavelength division multiplexing; LO, local oscillator; LD, laser diode; MOD, optical modulator; DDU, direct-detection unit; OF, optical filter; PD, photodetector; EF, electrical filter; DSP, digital signal processing.
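A remark worth making explicit here (our own one-line consequence of Eqs. (2) and (3), not an equation of the original derivation): projecting the received CEFA onto a single frequency-domain mode function isolates that mode's coefficient,

$$\frac{1}{2\pi}\int_{-\infty}^{\infty}\phi_{m}^{*}(\omega)\,\varepsilon(\omega)\,d\omega=\frac{1}{2\pi}\sum_{n=0}^{M-1}a_{n}\int_{-\infty}^{\infty}\phi_{m}^{*}(\omega)\phi_{n}(\omega)\,d\omega=a_{m}.$$

The COR described below in effect realizes this projection optically, using the mth ORM subcarrier as the LO.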
If the CEFA at the DDU input is εDD(ω), then the output voltage of the DDU can be evaluated as [10, 13]

$$y_{DD}(t)=\frac{k}{8\pi^{2}}\int_{-\infty}^{\infty}d\omega\int_{-\infty}^{\infty}d\omega'\,\exp\{j(\omega'-\omega)t\}\,\varepsilon_{DD}^{*}(\omega)K(\omega,\omega')\varepsilon_{DD}(\omega')\tag{4}$$

The kernel is given by K(ω, ω') = Ho*(ω)He(ω' − ω)Ho(ω'); it has the Hermitian property K(ω, ω') = K*(ω', ω). Ho(ω) is the transfer function of the OF between the CEFA input to the OF and the CEFA output from the OF, in the optical frequency domain. He(ω) is the transfer function of the EF between the current input to the EF and the voltage output from the EF, in the optical frequency domain. k is a proportionality constant. ϕn(ω) satisfies the following homogeneous Fredholm integral equation of the 2nd kind with real eigenvalue λn:

$$\phi_{n}(\omega)=\lambda_{n}\int_{-\infty}^{\infty}d\omega'\,K(\omega,\omega')\phi_{n}(\omega')\tag{5}$$

Exact solutions of Eq. (5) are often not available, in which case ϕn(ω) and λn are found numerically [10]. The eigenvalues are positive in general, and increase indefinitely as n increases: λ0 ≤ λ1 ≤ λ2 ≤ ….

As for the CORi,m in Fig. 2, the LO's output light is proportional to the mth ORM subcarrier, having the same center wavelength as the ith optical channel. Only the in-phase detection part is shown, for brevity. The quadrature-phase detection part is hidden, and has the same structure except for the 90° phase shift of the LO [3]. For ease of explanation, we also assume that the received optical WDM channels and the LO have the same polarization. The transmitted optical WDM channels are applied to the 3-dB coupler, along with the LO light. The two outputs of the 3-dB coupler are directed to the two DDU inputs. The two DDUs use the same kinds of devices and have the same ORM set. Let us assume that the CEFAs at the upper and the lower DDU inputs in the optical frequency domain are ε1(ω) + ε2(ω) and ε1(ω) − ε2(ω) respectively [3]. ε1(ω) is from the optical WDM channels and ε2(ω) is from the LO. If we denote the voltages at the upper and the lower DDU outputs as y+(t) and y−(t) respectively, then using Eq. (4) we obtain

$$y_{\pm}(t)=\frac{k}{8\pi^{2}}\int_{-\infty}^{\infty}d\omega\int_{-\infty}^{\infty}d\omega'\,\exp\{j(\omega'-\omega)t\}\,\{\varepsilon_{1}^{*}(\omega)\pm\varepsilon_{2}^{*}(\omega)\}K(\omega,\omega')\{\varepsilon_{1}(\omega')\pm\varepsilon_{2}(\omega')\}\tag{6}$$

Then the differential output voltage y(t) = y+(t) − y−(t) is

$$y(t)=\frac{k}{4\pi^{2}}\int_{-\infty}^{\infty}d\omega\int_{-\infty}^{\infty}d\omega'\left[\varepsilon_{1}^{*}(\omega)K(\omega,\omega')\varepsilon_{2}(\omega')+\varepsilon_{2}^{*}(\omega)K(\omega,\omega')\varepsilon_{1}(\omega')\right]\exp\{j(\omega'-\omega)t\}\tag{7}$$

We exchange the integration variables ω and ω' within the first term of Eq. (7) to find

$$y(t)=\frac{k}{2\pi^{2}}\int_{-\infty}^{\infty}d\omega\int_{-\infty}^{\infty}d\omega'\,\mathrm{Re}\!\left[\varepsilon_{2}^{*}(\omega)K(\omega,\omega')\varepsilon_{1}(\omega')\exp\{j(\omega'-\omega)t\}\right]\tag{8}$$

where we have used the Hermitian property of K(ω, ω'). Equation (8) is the result for the in-phase detection. Changing ε2(ω) to jε2(ω) in Eq. (8), we can also find the result for the quadrature-phase detection. With these two detection results, we can build a complex received signal for the CORi,m as follows:

$$Y_{m}(t)=\frac{k}{2\pi^{2}}\int_{-\infty}^{\infty}d\omega\int_{-\infty}^{\infty}d\omega'\,\varepsilon_{2}^{*}(\omega)K(\omega,\omega')\varepsilon_{1}(\omega')\exp\{j(\omega'-\omega)t\}\tag{9}$$

Now, we assume that only a single ORM signal of the ith optical channel is received, and exclude any crosstalk for the reception of this ORM signal. We use $\sum_{n=0}^{M-1}a_{n}\phi_{n}(\omega)$ as the signal part of ε1(ω). The ASE noise part of ε1(ω) can be expressed as $\sum_{n=0}^{\infty}\chi_{n}\phi_{n}(\omega)$. Also, we use a single ORM waveform for the LO: ε2(ω) = cmϕm(ω), where cm is a complex constant. Then we obtain

$$Y_{m}(t)=4c_{m}^{*}\left[\sum_{n=0}^{M-1}(a_{n}+\chi_{n})y_{mn}(t)+\sum_{n=M}^{\infty}\chi_{n}y_{mn}(t)\right]\tag{10}$$

where ymn(t) is defined as [8, 9]

$$y_{mn}(t)=\frac{k}{8\pi^{2}}\int_{-\infty}^{\infty}d\omega\int_{-\infty}^{\infty}d\omega'\,\exp\{j(\omega'-\omega)t\}\,\phi_{m}^{*}(\omega)K(\omega,\omega')\phi_{n}(\omega')\tag{11}$$

Since ymn(0) = δmn k / 4πλm, we have

$$Y_{m}(0)=kc_{m}^{*}(a_{m}+\chi_{m})/\pi\lambda_{m}\tag{12}$$

which is our main result. It tells us that at t = 0 we can find the transmitted data of the mth ORM subchannel directly, without the heavy use of DSP circuits. Note that this property is unique to the CO-ORMDM, because of the kernel K(ω, ω') in Eq. (11). The t = 0 time point is equal to the origin of the ORM signal in Eq. (1).
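As a quick consistency check on Eqs. (11) and (12) (this short verification is ours, using only Eqs. (3) and (5)): setting t = 0 in Eq. (11) and applying the Fredholm relation Eq. (5) and then the orthogonality relation Eq. (3) gives

$$y_{mn}(0)=\frac{k}{8\pi^{2}}\int_{-\infty}^{\infty}d\omega\,\phi_{m}^{*}(\omega)\int_{-\infty}^{\infty}d\omega'\,K(\omega,\omega')\phi_{n}(\omega')=\frac{k}{8\pi^{2}\lambda_{n}}\int_{-\infty}^{\infty}d\omega\,\phi_{m}^{*}(\omega)\phi_{n}(\omega)=\frac{\delta_{mn}k}{4\pi\lambda_{m}},$$

which is precisely the value quoted above and yields Eq. (12).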
If we regard the maximum point of the zeroth ORM waveform as the center of the ORM signal, each origin time in Fig. 1 has an offset from the center of the corresponding ORM signal, reflecting the delay of the DDU filters. Let us decompose the mode coefficient am as am = γζmzm, where zm is the transmitted data, γ is a proportionality constant, and ζm is a dimensionless real constant related to the optical power of the ORM subchannel. The mode coefficient χm is decomposed as χm = γnm, where nm is a dimensionless zero-mean complex Gaussian random variable. All of the real and imaginary parts of {n0, n1, n2, ...} are mutually independent of each other, having identical variance [10]. With these decompositions we obtain

$$\bar{Y}_{m}(0)=z_{m}+n_{m}/\zeta_{m}\tag{13}$$

where $\bar{Y}_{m}(t)=Y_{m}(t)\pi\lambda_{m}/(c_{m}^{*}\gamma\zeta_{m}k)$. Only the mth ORM contributions appear in Eq. (13), for both the signal and the ASE noise. In addition, we need to include the effects from other ORM signals in the same and adjacent optical channels, and we need to use the full subcarrier expression for the LO. These modifications will add crosstalk terms to the right-hand side of Eq. (13). The phase and polarization diversities can be included as per [3].

In this section we present the waveforms of Ym(t) for the foregoing case of single-ORM-signal input. We assume that Ho(ω) and He(ω) are Gaussian [13]. The eigenvalue of the mth ORM is λm = λ0 / q^m, where q is a positive quantity smaller than 1. The expressions for λ0 and q are in [13]. From Eq. (12), Ym(0) is proportional to q^m. To use higher-order ORMs, we need to use high q values, which can be achieved by decreasing the EF bandwidth compared to the OF bandwidth [13]. Thus we choose the 3-dB bandwidths of |Ho(ω)|² and |He(ω)|² as 100 GHz and 10 GHz respectively, which gives q = 0.754. For simplicity, we set ζm = 1 for all m. Neglecting the ASE, we have from Eq. (10)

$$\bar{Y}_{m}(t)=\sum_{n=0}^{M-1}z_{n}\,y_{mn}(t)/(q^{m}y_{c})\tag{14}$$

where yc = k / 4πλ0 [8]. We show the real and imaginary parts of Ȳm(t) in Fig. 3, denoted as Rm(t) and Im(t) respectively. We set M = 3 with z0 = 1 + j, z1 = −1 + 3j, and z2 = 3 + j. Note that Ȳm(0) = zm for m = 0, 1, 2, 3, where z3 = 0 naturally.

Figure 3. Real and imaginary parts of the waveforms of Ȳm(t), denoted as Rm(t) and Im(t) respectively. The three lowest-order optical receiver mode subchannels are transmitted (M = 3) with z0 = 1 + j, z1 = −1 + 3j, and z2 = 3 + j. (a) m = 0, (b) m = 1, (c) m = 2, (d) m = 3.

### V. DISCUSSION

The proposed COR can be applied to conventional optical communication systems, to upgrade their SEs. From the ORMDM point of view, the optical signal of a conventional optical channel, excluding the OFDM, can be regarded as an ORM signal with mode coefficients an {n = 0, 1, 2, …} that are dependent on each other. If one of the ORMs (say the mth ORM) has an independent am, we have more freedom and can increase the SE. In this case we need two CORs. One uses the mth ORM subcarrier as its LO, while the other uses the sum of the rest of the ORM subcarriers as its LO. This procedure can be extended to other ORMs, until the ORMDM is fully utilized. The COR can be used in multiple-access networks [16]. If only one ORM subchannel is to be dropped at an optical node, for example, we use one COR at that node. If we detect multiple optical channels and ORM subchannels, we use a WDM demultiplexer (DMUX) and place optical splitters after the WDM DMUX, as shown in Fig. 4. We place one COR at the end of each optical-splitter arm. The optical splitters can be integrated [17], and their losses can be compensated by the optical amplifier (OA) before the WDM DMUX.
We can also use gain materials for the optical splitter [18]. If we use the WDM DMUX as the DDU's OF in Fig. 2, we may remove all OFs from within the CORs. Then the modulation signal used to obtain the LO in Fig. 2 is modified to make the LO light appear as if it had already passed through the OF.

Figure 4. Receiver side of a CO-ORMDM system to detect multiple optical channels and ORM subchannels. CO, coherent optical; ORM, optical receiver mode; ORMDM, ORM division multiplexing; OA, optical amplifier; WDM, wavelength division multiplexing; DMUX, demultiplexer; COR, coherent optical receiver.

We perform numerical simulations for the foregoing CO-ORMDM system. The 256-quadrature-amplitude-modulation (QAM) code is used for the modulation of the ORM subcarriers. The optical channel power is fixed here by allocating smaller optical power evenly to the ORM subchannels as M is increased from 1 to 12. Thus we set $\zeta_{m}=1/\sqrt{M}$ in Eq. (13): an even split of a fixed channel power among M subchannels scales each subchannel amplitude by $1/\sqrt{M}$. For M = 1, we assume that the ASE noise term in Eq. (13) is below 5% of the minimum constellation distance (MCD). If we want to get error-free results in this case, the crosstalk fluctuations should be kept below 45% of the MCD. Under this condition, we have an SE of 6.0 bit s−1 Hz−1. For M = 6, the ASE noise term in Eq. (13) is below 8.7% of the MCD, and we have an SE of 9.8 bit s−1 Hz−1. Similarly, for M = 12 the SE is 11.0 bit s−1 Hz−1 and exhibits saturation behavior. If the ORM subchannels have unequal optical powers and unequal QAM codes, we can further increase the SE. Also, with forward error-correction methods the SE can be increased even further [4, 19].

In contrast to CO-OFDM, real-time operation is possible in our CO-ORMDM system. The speed of the CORs in our CO-ORMDM system is not limited by DSP circuits, as Eq. (12) shows. As for the transmitter of the CO-ORMDM system, we may use a single in-phase/quadrature optical modulator driven by two digital-to-analog converters (DACs) to produce one or a few subchannels. Then the DSP circuit limits can be avoided, and real-time operation can be attained. The digital inputs to the DACs can be modified to pre-compensate for the optical-fiber dispersion, etc. As a reference, a 100-GS/s DAC can generate electrical signals that are about 25 GHz in bandwidth [20]. To increase the channel bandwidth beyond the limit of the DACs, we can use optical arbitrary-waveform generators (OAWGs) to obtain ORM subcarriers from mode-locked laser diodes [21-23]. Modulating the ORM subcarriers, we get ORM subchannels. Also, there is no limit from the DSP circuits in this case. To make the proposed CO-ORMDM system more practical, we could integrate the photonic devices. Similar work has been done for conventional CORs [24, 25].

### VI. CONCLUSION

Building each optical channel as a linear sum of multiple ORM subchannels, CO-ORMDM systems can attain high SEs. To detect the ORM subchannels selectively, we have introduced a new COR that is fast and does not require heavy use of DSP circuits. Thus, in contrast to CO-OFDM, real-time operation is possible in our CO-ORMDM system. In addition, we can use OAWGs to increase the channel bandwidth beyond the limit of the DACs. With photonic integration, CO-ORMDM systems using the proposed CORs can be made simple and practical.

### DISCLOSURES

The author declares no conflicts of interest.

### DATA AVAILABILITY

Data underlying the results presented in this paper are not publicly available at the time of publication, but may be obtained from the authors upon reasonable request.
### ACKNOWLEDGMENT

The work reported in this paper was conducted during the sabbatical year of Kwangwoon University in 2019.

### References

1. W. Klaus, P. J. Winzer, and K. Nakajima, “The role of parallelism in the evolution of optical fiber communication systems,” Proc. IEEE 110, 1619-1654 (2022).
2. B. J. Puttnam, R. S. Luís, G. Rademacher, M. Mendez-Astudillio, Y. Awaji, and H. Furukawa, “S-, C- and L-band transmission over a 157 nm bandwidth using doped fiber and distributed Raman amplification,” Opt. Express 39, 10011-10018 (2022).
3. K. Kikuchi, “Fundamentals of coherent optical fiber communications,” J. Light. Technol. 34, 157-179 (2016).
4. P. J. Winzer, “High-spectral-efficiency optical modulation formats,” J. Light. Technol. 30, 3824-3835 (2012).
5. I. B. Djordjevic and B. Vasic, “Orthogonal frequency division multiplexing for high-speed optical transmission,” Opt. Express 14, 3767-3775 (2006).
6. D. Qian, M.-F. Huang, E. Ip, Y.-K. Huang, Y. Shao, J. Hu, and T. Wang, “101.7-Tb/s (370×294-Gb/s) PDM-128QAM-OFDM transmission over 3×55-km SSMF using pilot-based phase noise mitigation,” in Optical Fiber Communication Conference Technical Digest 2011 (Optical Society of America, 2011), paper PDPB5.
7. T. Omiya, M. Yoshida, and M. Nakazawa, “400 Gbit/s 256 QAM-OFDM transmission over 720 km with a 14 bit/s/Hz spectral efficiency by using high-resolution FDE,” Opt. Express 21, 2632-2641 (2013).
8. J. S. Lee, “Optical signals using superposition of optical receiver modes,” Curr. Opt. Photonics 1, 308-314 (2017).
9. B. Batsuren, K. H. Seo, and J. S. Lee, “Optical communication using linear sums of optical receiver modes: proof of concept,” IEEE Photonics Technol. Lett. 30, 1707-1710 (2018).
10. J. S. Lee and C. S. Shim, “Bit-error-rate analysis of optically preamplified receivers using an eigenfunction expansion method in optical frequency domain,” J. Light. Technol. 12, 1224-1229 (1994).
11. E. Forestieri, “Evaluating the error probability in lightwave systems with chromatic dispersion, arbitrary pulse shape and pre- and postdetection filtering,” J. Light. Technol. 18, 1493-1503 (2000).
12. R. Holzlohner, V. S. Grigoryan, C. R. Menyuk, and W. L. Kath, “Accurate calculation of eye diagrams and bit error rates in optical transmission systems using linearization,” J. Light. Technol. 20, 389-400 (2002).
13. J. S. Lee and A. E. Willner, “Analysis of Gaussian optical receivers,” J. Light. Technol. 31, 2687-2693 (2013).
14. A. Li, W. Shieh, and R. Tucker, “Wavelet packet transform-based OFDM for optical communications,” J. Light. Technol. 28, 3519-3528 (2010).
15. A. Güner and A. Özen, “Lifting wavelet transform based multicarrier modulation scheme for coherent optical communication systems,” J. Light. Technol. 39, 4255-4261 (2021).
16. E. Wong, “Next-generation broadband access networks and technologies,” J. Light. Technol. 30, 597-608 (2012).
17. K. Nara, N. Matsubara, and H. Kawashima, “Monolithically integrated 1×32 optical splitter/router using low loss ripple MZI-based WDM filter and low loss Y-branch circuit,” in Optical Fiber Communication Conference 2006 (Optical Society of America, 2006), paper OWO1.
18. J. D. B. Bradley, R. Stoffer, A. Bakker, L. Agazzi, F. Ay, K. Wörhoff, and M. Pollnau, “Integrated Al2O3:Er3+ zero-loss optical amplifier and power splitter with 40-nm bandwidth,” IEEE Photonics Technol. Lett. 22, 278-280 (2010).
19. S. Y. Kim, K. H. Seo, and J. S. Lee, “Spectral efficiencies of channel-interleaved bidirectional and unidirectional ultradense WDM for metro applications,” J. Light. Technol. 30, 229-233 (2012).
20. H. Huang, J. Heilmeyer, M. Grozing, M. Berroth, J. Leibrich, and W. Rosenkranz, “An 8-bit 100-GS/s distributed DAC in 28-nm CMOS for optical communications,” IEEE Trans. Microw. Theory Tech. 63, 1211-1218 (2015).
21. S. T. Cundiff and A. M. Weiner, “Optical arbitrary waveform generation,” Nat. Photonics 4, 760-767 (2010).
22. J. Dunayevsky and D. M. Marom, “MEMS spatial light modulator for phase and amplitude modulation of spectrally dispersed light,” J. Microelectromech. Syst. 22, 1213-1221 (2013).
23. H. Tsuda, Y. Tanaka, T. Shioda, and T. Kurokawa, “Analog and digital optical pulse synthesizers using arrayed-waveguide gratings for high-speed optical signal processing,” J. Light. Technol. 26, 670-677 (2008).
24. Z. Xuan and F. Aflatouni, “Integrated coherent optical receiver with feed-forward carrier recovery,” Opt. Express 28, 16073-16088 (2020).
25. Y. Wang, X. Li, Z. Jiang, L. Tong, W. Deng, X. Gao, X. Huang, H. Zhou, Y. Yu, L. Ye, X. Xiao, and X. Zhang, “Ultrahigh-speed graphene-based optical coherent receiver,” Nat. Commun. 12, 5076 (2021).
Usually the modulation speed for subcarriers is low, which helps to overcome the optical fiber’s dispersion. However, CO-OFDM suffers from a high peak-to-average power ratio (PAPR) and is sensitive to the phase noises of laser diodes. Most of all, it requires heavy use of digital signal processing (DSP) circuits, which makes its real-time operation difficult [6, 7]. Recently it has been suggested to use a linear sum of optical receiver modes (ORMs) as an optical signal, called an ORM signal [8, 9]. We refer to this kind of multiplexing as ORM division multiplexing (ORMDM). Let’s assume that ORMDM is used in CO-ORMDM systems. Then, the ORMs are inherent modes of the coherent optical receiver (COR) in the CO-ORMDM system [1013]. They are orthogonal to each other and form a complete set as well. Thus the CO-ORMDM can yield high SE values. In a CO-ORMDM system the optical channel is a linear sum of ORM subchannels, as will be explained in Section II. The ORM subchannels have wider spectra and higher baud rates than those of CO-OFDM systems. Thus the foregoing PAPR and phase noise problems can be mitigated, as in wavelet OFDM [14, 15]. In this case, how is one to detect each ORM subchannel separately, in real time? To this end, we propose a new COR that uses an ORM subcarrier as its local oscillator (LO). ### II. ORMDM Let us consider the ith optical channel in a CO-ORMDM system. The electric field of this optical channel can be written as e(t) = Re{E(t)exp( it)}, where ωi is the center angular frequency of the ith optical channel, and E(t) is the complex electric field amplitude (CEFA). For a single ORM signal localized at t = 0, E(t) is a linear sum of ORM mode functions ψn(t){n = 0, 1, 2, …} [8] $E(t)=∑n=0M−1anψn(t)$ where M is the number of ORMs used for the ORMDM. The complex mode coefficient an includes the data to be transmitted. We call the t = 0 time the origin of this ORM signal [8, 9]. The mode functions are real and complete, satisfying the orthogonality relation [8] $∫−∞∞ dtψm(t)ψn(t)=δmn$ where δmn is the Kronecker delta function. Taking the Fourier transform of both sides of Eq. (1), we have an alternative form of the ORM signal that is more convenient than Eq. (1) in many cases: $ε(ω)=∑ n=0 M−1anϕn(ω)$, where ε(ω) and ϕn(ω) are the CEFA and the nth ORM mode function in the optical frequency domain respectively [8]. The mode functions ϕn(ω){n = 0, 1, 2, …} are also complete and satisfy the orthogonality relation $∫−∞∞ dωϕm*(ω)ϕn(ω)=2πδmn$−∞ dωϕ*m(ω)ϕn(ω) = 2πδmn. With all ORM signals present, the full expression for E(t) can be written as $E(t)=∑ n=0 M−1∑l=−∞∞a n,lψn(t−lT)$ where T is the period of the ORM signal, and we use the simplification an,0 = an. The optical channel is a linear sum of ORM subchannels. The CEFA of the mth ORM subchannel is $∑ l=−∞∞am,lψm(t−lT)$. The mth ORM subchannel before its modulation is the mth ORM subcarrier, the CEFA of which is given by $∑ l=−∞∞ψm(t−lT)$. The CEFA of the ORM subcarriers and the origin times are shown in Fig. 1, for the three lowest-order ORMs. Figure 1. Complex electric field amplitudes of the three lowest-order optical receiver mode subcarriers. ### III. PROPOSED COR In Fig. 2, we illustrate the proposed COR that detects the mth ORM subchannel of the ith optical channel. This COR will be denoted as CORi,m. Let us first discuss the characteristics of the direct-detection unit (DDU) within the dotted box in Fig. 2. The DDU defines the ORM set, and is the key element of the COR. 
The optical filter (OF) demultiplexes the received optical wavelength division multiplexing (WDM) channels. The photodetector (PD) is an ideal one, and its frequency response is absorbed by the electrical filter (EF). The EF filters out beat noises generated by the amplified spontaneous emission (ASE) optical noise. The ORMs and related quantities are dependent on the optical channel index i, but we will not use the optical channel index here explicitly, except in CORi,m. Figure 2. The CORi,m that detects the mth ORM subchannel within the ith optical channel. COR, coherent optical receiver; ORM, optical receiver mode; WDM, wavelength division multiplexing; LO, local oscillator; LD, laser diode; MOD, optical modulator; DDU, direct-detection unit; OF, optical filter; PD, photodetector; EF, electrical filter; DSP, digital signal processing. If the CEFA at the DDU input is εDD(ω), then the output voltage of the DDU can be evaluated as [10, 13] $yDD(t)=k8π2∫−∞∞ dω ∫ −∞ ∞dω' exp{j(ω'−ω)t}⋅εDD*(ω)K(ω,ω')εDD(ω')$ The kernel is given by K(ω, ω') = H*o(ω)He(ω' − ω)Ho(ω'); it has the Hermitian property K(ω, ω') = K*(ω', ω). Ho(ω) is the transfer function of the OF between the CEFA input to the OF and the CEFA output from the OF, in the optical frequency domain. He(ω) is the transfer function of the EF between the current input to the EF and the voltage output from the EF, in the optical frequency domain. k is a proportionality constant. ϕn(ω) satisfies the following homogeneous Fredholm integral equation of the 2nd kind with real eigenvalue λn: $ϕn(ω)=λn∫−∞∞ dω'K(ω,ω')ϕn(ω')$ Exact solutions of Eq. (5) are often not available, in which case ϕn(ω) and λn are found numerically [10]. The eigenvalues are positive in general, and increase indefinitely as n increases: λ0 ≤ λ1 ≤ λ2 ≤ …. As for the CORi,m in Fig. 2, the LO’s output light is proportional to the mth ORM subcarrier having the same center wavelength as the ith optical channel. Only the in-phase detection part is shown, for brevity. The quadrature-phase detection part is hidden, and has the same structure except for the 90° phase shift for the LO [3]. For ease of explanation, we also assume that the received optical WDM channels and the LO have the same polarization. The transmitted optical WDM channels are applied to the 3-dB coupler, along with the LO light. The two outputs of the 3-dB coupler are directed to the two DDU inputs. The two DDUs use the same kinds of devices and have the same ORM set. Let us assume that the CEFAs at the upper and the lower DDU inputs in the optical frequency domain are ε1(ω) + ε1(ω) and ε1(ω) − ε2(ω) respectively [3]. ε1(ω) is from the optical WDM channels and ε2(ω) is from the LO. If we denote the voltages at the upper and the lower DDU outputs as y+(t) and y(t) respectively, then using Eq. (4) we obtain $y±(t)=k8π2∫−∞∞ dω ∫ −∞ ∞dω' exp{j(ω'−ω)t}⋅{ε1*(ω)±ε2*(ω)}K(ω,ω'){ε1(ω')±ε2(ω')}$ Then the differential output voltage y(t) = y+(t) − y(t) is $y(t)=k4π2∫−∞∞ dω ∫ −∞ ∞ dω'⋅ ε1* (ω)K(ω,ω')ε 2 (ω')exp{j(ω'−ω)t} +ε 2* (ω)K(ω,ω')ε1 (ω')exp{j(ω'−ω)t}$ We exchange the integration variables ω and ω' within the first term of Eq. (7) to find $y(t)=k2π2∫−∞∞ dω ∫ −∞ ∞ dω'⋅Re ε 2* (ω)K(ω,ω')ε1 (ω')exp{j(ω'−ω)t}$ where we have used the Hermitian property of K(ω', ω). Equation (8) is the result for the in-phase detection. Changing ε2(ω) to jε2(ω) in Eq. (8), we can also find the result for the quadrature-phase detection. 
With these two detection results, we can build a complex received signal for the CORi,m as follows: $Ym(t)=k2π2∫−∞∞ dω ∫ −∞ ∞dω'⋅ε2*(ω)K(ω,ω')ε1(ω')exp{j(ω'−ω)t}$ Now, we assume that only a single ORM signal of the ith optical channel is received, and exclude any crosstalk for the reception of this ORM signal. We use $∑ n=0 M−1anϕn(ω)$ as the signal part of ε1(ω). The ASE noise part of ε1(ω) can be expressed as $∑ n=0∞χnϕn(ω)$. Also, we use a single ORM waveform for the LO: ε2(ω) = cmϕm(ω), where cm is a complex constant. Then we obtain $Ym(t)=4cm*∑ n=0 M−1(an+χn)ymn(t)+∑ n=M∞χnymn(t)$ where ymn(t) is defined as [8, 9] $Ym(0)=kcm*(am+χm)/πλm$ Since ymn(0) = δmnk / 4πλm, we have $Ym(0)=kcm*(am+χm)/πλm$ which is our main result. It tells us that at t = 0 we can find the transmitted data of the mth ORM subchannel directly, without the heavy use of DSP circuits. Note that this property is unique to the CO-ORMDM, because of the kernel K(ω, ω') in Eq. (11). The t = 0 time point is equal to the origin of the ORM signal Eq. (1). If we regard the maximum point of the zeroth ORM waveform as the center of the ORM signal, each origin time in Fig. 1 has an offset from the center of the corresponding ORM signal, reflecting the delay of the DDU filters. Let us decompose the mode coefficient am as am = γζmzm, where zm is the transmitted data. γ is a proportionality constant, and ζm is a dimensionless real constant related to the optical power of the ORM subchannel. The mode coefficient χm is decomposed as χm = γnm, where nm is a dimensionless zero-mean complex Gaussian random variable. All of the real and imaginary parts of {n0, n1, n2, ...} are mutually independent of each other, having identical variance [10]. With these decompositions we obtain $Y¯m(0)=zm+nm/ζm$ where m(t) = Ym(t)πλm / c*mγζmk. Only the mth ORM contributions appear in Eq. (13), for both the signal and the ASE noise. In addition, we need to include the effects from other ORM signals in the same and adjacent optical channels, and we need to use the full subcarrier expression for the LO. These modifications will add crosstalk terms to the right-hand side of Eq. (13). The phase and the polarization diversities can be included as per [3]. In this section we present the waveforms of Ym(t) for the foregoing case of single-ORM-signal input. We assume that Ho(ω) and He(ω) are Gaussian [13]. The eigenvalue of the mth ORM is λm = λ0 / qm , where q is a positive quantity smaller than 1. The expressions for λ0 and q are in [13]. From Eq. (12), Ym(0) is proportional to qm. To use higher-order ORMs, we need to use high q values, which can be done by decreasing the EF bandwidth compared to the OF bandwidth [13]. Thus we choose 3-dB bandwidths |Ho (ω)|2 and |He(ω)|2 as 100 GHz and 10 GHz respectively, which gives q = 0.754. For simplicity, we set ζm = 1 for all m. Neglecting the ASE, we have from Eq. (10) $Y¯m(t)=∑ n=0 M−1znymn(t)/qmyc$ where yc = k / 4πλ0 [8]. We show the real and imaginary parts of m(t) in Fig. 3, denoted as Rm(t) and Im(t) respectively. We set M = 3 with z0 = 1 + j, z1 = −1 + 3j, and z2 = 3 + j. Note that m(0) = zm for m = 0, 1, 2, 3, where z3 = 0 naturally. Figure 3. Real and imaginary parts of the waveforms of m(t), denoted as Rm(t) and Im(t) respectively. The three lowest-order optical receiver mode subchannels are transmitted (M = 3) with z0 = 1 + j, z1 = −1 + 3j, and z2 = 3 + j. (a) m = 0, (b) m = 1, (c) m = 2, (d) m = 3. ### V. 
### V. DISCUSSION

The proposed COR can be applied to conventional optical communication systems, to upgrade their SEs. From the ORMDM point of view, the optical signal of a conventional optical channel, excluding the OFDM, can be regarded as an ORM signal whose mode coefficients {an, n = 0, 1, 2, …} are dependent on each other. If one of the ORMs (say the mth ORM) has an independent am, we have more freedom and can increase the SE. In this case we need two CORs. One uses the mth ORM subcarrier as its LO, while the other uses the sum of the rest of the ORM subcarriers as its LO. This procedure can be extended to other ORMs, until the ORMDM is fully utilized.

The COR can be used in multiple-access networks [16]. If only one ORM subchannel is to be dropped at an optical node, for example, we use one COR at that node. If we detect multiple optical channels and ORM subchannels, we use a WDM demultiplexer (DMUX) and place optical splitters after the WDM DMUX, as shown in Fig. 4. We place one COR at the end of each optical-splitter arm. The optical splitters can be integrated [17], and their losses can be compensated by the optical amplifier (OA) before the WDM DMUX. We can also use gain materials for the optical splitter [18]. If we use the WDM DMUX as the DDU's OF in Fig. 2, we may remove all OFs from within the CORs. Then the modulation signal used to obtain the LO in Fig. 2 is modified to make the LO light appear as if it had already passed through the OF.

Figure 4. Receiver side of a CO-ORMDM system to detect multiple optical channels and ORM subchannels. CO, coherent optical; ORM, optical receiver mode; ORMDM, ORM division multiplexing; OA, optical amplifier; WDM, wavelength division multiplexing; DMUX, demultiplexer; COR, coherent optical receiver.

We perform numerical simulations for the foregoing CO-ORMDM system. The 256-quadrature-amplitude-modulation (QAM) code is used for the modulation of the ORM subcarriers. The optical channel power is fixed here by allocating smaller optical power evenly to the ORM subchannels as M is increased from 1 to 12. Thus we set $\zeta_m = 1/M$ in Eq. (13). For M = 1, we assume that the ASE noise term in Eq. (13) is below 5% of the minimum constellation distance (MCD). If we want to get error-free results in this case, the crosstalk fluctuations should be kept below 45% of the MCD. Under this condition, we have an SE of 6.0 bit s⁻¹ Hz⁻¹. For M = 6, the ASE noise term in Eq. (13) is below 8.7% of the MCD, and we have an SE of 9.8 bit s⁻¹ Hz⁻¹. Similarly, for M = 12 the SE is 11.0 bit s⁻¹ Hz⁻¹, exhibiting saturation behavior. If the ORM subchannels have unequal optical powers and unequal QAM codes, we can increase the SE further. Also, with forward error-correction methods the SE can be increased even further [4, 19].

In contrast to CO-OFDM, real-time operation is possible in our CO-ORMDM system. The speed of the CORs in our CO-ORMDM system is not limited by the DSP circuits, as Eq. (12) shows. As for the transmitter of the CO-ORMDM system, we may use a single in-phase/quadrature optical modulator driven by two digital-to-analog converters (DACs) to produce one or a few subchannels. Then the DSP circuit limits can be avoided, and real-time operation can be attained. The digital inputs to the DACs can be modified to pre-compensate for the optical-fiber dispersion, etc. As a reference, a 100-GS/s DAC can generate electrical signals that are about 25 GHz in bandwidth [20].
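Returning to the Eq. (13) decision model used in the simulations above, a toy MATLAB check of 256-QAM reception is easy to write. The noise level below is an assumed illustration only; it is not calibrated to the paper's ASE conditions, crosstalk budget, or SE figures.

```matlab
% Toy model of Ybar_m(0) = z_m + noise, with 256-QAM data (no toolbox needed).
Nsym = 1e5;
lv   = (-15:2:15).';                    % 16 amplitude levels per quadrature
z    = lv(randi(16,Nsym,1)) + 1j*lv(randi(16,Nsym,1));   % 256-QAM symbols
mcd  = 2;                               % minimum constellation distance
sig  = 0.05*mcd;                        % assumed: total noise term ~5% of the MCD
n    = sig/sqrt(2)*(randn(Nsym,1) + 1j*randn(Nsym,1));
r    = z + n;                           % received decision variable
slc  = @(v) min(max(2*round((v-1)/2)+1, -15), 15);  % slice to nearest level
zhat = slc(real(r)) + 1j*slc(imag(r));
ser  = mean(zhat ~= z)                  % essentially zero at this noise level
```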
To increase the channel bandwidth beyond the limit of the DACs, we can use optical arbitrary-waveform generators (OAWGs) to obtain ORM subcarriers from mode-locked laser diodes [21-23]. Modulating the ORM subcarriers, we get ORM subchannels. Also, there is no limit from the DSP circuits in this case. To make the proposed CO-ORMDM system more practical, we could integrate photonic devices. Similar work has been done for the conventional COR [24, 25].

### VI. CONCLUSION

Building each optical channel as a linear sum of multiple ORM subchannels, CO-ORMDM systems can attain high SEs. To detect the ORM subchannels selectively, we have introduced a new COR that is fast and does not require heavy use of DSP circuits. Thus, in contrast to CO-OFDM, real-time operation is possible in our CO-ORMDM system. In addition, we can use OAWGs to increase the channel bandwidth beyond the limit of the DACs. With photonic integration, CO-ORMDM systems using the proposed CORs can be made simple and practical.

### DISCLOSURES

The author declares no conflicts of interest.

### DATA AVAILABILITY

Data underlying the results presented in this paper are not publicly available at the time of publication, but may be obtained from the authors upon reasonable request.

### ACKNOWLEDGMENT

The work reported in this paper was conducted during the sabbatical year of Kwangwoon University in 2019.

### Fig 1.

Figure 1. Complex electric field amplitudes of the three lowest-order optical receiver mode subcarriers.

### References

1. W. Klaus, P. J. Winzer, and K. Nakajima, “The role of parallelism in the evolution of optical fiber communication systems,” Proc. IEEE 110, 1619-1654 (2022).
2. B. J. Puttnam, R. S. Luís, G. Rademacher, M. Mendez-Astudillio, Y. Awaji, and H. Furukawa, “S-, C- and L-band transmission over a 157 nm bandwidth using doped fiber and distributed Raman amplification,” Opt. Express 39, 10011-10018 (2022).
3. K. Kikuchi, “Fundamentals of coherent optical fiber communications,” J. Light. Technol. 34, 157-179 (2016).
4. P. J. Winzer, “High-spectral-efficiency optical modulation formats,” J. Light. Technol. 30, 3824-3835 (2012).
5. I. B. Djordjevic and B.
Vasic, “Orthogonal frequency division multiplexing for high-speed optical transmission,” Opt. Express 14, 3767-3775 (2006).
6. D. Qian, M.-F. Huang, E. Ip, Y.-K. Huang, Y. Shao, J. Hu, and T. Wang, “101.7-Tb/s (370×294-Gb/s) PDM-128QAM-OFDM transmission over 3×55-km SSMF using pilot-based phase noise mitigation,” in Optical Fiber Communication Conference Technical Digest 2011 (Optical Society of America, 2011), paper PDPB5.
7. T. Omiya, M. Yoshida, and M. Nakazawa, “400 Gbit/s 256 QAM-OFDM transmission over 720 km with a 14 bit/s/Hz spectral efficiency by using high-resolution FDE,” Opt. Express 21, 2632-2641 (2013).
8. J. S. Lee, “Optical signals using superposition of optical receiver modes,” Curr. Opt. Photonics 1, 308-314 (2017).
9. B. Batsuren, K. H. Seo, and J. S. Lee, “Optical communication using linear sums of optical receiver modes: proof of concept,” IEEE Photonics Technol. Lett. 30, 1707-1710 (2018).
10. J. S. Lee and C. S. Shim, “Bit-error-rate analysis of optically preamplified receivers using an eigenfunction expansion method in optical frequency domain,” J. Light. Technol. 12, 1224-1229 (1994).
11. E. Forestieri, “Evaluating the error probability in lightwave systems with chromatic dispersion, arbitrary pulse shape and pre- and postdetection filtering,” J. Light. Technol. 18, 1493-1503 (2000).
12. R. Holzlohner, V. S. Grigoryan, C. R. Menyuk, and W. L. Kath, “Accurate calculation of eye diagrams and bit error rates in optical transmission systems using linearization,” J. Light. Technol. 20, 389-400 (2002).
13. J. S. Lee and A. E. Willner, “Analysis of Gaussian optical receivers,” J. Light. Technol. 31, 2687-2693 (2013).
14. A. Li, W. Shieh, and R. Tucker, “Wavelet packet transform-based OFDM for optical communications,” J. Light. Technol. 28, 3519-3528 (2010).
15. A. Güner and A. Özen, “Lifting wavelet transform based multicarrier modulation scheme for coherent optical communication systems,” J. Light. Technol. 39, 4255-4261 (2021).
16. E. Wong, “Next-generation broadband access networks and technologies,” J. Light. Technol. 30, 597-608 (2012).
17. K. Nara, N. Matsubara, and H. Kawashima, “Monolithically integrated 1×32 optical splitter/router using low loss ripple MZI-based WDM filter and low loss Y-branch circuit,” in Optical Fiber Communication Conference 2006 (Optical Society of America, 2006), paper OWO1.
18. J. D. B. Bradley, R. Stoffer, A. Bakker, L. Agazzi, F. Ay, K. Wörhoff, and M. Pollnau, “Integrated Al2O3:Er3+ zero-loss optical amplifier and power splitter with 40-nm bandwidth,” IEEE Photonics Technol. Lett. 22, 278-280 (2010).
19. S. Y. Kim, K. H. Seo, and J. S. Lee, “Spectral efficiencies of channel-interleaved bidirectional and unidirectional ultradense WDM for metro applications,” J. Light. Technol. 30, 229-233 (2012).
20. H. Huang, J. Heilmeyer, M. Grozing, M. Berroth, J. Leibrich, and W. Rosenkranz, “An 8-bit 100-GS/s distributed DAC in 28-nm CMOS for optical communications,” IEEE Trans. Microw. Theory Tech. 63, 1211-1218 (2015).
21. S. T. Cundiff and A. M. Weiner, “Optical arbitrary waveform generation,” Nat. Photonics 4, 760-767 (2010).
22. J. Dunayevsky and D. M. Marom, “MEMS spatial light modulator for phase and amplitude modulation of spectrally dispersed light,” J. Microelectromech. Syst. 22, 1213-1221 (2013).
23. H. Tsuda, Y. Tanaka, T. Shioda, and T. Kurokawa, “Analog and digital optical pulse synthesizers using arrayed-waveguide gratings for high-speed optical signal processing,” J. Light. Technol. 26, 670-677 (2008).
24. Z. Xuan and F.
Aflatouni, “Integrated coherent optical receiver with feed-forward carrier recovery,” Opt. Express 28, 16073-16088 (2020).
25. Y. Wang, X. Li, Z. Jiang, L. Tong, W. Deng, X. Gao, X. Huang, H. Zhou, Y. Yu, L. Ye, X. Xiao, and X. Zhang, “Ultrahigh-speed graphene-based optical coherent receiver,” Nat. Commun. 12, 5076 (2021).
## Journal of Differential Geometry

### Weighted Projective Embeddings, Stability of Orbifolds, and Constant Scalar Curvature Kähler Metrics

#### Abstract

We embed polarised orbifolds with cyclic stabiliser groups into weighted projective space via a weighted form of Kodaira embedding. Dividing by the (non-reductive) automorphisms of weighted projective space then formally gives a moduli space of orbifolds. We show how to express this as a reductive quotient and so a GIT problem, thus defining a notion of stability for orbifolds. We then prove an orbifold version of Donaldson's theorem: the existence of an orbifold Kähler metric of constant scalar curvature implies K-semistability. By extending the notion of slope stability to orbifolds, we therefore get an explicit obstruction to the existence of constant scalar curvature orbifold Kähler metrics. We describe the manifold applications of this orbifold result, and show how many previously known results (Troyanov, Ghigi-Kollár, Rollin-Singer, the AdS/CFT Sasaki-Einstein obstructions of Gauntlett-Martelli-Sparks-Yau) fit into this framework.

#### Article information

Source: J. Differential Geom., Volume 88, Number 1 (2011), 109-159.

Dates: First available in Project Euclid: 4 October 2011
### Exponential Function and Value of e

Practice limits of exponential functions, graphs of exponential functions and e, and finding the inverse of functions which contain an exponential function in them.

# Exponential Function

- If \(a>0\) and \(a\neq1\), then \(f(x)=a^x\) is a continuous function whose domain is \(\mathbb{R}\) and range is \((0,\,\infty)\).
- \(a^x>0\) for all real values of \(x\).
- \(a^x\) is an increasing function if \(a>1\).
- \(a^x\) is a decreasing function if \(0<a<1\).
- \(a^n=a\times a\times\cdots\times a\) (\(n\) times) if \(n\in\mathbb{N}\).
- \(a^0=1\) for all \(a\).
- \(a^{-n}=\dfrac{1}{a^n}\) for \(n\in\mathbb{N}\).
- \(a^{p/q}=\left(a^{1/q}\right)^p\).

#### Find the exponential function of the form \(f(x)=k\,a^x\) whose graph is as shown.

A \(f(x)=2\left(3^x\right)\)  B \(f(x)=4\left(5^x\right)\)  C \(f(x)=3\left(4^x\right)\)  D \(f(x)=2\left(5^x\right)\)

×
The graph is increasing \(\Rightarrow a>1\).
From the graph, \(f(0)=3\) \(\Rightarrow 3=k\times a^0\) \(\Rightarrow k=3\).
From the graph, \(f(2)=48\) \(\Rightarrow 48=k\times a^2\) \(\Rightarrow 48=3a^2\) \(\Rightarrow a^2=16\) \(\Rightarrow a=4\) (reject \(-4\)).
\(\therefore f(x)=3\times4^x\)

### Find the exponential function of the form \(f(x)=k\,a^x\) whose graph is as shown.

A \(f(x)=2\left(3^x\right)\)  B \(f(x)=4\left(5^x\right)\)  C \(f(x)=3\left(4^x\right)\)  D \(f(x)=2\left(5^x\right)\)

Option C is Correct

# Graphs of f(x) = aˣ

- \((0,\,1)\) is a point on all graphs of the form \(f(x)=a^x\), since \(a^0=1\) for all \(a\).
- For \(0<a<1\), the graph of \(f(x)=a^x\) is decreasing, and it decreases more rapidly as \(a\) gets closer to 0.
- For \(a>1\), the graph grows more rapidly as the value of \(a\) increases. This is because numbers greater than 1 keep increasing when raised to higher powers, while numbers between 0 and 1 keep decreasing, e.g. \((0.2)^2=0.04\) and \((0.2)^3=0.008\), while \((1.2)^2=1.44\) and \((1.2)^3=1.728\).
- \(\therefore a^x\) is an increasing function of \(x\) when \(a>1\), and a decreasing function of \(x\) when \(0<a<1\).

#### Consider the graphs of three functions on the same x-y axes. Which of the following is the correct statement?

A (1) is the graph of \(f(x)=(1.7)^x\); (2) is the graph of \(f(x)=3^x\); (3) is the graph of \(f(x)=5^x\)
B (1) is the graph of \(f(x)=7^x\); (2) is the graph of \(f(x)=3^x\); (3) is the graph of \(f(x)=(1.2)^x\)
C (1) is the graph of \(f(x)=5^x\); (2) is the graph of \(f(x)=2^x\); (3) is the graph of \(f(x)=10^x\)
D (1) is the graph of \(f(x)=(1.8)^x\); (2) is the graph of \(f(x)=10^x\); (3) is the graph of \(f(x)=5^x\)

×
(3) is the steepest graph, then (2); (1) is the slowest-growing graph.
\(\therefore\) \(a\) should be greatest for (3) and least for (1).
Hence, option (A) is correct.

### Consider the graphs of three functions on the same x-y axes. Which of the following is the correct statement?

A (1) is the graph of \(f(x)=(1.7)^x\); (2) is the graph of \(f(x)=3^x\); (3) is the graph of \(f(x)=5^x\)
B (1) is the graph of \(f(x)=7^x\); (2) is the graph of \(f(x)=3^x\); (3) is the graph of \(f(x)=(1.2)^x\)
C (1) is the graph of \(f(x)=5^x\); (2) is the graph of \(f(x)=2^x\); (3) is the graph of \(f(x)=10^x\)
D (1) is the graph of \(f(x)=(1.8)^x\); (2) is the graph of \(f(x)=10^x\); (3) is the graph of \(f(x)=5^x\)

Option A is Correct

# Definition of the Number \(e\)

- \(e\) is the number such that \(\lim\limits_{h\to 0}\dfrac{e^h-1}{h}=1\).
- Consider \(\dfrac{d}{dx}(a^x)=\lim\limits_{h\to 0}\dfrac{a^{x+h}-a^x}{h}=a^x\,\lim\limits_{h\to 0}\dfrac{a^h-1}{h}\).
- \(\therefore\) If \(f(x)=a^x\), then \(f'(x)=f'(0)\times f(x)\).
- \(\therefore\) The rate of change of any exponential function is proportional to the function itself.
- If \(a=e\), then \(\dfrac{d}{dx}e^x=e^x\).
- By the chain rule, \(\dfrac{d}{dx}e^u=e^u\dfrac{du}{dx}\), where \(u\) is any function of \(x\).

#### If \(f(x)=(2x^5-3x)\,e^x\), find \(f'(x)\).

A \(e^x[2x^5+10x^4-3x-3]\)  B \(e^x[5x^5-x^4+x^3+3]\)  C \(e^x[5x^4-6x^3+8x+1]\)  D \(e^x[10x^5-4x^3+x+7]\)

×
\(f(x)=(2x^5-3x)e^x\)
\(\Rightarrow f'(x)=\underbrace{(2x^5-3x)\dfrac{d}{dx}(e^x)+e^x\dfrac{d}{dx}(2x^5-3x)}_{Product\;Rule}\)
\(=(2x^5-3x)e^x+e^x\,[10x^4-3]\)
\(=e^x[2x^5-3x+10x^4-3]\)
\(=e^x\,[2x^5+10x^4-3x-3]\)

### If \(f(x)=(2x^5-3x)\,e^x\), find \(f'(x)\).

A \(e^x[2x^5+10x^4-3x-3]\)  B \(e^x[5x^5-x^4+x^3+3]\)  C \(e^x[5x^4-6x^3+8x+1]\)  D \(e^x[10x^5-4x^3+x+7]\)

Option A is Correct

# Finding the Inverse of a Function which Contains an Exponential Function

To find the inverse of a function:
1. Let \(y = f(x)\) be the given function.
2. Solve for \(x\) in terms of \(y\).
3. Interchange \(x\) and \(y\); the new \(y\) obtained is the required inverse.

#### Find the inverse function of the function \(f(x)=\dfrac{e^x}{2+3e^x}\).

A \(f^{-1}(x)=\ln x\)  B \(f^{-1}(x)=\ln\Bigg(\dfrac{2x}{1-3x}\Bigg)\)  C \(f^{-1}(x)=e^{x^2}\)  D \(f^{-1}(x)=\ln\Bigg(\dfrac{5x}{1+x}\Bigg)\)

×
\(y=f(x)\Rightarrow y=\dfrac{e^x}{2+3e^x}\)
\(\Rightarrow 2y+3y\,e^x=e^x\)
\(\Rightarrow e^x(1-3y)=2y\)
\(\Rightarrow e^x=\dfrac{2y}{1-3y}\)
\(\Rightarrow x=\ln\dfrac{2y}{1-3y}\) (take logs to base \(e\) on both sides)
Interchange \(x\) and \(y\): \(y=\ln\Bigg(\dfrac{2x}{1-3x}\Bigg)=f^{-1}(x)\)

### Find the inverse function of the function \(f(x)=\dfrac{e^x}{2+3e^x}\).

A \(f^{-1}(x)=\ln x\)  B \(f^{-1}(x)=\ln\Bigg(\dfrac{2x}{1-3x}\Bigg)\)  C \(f^{-1}(x)=e^{x^2}\)  D \(f^{-1}(x)=\ln\Bigg(\dfrac{5x}{1+x}\Bigg)\)

Option B is Correct

#### Consider the graphs of three functions on the same x-y axes. Which of the following is the correct statement?
A (1) is the graph of \(f(x)=\left(\frac{1}{10}\right)^x=10^{-x}\); (2) is the graph of \(f(x)=\left(\frac{1}{9}\right)^x=9^{-x}\); (3) is the graph of \(f(x)=\left(\frac{1}{7}\right)^x=7^{-x}\)
B (1) is the graph of \(f(x)=\left(\frac{1}{8}\right)^x=8^{-x}\); (2) is the graph of \(f(x)=\left(\frac{1}{10}\right)^x=10^{-x}\); (3) is the graph of \(f(x)=\left(\frac{1}{6}\right)^x=6^{-x}\)
C (1) is the graph of \(f(x)=\left(\frac{1}{6}\right)^x=6^{-x}\); (2) is the graph of \(f(x)=\left(\frac{1}{2}\right)^x=2^{-x}\); (3) is the graph of \(f(x)=\left(\frac{1}{3}\right)^x=3^{-x}\)
D (1) is the graph of \(f(x)=\left(\frac{1}{7}\right)^x=7^{-x}\); (2) is the graph of \(f(x)=\left(\frac{1}{5}\right)^x=5^{-x}\); (3) is the graph of \(f(x)=\left(\frac{1}{9}\right)^x=9^{-x}\)

×
(3) is the steepest declining graph, then (2), then (1).
\(\therefore\) \(a\) should be the least for (3) and the greatest for (1).
Hence, option (A) is correct.

### Consider the graphs of three functions on the same x-y axes. Which of the following is the correct statement?

A (1) is the graph of \(f(x)=10^{-x}\); (2) is the graph of \(f(x)=9^{-x}\); (3) is the graph of \(f(x)=7^{-x}\)
B (1) is the graph of \(f(x)=8^{-x}\); (2) is the graph of \(f(x)=10^{-x}\); (3) is the graph of \(f(x)=6^{-x}\)
C (1) is the graph of \(f(x)=6^{-x}\); (2) is the graph of \(f(x)=2^{-x}\); (3) is the graph of \(f(x)=3^{-x}\)
D (1) is the graph of \(f(x)=7^{-x}\); (2) is the graph of \(f(x)=5^{-x}\); (3) is the graph of \(f(x)=9^{-x}\)

Option A is Correct

# Limits of Exponential Function

- If \(a>1\), then (1) \(\lim\limits_{x\to \infty}a^x=\infty\) and (2) \(\lim\limits_{x\to -\infty}a^x=0\).
- If \(0<a<1\), then (1) \(\lim\limits_{x\to \infty}a^x=0\) and (2) \(\lim\limits_{x\to -\infty}a^x=\infty\).
- The \(x\)-axis is always a horizontal asymptote of the exponential function \(y=a^x\).

#### Find \(\lim\limits_{x\to -\infty}\left(\dfrac{2^x-1}{3}\right)\)

A \(-\dfrac{1}{3}\)  B \(\infty\)  C \(\dfrac{1}{3}\)  D \(-\infty\)

×
\(\ell=\lim\limits_{x\to\,-\infty}\left(\dfrac{2^x-1}{3}\right)=\dfrac{1}{3}\lim\limits_{x\to\,-\infty}\left(2^x-1\right)\)
Now, \(\lim\limits_{x\to\,-\infty}2^x=0\)
\(\therefore\,\ell=\dfrac{1}{3}[0-1]=-\dfrac{1}{3}\)

### Find \(\lim\limits_{x\to -\infty}\left(\dfrac{2^x-1}{3}\right)\)

A \(-\dfrac{1}{3}\)  B \(\infty\)  C \(\dfrac{1}{3}\)  D \(-\infty\)

Option A is Correct
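As a quick numerical cross-check of the worked limit (a sketch added for illustration, not part of the original practice set), evaluating the function at increasingly negative \(x\) in MATLAB shows the convergence to \(-\frac{1}{3}\):

```matlab
% Numerical check: (2^x - 1)/3 approaches -1/3 as x goes to -infinity.
f = @(x) (2.^x - 1)/3;
x = [-1 -5 -10 -20 -40].';
[x, f(x)]            % second column approaches -1/3 = -0.3333...
```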
## Welcome to Serendeputy!

Serendeputy is your personal news assistant. It:
- learns what you like and don't like,
- lovingly compiles a list of news and blogs for you.

How it works. What to do:
2. Click smileys and frownies
3. Find favorite topics and sources
4. See how much better your deputy is getting at finding you good stuff.

# Stats Stack Exchange

For my masters thesis in corporate finance I'm doing a research about debt concentration (i.e. companies using several debt types or only 1, measured by HHI index) I've got several determinants and some control variables. My data consists of 24503 observations,...
From: Stats Stack Exchange | By: Brx | Sunday, May 1, 2016

I am implementing a vanilla variational mixture of multivariate Gaussians, as per Chapter 10 of Pattern Recognition and Machine Learning (Bishop, 2007). The Bayesian approach requires to specify parameters for the Gaussian-inverse-Wishart prior: $\alpha_0$...
From: Stats Stack Exchange | By: lacerbi | Friday, April 29, 2016

I am reading Chris Bishop's Pattern Recognition and Machine Learning. In Section 2.3.5 he introduces some ideas on the contribution of the $n$th observation in a data set to the maximum likelihood estimator of the mean. He says that the larger number...
From: Stats Stack Exchange | By: cgo | Friday, April 29, 2016

I computed a A x B (2 x 2) within subject ANOVA for a given ROI using repeated measures GLM. The interaction between A and B was not significant, but two main effects were detected. Can I still compare A1 vs A2 within B1 or within B2 using paired ttest...
From: Stats Stack Exchange | By: ping yang | Friday, April 29, 2016

Can someone explain how the math used to determine that 34 participants were required for this study? To have an 80% chance of detecting a 1.5-percentage point between-group A1C difference as significant (at the two-sided 5% level), with an assumed...
From: Stats Stack Exchange | By: haim | Sunday, May 1, 2016

I am fitting a mixed effects model in R using nlme lme(y~x+I(x^2),random=~x|subject,data=train) Is this the correct way or should it be lme(y~x+I(x^2),random=~x+I(x^2)|subject,data=train) What is the difference in the interpretation of fitting these...
From: Stats Stack Exchange | By: kon7 | Monday, May 2, 2016

Hi i'd like to know a bit more about kNN-like approach implementations for classification problems, and specifically classification problems where we want to have a probability distribution as an output (to compute logloss like metrics for example) In...
From: Stats Stack Exchange | By: Fagui Curtain | Monday, May 2, 2016

I am trying to replicate this paper "Gleditsch, Kristian Skrede and Michael D. Ward. 2006. "Diffusion and the International Context of Democratization", International Organization 50: 911-933" and I have problems finding the gamma coefficients. The base...
From: Stats Stack Exchange | By: Maria | Monday, May 2, 2016

I am working with the following model and am attempting to derivate coordinate ascent updates using mean field variational inference: Sample $p_X \sim Beta(\alpha_1, \alpha_2)$ Sample $p_Y \sim Beta(\alpha_2, \alpha_1)$ For $i \in \{ 1...d\}$, sample...
From: Stats Stack Exchange | By: lrAndroid | Sunday, May 1, 2016

In train or rfe I can only set Accuracy or Kappa. Is there a way to edit the functions to define a scoring function?
I am using Kappa at the moment but I need to optimize for positive predictive Value (= hit rate = fraction of positives recognized as...
From: Stats Stack Exchange | By: user670186 | Sunday, May 1, 2016

A couple weeks back, I was seeing if I could solve the basic formulation of the Birthday Problem (i.e. assuming 365 equally likely birthdays, what's the probability that, given a room of ${n}$ people, at least one pair of people share a birthday). The...
From: Stats Stack Exchange | By: ZombieSocrates | Sunday, May 1, 2016

I have two pivot tables, one with gallons of gas consumed prior to treatment, and a second with gallons of gas consumed after treatment, which is a mixture added to the full gas tank. See image below. I have the pivot table containing a subset of data...
From: Stats Stack Exchange | By: Jazzmine | Sunday, May 1, 2016

I have two data sets (base and to_match), each with 10 individuals, grouped in 2 classes. Each individual is described by a set of 4 variables. What I want to do is: test wether the groups in the first dataset (base) are identical, based on all the describing...
From: Stats Stack Exchange | By: Wiliam | Friday, April 29, 2016

What kind of $f(n): \mathbb{N} \to \mathbb{N}$'s make the following statement true? What kind don't? $\limsup A_{f(n)} \subseteq \limsup A_n$ where $n \in \mathbb{N}$ (*) Well obviously the answers to each are: $(f(n) \ | \ \limsup A_{f(n)} \subseteq...
From: Stats Stack Exchange | By: BCLC | Sunday, May 1, 2016

From Williams' Probability with Martingales: $X_n(\omega)$ does not converge to a limit in $[-\infty,\infty]$ --> Is this supposed to be stronger than $\lim X_n$ does not exist? Why do we have Is the part with $$\liminf X_n(\omega) < \limsup X_n(\omega)$$...
From: Stats Stack Exchange | By: BCLC | Sunday, May 1, 2016

If I create a weekly ts time series with <= 188 values in it and plot it I get a "fractional" labeled x axis: x <- ts(rnorm(188,0,1), frequency=52, start=c(2000,1)) plot(x) but if I create a time series with >= 189 values, plot displays the...
From: Stats Stack Exchange | By: Randy Wilson | Sunday, May 1, 2016

I have the following data series: # retrn vix 1 7.44 27.799999 2 14.57 23.4 3 8.03 19.440001 4 4.42 18.43 5 2.27 15.5 6 9.67 17.15 7 -3.44 24.059999 8 8.32 17.08 9 4.65 18.93 10 7.7 17.469999 11 2.87 15.73 12 5.02 18.6 ... retrn - my asset returns (monthly)...
From: Stats Stack Exchange | By: Vingthor | Sunday, May 1, 2016

In part of an experimental trial (n=1), I asked the participant to answer a specific questionnaire (continuous response variable) under the influence of 4 different dosages (dosage 1, 2, 3 and 4) of a same substance. This task was repeated (after a certain...
From: Stats Stack Exchange | By: ynwa_in_stats | Sunday, May 1, 2016

Say I know the distribution of $X-Y$, but I do not know the distributino of $X$ (or $Y$), but I know that they are statistically independent, and I know they have the same distribution. Is the problem of finding the distribution well-defined, as in will...
From: Stats Stack Exchange | By: pkofod | Friday, April 29, 2016

I have a series of monthly returns on financial data. My goal is to estimate the volatility of 10 year rolling returns. I am a bit confused on two options. a) Calculate 10 year rolling returns, annualize this and then calculate the volatility of the...
From: Stats Stack Exchange | By: Jantamanta | Sunday, May 1, 2016

Suppose we compute the correlation PCA of a dataset $X$ (with $m$ variables and $n$ observations) by first normalizing the input variables. That is: mean -> 0 and standard deviation -> 1. Let us assume for the sake of this question that $\mu_i=0$...
From: Stats Stack Exchange | By: Werner Van Belle | Sunday, May 1, 2016

Is M-estimation valid only for regression models or does it's working hold good for robust estimation of parameters in other statistical models? I understand that M-estimators are asymptotically normal for least squares models. Is it also true for any...
From: Stats Stack Exchange | By: user251385 | Sunday, May 1, 2016

I am attempting to model the fluorescent signal emitted by a fluorescent calcium indicator (lights up when there is calcium influx into a cell). According to [1], the following formula works as a workable approximation, under certain conditions: $\Delta...
From: Stats Stack Exchange | By: mowe | Sunday, May 1, 2016

I'm fairly new to statistics - I'm sure this is a basic question but my google searching is failing me. Happy to just be pointed to other reading. I have 3 datasets of varying sizes (N1 ~ 200,000, N2 ~ 80,000, N3 ~ 400). In each dataset, for each sample...
From: Stats Stack Exchange | By: kevbonham | Sunday, May 1, 2016

I am totally new to "machine learning" and am looking for how to get started. Can you point me to a few resources, geared for the beginner, that are excellent starting points? What are the main families of tasks in machine learning? Who are the famous...
From: Stats Stack Exchange | By: Disco Dancer | Sunday, May 1, 2016

This is probably a very basic question; I have a data-frame with a fake questionnaire with three sets of questions measuring three constructs. I'm currently reading some research papers which in order to create the construct aggregate the mean per country,...
From: Stats Stack Exchange | By: John Smith | Sunday, May 1, 2016

I was sort of self-studying a poorly-elaborated lecture note of factorial design. It mentioned that a $2^{9-5}$ design has resolution 3. This is checked with the table below. It has $2^4=16$ runs, and we require $9+1=10$ runs to delineate all main effects....
From: Stats Stack Exchange | By: user2513881 | Sunday, May 1, 2016

I am having some problems with estimating a VAR in R. I am trying to replicate a study from Park and Ratti 2008 Using a time period from January 1997 to February 2016, I have been able to perform KPSS and PP tests, which results resemble the ones in...
From: Stats Stack Exchange | By: fwintherdk | Saturday, April 30, 2016

From wiki: Given a set of independent identically distributed data points $\mathbb{X}=(x_1,\ldots,x_n)$, where $x_i \sim p(x_i|\theta)$ according to some probability distribution parameterized by θ, where θ itself is a random variable described by...
From: Stats Stack Exchange | By: slava_b | Sunday, May 1, 2016

I am using esri arc to generate random points. I then analyze the pattern from this process using Average Nearest Neighbor which is also in esri gis but lets say it can be in any other software. Is there a chance that it comes as dispersed or clustered...
From: Stats Stack Exchange | By: Navid | Sunday, May 1, 2016

I have 5 point likert scale questionnaire as dependant variable..and yes/no questionnaire as independant variable. How do I analyze this with SPSS? I want to find the correlation of these 2 variables and find the relationship.
From: Stats Stack Exchange | By: Anil | Sunday, May 1, 2016

I use arc software to do Moran 1 analysis and it only takes polygons for input. Why is it called point process if it only takes polygons?
From: Stats Stack Exchange | By: Navid | Sunday, May 1, 2016

Let $Y_1 < Y_2 < … < Y_n$ be the order statistics of $n$ independent observations from a continuous distribution with cumulative distribution function $F(x)$ and probability density function: $$f(x)=F′(x)$$ where $0 < F(x) < 1$ over...
From: Stats Stack Exchange | By: Hamid | Sunday, May 1, 2016

I need to generate random point process manually to learn in the same way they do in other software like arc esri. I can use RAND() but I know what I produce then has to be Poisson distribution because that what I see in literature. How can I make sure...
From: Stats Stack Exchange | By: Navid | Sunday, May 1, 2016

I am running the following model in R: model = lmer(Tau ~ ageS*days+YrsOfEds*days+sex*days+tract*days + (1|SubjectID), data=long) With this model I am trying to predict change in tau over time based on the quality of a tract. Both tau and tract are continuous...
From: Stats Stack Exchange | By: HIL | Sunday, May 1, 2016

Let $A$, $B$ be two zero-mean random variables. Let the variance be $\sigma^2_A$, $\sigma^2_B$ and let the correlation be $\sigma_{AB}$. Consider the following expression: $$\mathbb{E}\big[A|B=b\big]$$ When $A,B$ are jointly gaussians the above expression...
From: Stats Stack Exchange | By: Vivek Bagaria | Sunday, May 1, 2016

I know how to find a correlation between 2 variables. How am i supposed to find correlations between multiple variables in r programming and how do i plot a graph for it?
From: Stats Stack Exchange | By: Akshay Sirsikar | Sunday, May 1, 2016

I recently saw* a pmf: $f(y)=\frac{\mu^y}{(y!)^\theta z(\mu,\theta)}$, where $z(\mu,\theta) = \sum_{i=0}^{\infty}\frac{\mu^i}{(i!)^\theta}$. * It is a bonus question on a homework assignment. I am wondering if this belongs to the exponential family?...
From: Stats Stack Exchange | By: Kevin | Sunday, May 1, 2016

I've gone through the theoretical definition of cluster analysis and have learnt the basics of it. But i want to know the advantages of the cluster analysis process and a real time example as to where it is used.
From: Stats Stack Exchange | By: Akshay Sirsikar | Sunday, May 1, 2016

Many statistical software ask whether to standardize data or no: What is a general rule to when data should be standardized? Do we standardize categorical variables? Is there a difference in how standardization effects or in interpreted in different...
From: Stats Stack Exchange | By: kon7 | Sunday, May 1, 2016

Are there some neural networks that can reach state-of-the-art accuracy with two or three hours training, on dataset like CIFAR, MNIST, etc...
From: Stats Stack Exchange | By: Eli He | Sunday, May 1, 2016

Based on public data and using Excel 2010 or after, I want to forecast/predict the football match winner.
From: Stats Stack Exchange | By: ray | Saturday, April 30, 2016

We know that if $\big(X_1,X_2...X_k) \sim multinomial(n;p_1,p_2...p_k)$ then $X_i \sim bin(n;p_i)$ Then, $var(X_i) = np_i(1-p_i)$. But we have $cov(X_i,X_j) = -np_ip_j$. So doesn't that imply $var(X_i) = cov(X_i,X_i) = -np_i^2$? (Which is basically impossible...
From: Stats Stack Exchange | By: RibD | Sunday, May 1, 2016

Can someone please explain how the sample mean and sample variance are independent?
From: Stats Stack Exchange | By: Blueberry | Saturday, April 30, 2016

The two formulations seem identical to me: $H(x) = \sum p(x) log(1/p(x))$ Why is the latter attributed to Shannon rather than Gibbs?
From: Stats Stack Exchange | By: hayer | Saturday, April 30, 2016

If I understand correctly, boxplot() treats numerical group variable values as discrete values and spaces the boxes evenly on the plot. What can I do to produce a boxplot with a horizontal axis scaled for continuous group variable values? (e.g. in SAS...
From: Stats Stack Exchange | By: Amit | Saturday, April 30, 2016

I'm new fish in the water of Game Theory and just got stuck with calculating the discounting rate (or discounting parameter) with a 2x2 matrix. The main condition is that the game is repetitive. Here is the matrix: Here is what I want to learn: (1) how can...
From: Stats Stack Exchange | By: RLearnsStats | Saturday, April 30, 2016

Please tell me break points for each graph. Thank you....
From: Stats Stack Exchange | By: B11b | Saturday, April 30, 2016

I know for regular problems, if we have a best regular unbiased estimator, it must be the mle. But generally, if we have an unbiased mle, would it also be a best unbiased estimator (or maybe I should call it umvue, as long as it has the smallest...
From: Stats Stack Exchange | By: Gary Cheng | Saturday, April 30, 2016

I have a neural network that I trained on 32 * 32 px size images. Can I use these filters learned from the network on larger images not used in training the network such as a 600 * 800 px image? Or does it not make any sense to apply filters that were...
From: Stats Stack Exchange | By: Kevin | Saturday, April 30, 2016
# Doug's MATLAB Video Tutorials

## Cryptography in MATLAB: Code review

This video assumes you have watched this video that outlines a simple encryption algorithm. The video shows a quick code review of my algorithm. Warning: this video is part of a longer series; I have purposefully left in one syntax error and one inefficiency so that we can use the debugger and profiler later in the series to find and fix these problems.

Doug Hull is a proud MathWorker who is on a mission to help you with MATLAB. These postings are the author's and don't necessarily represent the opinions of MathWorks.
# _IEFormElementGetObjByName error

## 14 posts in this topic

Hi all, I'm getting an error using the IE.au3 script, and I'd really like some help. My code is as follows:

```
#include <IE.au3>
; Create a new browser window and navigate to a page
$oIE = _IECreate()
_IENavigate($oIE, "http://7.associatel.pay.clickbank.net")
; ENTER GEO DETAILS
; Find the form and the form fields we are interested in
$o_form = _IEFormGetObjByIndex($oIE, 0)
$o_country = _IEFormElementGetObjByName($o_form, "ctry")
$o_zip = _IEFormElementGetObjByName($o_form, "zipc")
```

And the error:

```
line 848 (File "C:\etc etc etc")
If IsObj($o_object.e^ ERROR
Error: Unable to parse line
```

Another user was having the same error a while back and I don't recall if it was an issue of not knowing how to run the beta instead of the production release or if it was an issue with the IE.au3 download. Check here for discussion and resolution. For future reference, please include the full error report from AutoIt instead of editing it back -- it includes the version being used and could have other information that may not mean anything to you, but may be very important in debugging. Dale

Free Internet Tools: DebugBar, AutoIt IE Builder, HTTP UDF, MODIV2, IE Developer Toolbar, IEDocMon, Fiddler, HTML Validator, WGet, curl. Automate input type=file (Related). SciTe Debug mode - it's magic: #AutoIt3Wrapper_run_debug_mode=Y. "Doesn't work" needs to be ripped out of the troubleshooting lexicon. It means that what you tried did not produce the results you expected. It begs the questions 1) what did you try?, 2) what did you expect? and 3) what happened instead? Reproducer: a small (the smallest?) piece of stand-alone code that demonstrates your trouble.

Dale, sorted. I had set my filetypes in Windows Explorer to pick the beta, but no idea why that didn't work. Maybe it needs a reboot? The SciTe download is great - wish I'd found it sooner! Thanks everyone. Andy

#7 · Posted (edited)

Oops. Not sorted. I didn't realise that SciTe hid the errors in the bottom pane. Here's the output:

```
>"C:\Program Files\AutoIt3\SciTe\CompileAU3\CompileAU3.exe" /run /beta /ErrorStdOut /in "C:\Documents and Settings\apeacock\My Documents\Personal\To Sync\Web\clickbankchecker\desktopscript\checker.au3" /autoit3dir "C:\Program Files\AutoIt3\beta" /UserParams
>Running AU3Check...C:\Program Files\AutoIt3\SciTe\Defs\Unstable\Au3Check\au3check.dat
>AU3Check Ended. No Error(s).
>Running: (3.1.1.89):C:\Program Files\AutoIt3\beta\autoit3.exe "C:\Documents and Settings\apeacock\My Documents\Personal\To Sync\Web\clickbankchecker\desktopscript\checker.au3"
C:\Documents and Settings\apeacock\My Documents\Personal\To Sync\Web\clickbankchecker\desktopscript\IE.au3 (848) : ==> The requested action with this object has failed.:
If IsObj($o_object.elements.item ($s_name,$i_index)) Then
If ^ ERROR
>AutoIT3.exe ended.
>Exit code: 0    Time: 4.476
```

Any ideas?

Edit: I've also installed a clean copy of AutoIt current release, beta release, and SciTe on a new machine, and get the same problem. But if I edit the script to open Google.com and get a reference to the search box, it works fine. Could it be something weird about that clickbank page? Andy

Edited by Andrew Peacock

Well this is quite curious. I've never seen an issue in this code before and I see no obvious issues in the way the form is defined in the source.
I'll look into this further, but in the meantime you can access the form elements by index:

```
$o_country = _IEFormElementGetObjByIndex($o_form, 0)
$o_zip     = _IEFormElementGetObjByIndex($o_form, 1)
```

Dale

#9 · Posted (edited)

Well, I've found why this page fails when you try to query the FormElements by name. It is a very unique problem with this particular webpage and I don't believe it is in any way related to IE.au3. The page contains the following hidden form field:

```
<input type=hidden name=item value="007">
```

The fact that this field has a name of "item" appears to confuse the form elements collection in the DOM. I have not tested in VBScript, but I suspect that the $o_object.elements.item($s_name, $i_index) reference would fail in the same way. You'll need to work around this with the ByIndex suggestion that I provided. I do not consider this to be a bug in IE.au3 (although I may be able to trap such an error at some point in the future instead of cryptically aborting as it does now). Dale

Edit: I tested this in VBScript and it fails exactly the same way as it does in AutoIt. I checked the HTML 4.01 spec and could not find a restriction on the Name attribute other than this: "ID and NAME tokens must begin with a letter ([A-Za-z]) and may be followed by any number of letters, digits ([0-9]), hyphens ("-"), underscores ("_"), colons (":"), and periods (".")." It does not call out a restriction that it cannot be the string "item", so it could be a bug in the IE DOM, or it is documented somewhere else that I missed.

Edited by DaleHohm

Hi, I had the same problem with the proxy of my company. A proxy receives the HTTP request and rebuilds it. For example, the security policy in my company is the following: 1- the proxy receives the URL; 2- the proxy cuts the URL and isolates the domain name; 3- the proxy queries the DNS to know where it must send the request; 4- the proxy loads the page with a URL without the domain name. Maybe your firewall is doing this.
Please try it and let me know if it solves your problem. Regards, FabFly

PS: it's not an IE.au3 problem. I solved the same problem with this solution. Please look at this thread: http://www.autoitscript.com/forum/index.ph...14entry124214

Thanks both of you for your help. I'll go with the _IEFormElementGetObjByIndex solution for now. Andy

#12 · Posted (edited)

@FabFly The error message here and the one you experienced may look similar (both are caused by a problem with an object reference), but the solution for each is very different. The trouble in this case is the use of what appears to be a reserved word in the NAME of one of the form elements. Thanks, Dale

Edit: fixed typos

Edited by DaleHohm

Dale, hopefully you can help on the following - I've searched the forum, but can't find anything. Once that form is submitted, how do I get a reference to the new page and form that are presented? Regards, Andy

If the resulting page is displayed in the same browser window, then the reference that you have from your initial _IECreate or _IEAttach is still valid (i.e. $oIE in most of my examples). The references to objects on the page will have changed, however, and you'll need to use functions like _IEFormGetObjByIndex to get a new reference. If the form submission results in a new window being created, you'll need to get a reference to it with _IEAttach(). Dale
## Balmonster: What is the distance formula? (asked 2 years ago)

1. ilikephysics2: -b ± sqrt(b^2 - 4ac) / (2a)
2. oldrin.bataku: There are multiple... it depends on the space and what metric you want. The most common is the Euclidean distance, i.e. $\sqrt{\Delta x^2+\Delta y^2}$
3. dpaInc: it is an application of the Pythagorean theorem...
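For completeness, the two-point coordinate form of the Euclidean distance that the second answer abbreviates (a textbook identity, added here for illustration rather than quoted from any participant):

```latex
d = \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2},
\qquad\text{and in three dimensions}\qquad
d = \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2 + (z_2-z_1)^2}.
```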
In this post, we are going to show you how you can use your computer and Matlab to solve a system of many equations. If you want to solve an optimization problem in MATLAB, then use the Optimization Toolbox tools, or nlinfit, or the Curve Fitting Toolbox. (Generally, any code to implement Gauss-Newton that you will find on the file exchange is code written by novices, what I would consider poor code.) The Matlab function ode45 will be used; see also MatCont (documentation PDF) and the ODE solvers. I found it was useful to try writing out each method to practice working with Matlab.

Newton's Method and Loops, solving equations numerically: for the next few lectures we will focus on the problem of solving an equation f(x) = 0, and we will use a newton.m function to solve many such equations. These solvers can be used with the following syntax: [outputs] = function_handle(inputs), for example [t,state] = solver(@dstate, tspan, ICs, options) for a Matlab ODE algorithm (e.g., ode45). For guided practice and further exploration of how to use MATLAB files, watch Video Lecture 3: Using Files.

Each Newton step solves a linearized system J dx = -F, where J is the Jacobian matrix of partial derivatives of F with respect to x. Coincidentally, I had started to use MATLAB® for teaching several other subjects around this time. Quasi-Newton updates of the Hessian give dense matrices, which are impractical for large-scale problems; fminsearch is a derivative-free method based on the Nelder-Mead simplex (Kevin Carlberg, Optimization in Matlab). Newton-Raphson Method for Solving Nonlinear Equations: Matlab's function fzero combines bisection, secant and inverse quadratic interpolation and is "fail-safe".

My professor is asking us to use the Newton-Raphson Method to solve the Colebrook Equation using MATLAB for the friction factor, and to ensure that the results match values obtained from the Moody Diagram. Newton's method needs the derivative; this can be seen straight from the formula, where f'(x) is a necessary part of the iterative function. Today I am going to explain the Bisection method for finding the roots of a given equation; an accompanying M-file finds roots using the Bisection Method. (One rarely does this kind of calculation by hand any more.) The Newton-Raphson method, or Newton's method, is a powerful technique for solving equations numerically. Alternatives include graphing on a TI-83 and using the Find Root option.
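Since the post keeps referring to a newton.m function without showing one, here is a minimal sketch of what such an M-file might look like. The function name, argument order, and stopping rule are assumptions for illustration, not the post's actual code.

```matlab
function [x, iter] = newton(f, fprime, x0, tol, maxit)
% NEWTON  Minimal Newton-Raphson sketch: solve f(x) = 0 from the guess x0.
% Stops when |f(x)| < tol or after maxit iterations.
x = x0;
for iter = 1:maxit
    fx = f(x);
    if abs(fx) < tol
        return;                  % converged
    end
    x = x - fx/fprime(x);        % Newton step: x <- x - f(x)/f'(x)
end
warning('newton:noConvergence', 'No convergence in %d iterations.', maxit);
end
```

For example, to find the root of y = cos(x) between 0 and π, the call [r, n] = newton(@cos, @(x) -sin(x), 1, 1e-12, 50) returns r ≈ π/2 in a handful of iterations.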
Newton's method can also be combined with a line search. In fsolve-style interfaces, the nonlinear equations to solve are specified as a function handle or function name: fun is a function that accepts a vector x and returns a vector F, the nonlinear equations evaluated at x.

A classic single-equation exercise: find the positive minimum point of the function f(x) = x⁻² tan x by computing the zeros of f′ using Newton's method. MATLAB: M-files; Newton's Method (last revised March 2003). Introduction to M-files: in this session we learn the basics of working with M-files in MATLAB, so called because they use the file-name extension .m. NITSOL: A Newton Iterative Solver for Nonlinear Systems describes an algorithm for solving nonlinear systems. Background: Newton's method can be used to solve systems of nonlinear equations.

I need to solve the equation e^x = 3x in two ways, using the Bisection and Newton methods, so I need two codes. I know how to program Newton's method in Matlab, but I am still curious whether there is any built-in Newton solver in Matlab (or bisection method). Although this is the most basic nonlinear solver, it is surprisingly powerful. This is an open method, so it starts with a single initial estimate for the root. Newton's Method is an application of derivatives that allows us to approximate solutions to an equation; here f may be given by formulas, solutions of differential equations, experiments, or simulations.

I do not know how to solve nonlinear differential equations with Newton's method. It is to be noted that you can only make use of a marching method when you have the value of the initial condition of the differential equation you are trying to solve. The important thing to remember is that ode45 can only solve a first-order ODE; therefore, to solve a higher-order ODE, the ODE has to first be converted to a set of first-order ODEs.

The following MATLAB Answers post provides code that implements the Newton-Raphson method. The document contains MATLAB code for solving Kepler's equation and plotting the graph of eccentric anomaly against mean anomaly; the graph was plotted for 6 different eccentricity values.
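For the Kepler's-equation example just mentioned (solving M = E − e·sin E for the eccentric anomaly E), here is a hedged sketch; the eccentricity values, tolerance, and the conventional initial guess E₀ = M are assumptions, not the document's code.

```matlab
% Solve Kepler's equation  M = E - e*sin(E)  for E by Newton-Raphson,
% then plot E versus M for several eccentricities (cf. the 6 curves above).
e_list = [0.1 0.3 0.5 0.7 0.9 0.95];   % assumed eccentricity values
M = linspace(0, 2*pi, 200);
E = zeros(numel(e_list), numel(M));
for k = 1:numel(e_list)
    e  = e_list(k);
    Ek = M;                            % initial guess E0 = M
    for it = 1:50                      % Newton iterations
        dE = (Ek - e*sin(Ek) - M) ./ (1 - e*cos(Ek));
        Ek = Ek - dE;
        if max(abs(dE)) < 1e-12, break; end
    end
    E(k, :) = Ek;
end
plot(M, E); xlabel('Mean anomaly M'); ylabel('Eccentric anomaly E');
```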
Transfer nondefault options for the fmincon solver to options for the fminunc solver:

```
oldoptions = optimoptions(@fmincon, 'Algorithm', 'sqp', 'MaxIterations', 1500);
```

In simple terms, root-bracketing methods begin by attempting to evaluate a problem using test ("false") values for the variables, and then adjust the values accordingly. The following Matlab project contains the source code and Matlab examples used for a Newton-Raphson solver (one variant with adaptive step size); the key is matrix indexing instead of traditional linear indexing. The standard local-convergence statement: if the system has a solution (α, β) and (x₀, y₀) is an initial approximation that is sufficiently close to it, the Newton iteration converges to (α, β). I have 5 nodes in my model and 4 imaginary nodes for the finite difference method. Other possible approaches exist apart from what has already been mentioned. A unified framework, NLIGA (Non-Linear Isogeometric Analysis), is developed mainly for solving two- and three-dimensional nonlinear problems on the MATLAB platform by using isogeometric analysis (IGA); nonlinear hyperelastic and elastoplastic materials are primarily considered at this stage.

Around 1669, Isaac Newton (1643-1727) gave a new algorithm to solve a polynomial equation, and it was illustrated on the example y³ − 2y − 5 = 0. To find an accurate root of this equation, one must first guess a starting value, here y ≈ 2. In this essay we are interested in only one family of methods: Newton's methods. PV modeling: solve the current equation. Some solvers automatically generate random start points within bounds. We will see a second method (the Gauss-Seidel iteration method) for solving simultaneous equations in the next post. (In the Julia Optim package, this method is selected with method = :newton.) Study quasi-Newton and more modern limited-memory quasi-Newton methods to overcome the computational pitfalls of Newton's method. Chapter 1, Running Matlab (figure: the command window, labeled "Type your commands here"). The Solver category includes parameters for configuring a solver for a model. The Newton-Raphson method is used if the derivative fprime of func is provided; otherwise the secant method is used.

Question (Newton-Raphson and secant method): consider the equation f(x) = tan(πx) − x − 6. (a) Write a MATLAB function program that implements the Newton-Raphson method to solve f(x) = 0, in the spirit of the comment "% Newton Raphson solution of two nonlinear algebraic equations". Pipe Flow Analysis with Matlab (Gerald Recktenwald, January 28, 2007): this document describes a collection of Matlab programs for pipe flow analysis.

Problems and restrictions of Newton's method; Taylor-series analysis: writing the root as r = x₀ + h and expanding f(r) = f(x₀ + h) ≈ f(x₀) + h f′(x₀) = 0 gives the basic Newton's method algorithm: starting with a guess x₀, iterate xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ). The analysis of a function via calculus involves solving a variety of equations: f′(x) = 0 for critical points, f″(x) = 0 for possible inflection points (D. Meade, Department of Mathematics). I'm finding it very difficult to get my head around how best to express the following system of equations in MatLab in order to solve it. To apply Newton's method to a system of four equations in four unknowns, the sixteen components of the Jacobian matrix are also needed.
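For the system case just described, a compact sketch of multidimensional Newton follows; the example system and all names are illustrative assumptions, not the document's code.

```matlab
function demo_newton_sys
% Solve the illustrative system x^2 + y^2 = 4, x*y = 1 by Newton's method.
x = newton_sys(@resid, [2; 0.5], 1e-12, 50);
disp(x)
end

function [F, J] = resid(v)
% Residual vector F and its Jacobian J (partial derivatives of F wrt v).
x = v(1); y = v(2);
F = [x^2 + y^2 - 4;  x*y - 1];
J = [2*x, 2*y;  y, x];
end

function x = newton_sys(Ffun, x0, tol, maxit)
% NEWTON_SYS  Newton's method for a system F(x) = 0; Ffun returns [F, J].
x = x0(:);
for it = 1:maxit
    [F, J] = Ffun(x);
    dx = -J\F;                   % solve J*dx = -F for the Newton step
    x  = x + dx;
    if norm(dx) < tol, return; end
end
warning('newton_sys:noConvergence', 'No convergence in %d iterations.', maxit);
end
```

For a four-equation system like the one mentioned above, resid would return a length-4 F and the 4-by-4 (sixteen-component) Jacobian.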
For scalar equations, the standard teaching examples are easy to reproduce. A typical Newton-Raphson M-file finds the root of y = cos(x) between 0 and pi, and the matching bisection M-file solves the same problem from a bracketing interval. When typing the function and derivative, put multiplication signs between all things to be multiplied. Many implementations use Newton-Raphson when the derivative fprime of func is provided and otherwise fall back on the secant method. Setting problems up usually means writing short function files: for an ODE, define f(t,y) in the file f.m; for a root-finding study over the plane, choose a number of regularly spaced points in a square (the MATLAB meshgrid command is designed for that; it is a kind of two-dimensional linspace) and apply your newton.m to each point to see which root it converges to. A common assignment reads: write a MATLAB function that uses the Newton-Raphson method to solve a nonlinear system of equations; for a system of four equations in four unknowns, all sixteen components of the Jacobian matrix are needed. Two practical notes on the symbolic side: the output of solve can contain parameters from the input equations in addition to parameters introduced by solve, and parameters introduced by solve do not appear in the MATLAB workspace (use syms if you need them there). Finally, mixed-integer disciplined convex programs (MIDCPs) are a different beast: not all solvers support them, and those that do cannot guarantee a successful solution in reasonable time for all models. A bisection sketch follows.
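A minimal bisection sketch for the cos(x) example quoted above; the tolerance is an assumed value.

    % Bisection: find the root of y = cos(x) on [0, pi]
    f = @(x) cos(x);
    a = 0; b = pi;                 % f(a) > 0, f(b) < 0: a sign change
    while (b - a) > 1e-10
        c = (a + b)/2;
        if f(a)*f(c) <= 0          % root lies in [a, c]
            b = c;
        else                       % root lies in [c, b]
            a = c;
        end
    end
    root = (a + b)/2               % converges to pi/2

Each pass halves the interval, so the error shrinks by a factor of two per iteration: slower than Newton's quadratic convergence, but guaranteed once a sign change is bracketed.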
Where do these equations come from, and why does the method work? Physics supplies many of them: Newton's second law, Fnet = m*a, becomes Fnet = m*y'' in the one-dimensional case, and discretizing such equations produces algebraic systems; programming of finite difference methods in MATLAB for solving the Poisson equation on rectangular domains in two and three dimensions is a standard exercise. Calculus supplies more: the analysis of a function involves solving f'(x) = 0 for critical points and f''(x) = 0 for possible inflection points. Like so much of the differential calculus, Newton's method is based on the simple idea of linear approximation: expanding f about the current iterate x0 gives f(x0 + δ) ≈ f(x0) + f'(x0)δ, and setting the right-hand side to zero yields the step δ = -f(x0)/f'(x0). The method has well-known restrictions: the first guess is often outside the region of convergence, so the guess should be made intelligently, and when an analytic derivative is unavailable it is suggested that finite differencing be used to calculate function derivatives. (Newton polynomial interpolation, consisting of Newton's forward difference formula and Newton's backward difference formula, is a separate topic that shares only the name.) A typical exam question: consider the equation f(x) = tan(pi*x) - x - 6; write a MATLAB function program that implements the Newton-Raphson method to solve f(x) = 0. A natural follow-up exercise is to take newton.m and modify the code so that it implements the secant method, which replaces the derivative by a difference quotient; a sketch of that variant is below.
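A possible secant-method variant, as the exercise suggests; the two starting points and the tolerance are assumptions, and the test function is the tan(pi*x) - x - 6 example quoted above.

    % Secant method: derivative replaced by a difference quotient
    f  = @(x) tan(pi*x) - x - 6;
    x0 = 0.40; x1 = 0.45;                            % two initial estimates
    for k = 1:100
        x2 = x1 - f(x1)*(x1 - x0)/(f(x1) - f(x0));   % secant update
        x0 = x1; x1 = x2;
        if abs(f(x1)) < 1e-10, break, end
    end
    x1                                               % root near 0.4506 on this branch

Because tan(pi*x) has poles, both starting points should sit on the same branch; this is exactly the situation where an unlucky first guess lands outside the region of convergence.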
daessc (the solver for Simscape) computes the model's state at the next time step by solving the systems of differential-algebraic equations resulting from Simscape models; more generally, a solver computes a dynamic system's states at successive time steps over a specified time span. At the classroom end of the scale, a typical question asks how to solve an equation such as y = 1 + e^(-0.2x)*sin(x/2) on a given interval, and whether a separate loop is needed over the Newton-Raphson method; many tutorial videos demonstrate how to solve nonlinear systems of equations in MATLAB, and a fourth-order differential equation can likewise be reduced to algebra by the finite difference method. If you would rather not write your own iteration, MATLAB's function fzero combines bisection, the secant method, and inverse quadratic interpolation and is "fail-safe"; fminsearch(fun, x0, options, ...) finds the local minimum of fun near the guess x0, where fun names an M-file function; quadprog handles quadratic programming; and the solve command handles systems of linear equations symbolically. Quasi-Newton updates of the Hessian give dense matrices, which are impractical for large-scale problems, whereas fminsearch is a derivative-free method based on the Nelder-Mead simplex (Kevin Carlberg, Optimization in Matlab). The Newton method, properly used, usually homes in on a root with devastating efficiency, and the Armijo-Goldstein line search, a damping strategy for choosing the step length α, helps to improve convergence from bad initial guesses. Production-quality codes exist: NITSOL, "A Newton Iterative Solver for Nonlinear Systems" by Michael Pernice and Homer F. Walker, and TRESNEI, a trust-region solver that is adequate for constrained problems; the idea behind a home-grown My_fzero can come from combining "Personal Calculator Has Key to Solve Any Equation f(x) = 0" by Professor William M. Kahan with his "An Equation Solver". Beyond single roots, boundary value problems for systems of nonlinear ODEs can be attacked numerically the same way; fixed-point iteration and Newton's method can be illustrated in 2D and 3D alongside steepest-descent examples; the special MATLAB command quiver displays a vector plot of the resulting fields; source code for Gauss elimination in MATLAB can be used to solve any number of linear equations; and among the stationary iterations, Jacobi's method comes first, with Gauss-Seidel treated afterwards. Collections such as Pipe Flow Analysis with Matlab (Gerald Recktenwald, January 2007) show the same solvers at work in engineering practice. A damped-Newton sketch with simple backtracking follows.
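A hedged sketch of a damped Newton step with backtracking, in the spirit of the Armijo-Goldstein strategy mentioned above; the test system, constants, and starting point are all assumed for illustration.

    % Damped Newton with backtracking line search on ||F||
    F = @(x) [x(1)^2 + x(2)^2 - 4; x(1)*x(2) - 1];
    J = @(x) [2*x(1), 2*x(2); x(2), x(1)];
    x = [3; 1];                           % deliberately poor initial guess
    for k = 1:100
        s = -J(x)\F(x);                   % full Newton step
        a = 1;                            % damping factor (step length)
        while norm(F(x + a*s)) > (1 - 1e-4*a)*norm(F(x)) && a > 1e-8
            a = a/2;                      % backtrack until the residual decreases
        end
        x = x + a*s;
        if norm(F(x)) < 1e-12, break, end
    end
    disp(x')

The sufficient-decrease test on the residual norm is what rescues the iteration from bad initial guesses: far from the solution the full step may overshoot, so the step is shortened until the residual actually drops.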
On the ODE side, MATLAB has several different built-in functions for the numerical solution of ODEs, used with the syntax [t, state] = solver(@dstate, tspan, ICs, options), where solver is an algorithm such as ode45 or ode23, @dstate is a handle for the function containing the derivatives, and tspan is a vector that specifies the time span. The stiff solvers generate the Jacobian matrix and solve the set of algebraic equations at every time step using a Newton-like method, which is one more place Newton's method reappears. The important thing to remember is that ode45 can only solve a first-order ODE; therefore, to solve a higher-order ODE, it has to be first converted to a set of first-order ODEs, after which a numerical ODE solver is used as the main tool. For fitting, the Optimization Toolbox solves nonlinear least-squares (curve-fitting) problems in serial or parallel; before you begin to solve an optimization problem, you must choose between the problem-based and solver-based approaches, and when you have no constraints, lsqlin simply returns x = C\d. Other tools in the same family: fzero for scalar nonlinear zero finding; the solve command, a predefined function in MATLAB for symbolic equations; and the NAG Library's several routines for minimizing or maximizing a function using quasi-Newton algorithms. In engineering practice, pipe-flow programs take the pipe roughness, pipe diameter, volumetric flow rate, and kinematic viscosity as user-defined inputs in SI units; with such functions it is relatively easy to perform head-loss calculations, solve flow-rate problems, generate system curves, and find the design point for a system and pump. As a rule of thumb, use the Newton-Raphson method to solve a nonlinear equation when you can supply the derivative, and a carefully vetted Gauss-Newton code for small-residual fitting. The following example builds on the first-order codes to show how to handle a second-order equation.
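A hedged sketch of the higher-order-to-first-order conversion; the test problem y'' + y = 0 with y(0) = 1, y'(0) = 0 (exact solution y = cos(t)) is an assumed illustration.

    % Convert y'' + y = 0 into a first-order system and integrate with ode45
    dstate = @(t, s) [s(2);        % s(1) = y,  so s(1)' = y' = s(2)
                      -s(1)];      % s(2) = y', so s(2)' = y'' = -y
    tspan  = [0 10];
    ICs    = [1; 0];               % y(0) = 1, y'(0) = 0
    [t, s] = ode45(dstate, tspan, ICs);
    plot(t, s(:,1))                % numerical y(t), close to cos(t)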
How should a hand-written solver be organized? One clean layout uses three files: func.m for the function, a companion file that defines the derivative, and a newtonraphson driver; the inputs are assigned near the top (for instance, the input function is assigned to a variable a), and once you have saved the program, for example as newton.m, you call it from the command window. A typical assignment spells out the stopping rules: write a MATLAB code which uses the Newton-Raphson method to compute an approximate solution to the equation f(x) = 0, starting from x0 and stopping when the magnitude of f(x) becomes smaller than e; the program should restrict the maximum number of iterations to N. Near a simple root, Newton's method converges quadratically; compared to the other methods we consider, it is generally the fastest one, usually by far. The price is that it requires first-order derivatives, so methods that need function evaluations only are often preferred when derivatives are awkward. Two caveats carry over to least squares: for the Gauss-Newton algorithm to converge, the starting point U0 must be close enough to the solution, and for ill-posed inverse problems one can calculate a Tikhonov-regularized, Gauss-Newton nonlinear iterated inversion to solve the damped nonlinear least-squares problem. Solving a system of equations with two unknowns is a very easy cake to bite, but when the number of unknowns exceeds two, solving the system becomes complicated and time-consuming, which is where fsolve and friends earn their keep; classic exercises include the catenary problem via a MATLAB Newton-Raphson solver, and it is also a good idea to decompose a cubic-equation solver into a generic Newton's-method solver for any f. On the symbolic side, note that the Maple solve command gives just one solution, while MATLAB's solve can return several; to use the parameters in the MATLAB workspace, use syms. A minimal fsolve call looks like this.
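A possible fsolve call (fsolve is part of the Optimization Toolbox); the test system and starting guess are the same assumed example as earlier, and the output pattern follows the usage quoted in this section.

    % fsolve: nonlinear equations to solve, specified as a function handle
    ourfun = @(x) [x(1)^2 + x(2)^2 - 4;   % same illustrative system
                   x(1)*x(2) - 1];
    x0 = [2; 0.5];                        % starting guess
    [x, fval] = fsolve(ourfun, x0)        % fval returns the residuals at x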
In statistics, the same machinery shows up as a problem with the initial guess in Newton-Raphson maximum-likelihood estimation, for example when fitting a Weibull hazard model: a poor start can stall the iteration entirely. A robust remedy is a damped Newton method with adaptive step size; one such solver documents itself as follows: NewtonRaphson solves equations of the form F(X) = 0, where F and X may be scalars or vectors, and implements the damped Newton method with adaptive step size. Some optimization packages expose the classical Newton algorithm with an optional line search under a name such as method = :newton and accept a custom linesearch parameter, which must be a function computing the line search. In optimization proper, Newton's method is applied to the derivative f' of a twice-differentiable function f to find the roots of f'(x) = 0, also known as the stationary points of f; to evaluate an initial direction vector, it can be set to all 1's using the MATLAB ones function, with the rest coming from the right-hand-side function. The canonical black-box usage reads: to solve an equation such as y = 1 + e^(-0.2x)*sin(x/2) = 0 on a given interval, call [x, fval] = fsolve(@ourfun, x0); this says, by an iterative process, starting with the guess x0, approximate the vector x that satisfies the equations in the nonlinear vector function ourfun, printing the current residuals into the vector fval. The same pattern scales up: MATLAB is used to program power-flow solutions, with a graphical user interface (GUI) to help the user; two-dimensional elastic solid elements with large deformations (geometric nonlinearity) require a Newton solve at every load step; and within the finite element world, two classes of algorithms are used to solve the systems of linear equations that arise inside each nonlinear iteration. To make an optimization problem concrete, write an M-file that defines your objective and constraints, together with the gradient and Jacobian; Monod growth kinetics and curve fitting is a typical worked example. The Kepler application promised earlier looks like this.
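A hedged Newton sketch for Kepler's equation E - e*sin(E) = M, the orbital-mechanics application mentioned earlier; the eccentricity and mean anomaly are assumed sample values.

    % Newton iteration for Kepler's equation
    e = 0.3;  M = 1.2;            % sample eccentricity and mean anomaly (rad)
    E = M;                        % common starting guess
    for k = 1:50
        dE = (E - e*sin(E) - M)/(1 - e*cos(E));   % Newton step
        E  = E - dE;
        if abs(dE) < 1e-12, break, end
    end
    E                             % eccentric anomaly

Sweeping e over several values and plotting E against M reproduces the kind of figure described earlier, with one curve per eccentricity.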
Around 1669, Isaac Newton (1643-1727) gave a new algorithm to solve a polynomial equation, illustrated on the example y^3 - 2y - 5 = 0; to find an accurate root, one must first guess a starting value, here y ≈ 2, which is the same recipe we still follow today. To use a modern library solver, the user must supply a routine to evaluate the function vector; to experiment by hand, open a new M-file and type the iteration yourself. Inside Simulink, the Solver pane includes parameters for configuring a solver for a model; daessc, discussed above, handles Simscape's differential-algebraic systems, and while a fixed-step solver takes uniform steps, Simulink will reduce the step size at zero crossings for accuracy. Between Newton and bisection sits the quasi-Newton family, sometimes described as a generalization of the secant method; studying quasi-Newton and the more modern limited-memory quasi-Newton methods is the usual way to overcome the computational pitfalls of Newton. If you need an industrial-strength nonlinear optimizer, first download the IPOPT mex- and m-files and extract them to your MATLAB search path. Lecture slides on all of this are often prepared for students who have not yet learned MATLAB, with time set aside to try numerous example problems. The quickest win of all, though, is the built-in scalar solver, shown below.
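A minimal fzero call for the running example; the bracketing interval is an assumed choice that contains a sign change.

    % fzero: MATLAB's fail-safe scalar zero finder
    f = @(x) exp(x) - 3*x;         % same equation as at the start
    x = fzero(f, [0 1])            % f(0) = 1 > 0, f(1) = e - 3 < 0, root near 0.6191

Passing an interval with a sign change makes fzero behave like a safeguarded bisection; passing a single scalar guess instead lets it search for a bracket on its own.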
# Hard boundary of Heisenberg's uncertainty principle

Heisenberg's uncertainty principle states that we cannot know the position and momentum of subatomic particles simultaneously... but what exactly is the size boundary for such a particle? Does such a boundary even exist, or is it simply defined as all particles in the Standard Model?

• The HUP is still valid for billiard balls and planets, just not as useful. Jul 9 '19 at 18:19
• What do you mean? We can know both their momenta and positions simultaneously. Jul 9 '19 at 18:20
• No, you cannot, but it's irrelevant for macroscopic objects, because your margin of error is much worse. Jul 9 '19 at 18:30

You can use the uncertainty principle for everything; what determines the error of measurement, though, is the $$mass$$ of an object, not its size. In the classical limit (a very rough estimate indeed) we can write the HUP like this: $$\Delta x \Delta p\geq \hbar/2 \rightarrow \Delta x \Delta v\geq \hbar/(2m)$$ As the mass increases, the right side of the inequality decreases. In the case of macroscopic objects it is safe to assume it becomes effectively zero (just look at the $$\hbar$$ scale), so according to the HUP you can measure the velocity and position of such an object simultaneously, without a noticeable error. In the case of a microscopic object, though, the right side becomes big enough to make us believe that if we measure position or velocity, we will "mess with" the other greatly. In other words, the momentum of macroscopic objects is big enough (because of the mass) that we don't care about errors on the scale of $$\hbar$$. Do note that this is not a technical answer, but it should be good enough for a layman, in my opinion. The truth is you should solve the Schrödinger equation for macroscopic objects and find the true values of $$\Delta x$$ and $$\Delta p$$. You will see that both of them are tiny (most of the time, at least) due to the mass, or other classical limits.

For objects of practical size this uncertainty is irrelevant, as measurement error is much greater than the uncertainty. For example, consider a ball of mass 1 kg moving at 1 m/s. Using the uncertainty principle, we get an uncertainty in position of the order of $$10^{-35}\,m$$. This is such a small quantity that the error in measurement will be orders of magnitude greater, and therefore the uncertainty principle is irrelevant.
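As a quick worked check of that estimate (my own arithmetic, taking the momentum uncertainty scale as $$m\,\Delta v$$ with an assumed $$\Delta v \approx 1\,\mathrm{m/s}$$):

$$\Delta x \;\gtrsim\; \frac{\hbar}{2m\,\Delta v} \;=\; \frac{1.05\times10^{-34}\,\mathrm{J\,s}}{2\times 1\,\mathrm{kg}\times 1\,\mathrm{m/s}} \;\approx\; 5\times10^{-35}\,\mathrm{m},$$

roughly twenty orders of magnitude below anything a ruler or an interferometer could resolve on a billiard ball.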
# NULLIF() – practice 2

## Instruction

Okay! One more exercise on NULLIF() and we'll go on to another section.

## Exercise

Mr Amund wants to release a new promotion. The promotion is as follows: each customer who orders a product and picks it up on their own (i.e., no shipping required) can buy it at a special price: the initial Price minus the ShippingCost! If the ShippingCost is greater than the price itself, then the customer still pays the difference (i.e., the absolute value of Price - ShippingCost).

Our customer has 1000.00 USD and he wants to know how many products of each kind he could buy. Show each product Name with the column Quantity, which calculates the number of products. Get rid of the decimal part. If you get a price of 0.00, show NULL instead.

### Stuck? Here's a hint!

Use FLOOR() to get rid of the unwanted decimal part. Use ABS() to get the absolute value. NULLIF() is what turns an effective price of 0.00 into NULL, which also protects the division from a divide-by-zero error.
# probability

Problem 2. Continuous Random Variables (2 points possible, graded, results hidden)

1. Let $X$ and $Y$ be independent continuous random variables that are uniformly distributed on $(0,1)$. Let $H = (X+2)Y$. Find the probability $\mathbf{P}(\ln H \ge z)$, where $z$ is a given number that satisfies $e^z < 2$. Your answer should be a function of $z$. Hint: condition on $X$.

2. Let $X$ be a standard normal random variable, and let $F_X(x)$ be its CDF. Consider the random variable $Z = F_X(X)$. Find the PDF $f_Z(z)$ of $Z$. Note that $Z$ takes values in $(0,1)$.
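For reference, here is one way to work part 1 by conditioning on $X$ as the hint suggests; this derivation is mine, not part of the original page. Given $X = x$, the product $(x+2)Y$ is uniform on $(0, x+2)$, and $e^z < 2 \le x+2$ guarantees $e^z/(x+2) \le 1$, so

$$\mathbf{P}(\ln H \ge z) = \int_0^1 \mathbf{P}\!\left(Y \ge \frac{e^z}{x+2}\right) dx = \int_0^1 \left(1 - \frac{e^z}{x+2}\right) dx = 1 - e^z \ln\frac{3}{2}.$$

For part 2, the probability integral transform gives the standard answer: $Z = F_X(X)$ is uniform on $(0,1)$, so $f_Z(z) = 1$ for $z \in (0,1)$.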
# GRAS 2015 Online Submission Form

SUBMISSION DEADLINE: 11:59 PM (MST) Friday, February 27th, 2015

Please fill out all fields and select one option from the drop-down menus. When finished, click the SUBMIT button at the bottom of the form. If a required item is missing, your abstract will not be accepted. Proofread your abstract carefully prior to submission. Revisions, corrections, or additions will not be allowed once this form is submitted.

Please note:
- Presentations will be 10 minutes long, followed by an additional 5 minutes for questions. PCs (and only PCs) will be available for presentations.
- Projectors for PowerPoint presentations will be available. You do NOT need to request extra equipment if this is all you need.
- If you would like your presentation uploaded ahead of time, email it to gras.nmsu@gmail.com by Monday, March 16.
- If you upload your talk the day of your presentation, it must be on a virus-free CD or USB Flash Drive.
- Posters should be a maximum of 42" x 42" (107 x 107 cm). Poster stands will be provided.
- Tables and extension cords are available upon request for exhibits.

If you have any questions, e-mail the GRAS coordinators.

Italics, subscripts, and superscripts are not preserved when submitted through this form unless they are in the following forms:
  \emph{text in italics} produces text in italics
  variable$_{subscript}$ produces variablesubscript
  variable$^{superscript}$ produces variablesuperscript

If you would like the following text in your abstract:
  It is known a priori that E=mc2 and (NH4)2SO4 is ammonium sulphate.
you should submit this text in your abstract:
  It is known \emph{a priori} that E=mc$^{2}$ and (NH$_{4}$)$_{2}$SO$_{4}$ is ammonium sulphate.

* Presenter's First Name:
Presenter's Middle Name:
* Presenter's Last Name:
* Presenter's Affiliation: Select One (New Mexico State University, Dona Ana Community College, Eastern New Mexico University, Western New Mexico University, New Mexico Tech, University of New Mexico, University of Texas - El Paso, Texas Tech, University of Arizona, Arizona State University, Other - specify in comments section at bottom of form)
* Presenter's Student Status: Select One (Graduate, Undergraduate)
Program Sponsor (Undergraduates Only): e.g., RISE, AMP, McNair, MARC, Honors thesis
Co-authors or Co-Presenters (if any):
* Phone Number:
* Email:
* Department:
* Title of Submission: DO NOT PUT IN ALL CAPS
* Submission Type: Select One (Research Poster, Research Talk, Art Poster, Art Performance, Exhibit)
Times Available to Present (talks only): We will do our best to accommodate your schedule, but cannot guarantee avoidance of all conflicts.
Check all Wednesday AM / Check all Wednesday PM

Wednesday AM slots (15 minutes each):
8:00 - 8:15, 8:15 - 8:30, 8:30 - 8:45, 8:45 - 9:00, 9:00 - 9:15, 9:15 - 9:30, 9:30 - 9:45, 9:45 - 10:00, 10:00 - 10:15, 10:15 - 10:30, 10:30 - 10:45, 10:45 - 11:00

Wednesday PM slots (15 minutes each):
1:00 - 1:15, 1:15 - 1:30, 1:30 - 1:45, 1:45 - 2:00, 2:00 - 2:15, 2:15 - 2:30, 2:30 - 2:45, 2:45 - 3:00, 3:00 - 3:15, 3:15 - 3:30, 3:30 - 3:45, 3:45 - 4:00, 4:00 - 4:15, 4:15 - 4:30

Include any additional comments on your availability. Example: Although I am available from 8-11 Wednesday, I would prefer to present from 10-11 if possible.

* Extra Equipment Needed: Select One (None, Overhead for Transparencies, Old School Slide Projector, Table for Exhibit, Other - specify in the box below)

* Abstract (250 words or less): Please adhere to the instructions above regarding italics, subscripts, and superscripts.

Comments and/or Food Allergies:
# A Tricky Word Problem, Assistance please?

#### Mia

##### New member

So this is the problem that has got me and most of my peers stumped.

Johnny and Ernesto were participating in a 30 mile time trial on their bikes. Every 30 seconds a new rider would leave the finish line. At 9:05 am Ernesto starts riding at the speed of 440 yards per minute. At 9:20 am, Johnny starts riding but is going 20% faster than Ernesto. At what time will Johnny catch Ernesto? How many miles will they each have ridden at that time?

I haven't figured out much. Could someone please help me and explain all of the steps it took to get that answer? Thank you!

#### tkhunny

##### Moderator
Staff member

If you have figured out anything, you have not shared it with us.

1) Name Stuff. Write Down clear and useful definitions. I'll get you started.

Distance = Rate * Time
D = the distance both must travel to be in the same place. Note: 0 miles < D < 30 miles
T = time Johnny travels
E = time Ernesto travels = T + 15 min
S = Ernesto's speed = 440 ypm
U = Johnny's speed = 1.2*S

The question asks for T, E, and D. Let's see what you get.

#### soroban

##### Elite Member

Hello, Mia!

For these "catch-up" problems, I have a back-door approach.

Johnny and Ernesto were participating in a 30-mile time trial on their bikes. .Irrelevant!
Every 30 seconds a new rider would leave the finish line. .Is this necessary?
At 9:05 am Ernesto starts riding at the speed of 440 yards per minute.
At 9:20 am, Johnny starts riding but is going 20% faster than Ernesto.
At what time will Johnny catch Ernesto? How many miles will they each have ridden at that time?

I must assume that they start at the same place.

$$\displaystyle \text{Ernesto has a 15 minute headstart.}$$
$$\displaystyle \text{In that time he has gone: }\,15\times 440 \:=\:6600\text{ yards.}$$
$$\displaystyle \text{Then Johnny starts at a speed which is: }20\% \times 440 \:=\:88\text{ yd/min faster.}$$
$$\displaystyle \text{Johnny is gaining on Ernesto at the rate of 88 yd/min.}$$
$$\displaystyle \text{It is }as\:i\!f\text{ Ernesto has } stopped\text{ and Johnny approaches him at 88 yd/min.}$$
$$\displaystyle \text{How long does it take Johnny to cover the 6600 yards?}$$
. . $$\displaystyle \text{It takes him: }\:\frac{6600}{88} \,=\,\boxed{75\text{ minutes}}$$
$$\displaystyle \text{So Johnny catches Ernesto 75 minutes after 9:20 am, that is, at 10:35 am.}$$
$$\displaystyle \text{Ernesto had already traveled 6600 yards.}$$
$$\displaystyle \text{In the next 75 minutes, he travels: }\:75\times 440 \,=\,33,\!000\text{ yards.}$$
$$\displaystyle \text{Therefore, Ernesto's total distance is: }\:6,\!600 + 33,\!000 \:=\:\boxed{39,600\text{ yards}}\:=\:39,\!600 \div 1,\!760\:=\:22.5\text{ miles.}$$
. . $$\displaystyle \text{(And, of course, Johnny's total distance is the same.)}$$

#### tkhunny

##### Moderator
Staff member

All done!!!

#### Denis

##### Senior Member

Soroban's alternative is nice, but if the intent is for you to LEARN by using the regular formula speed = distance / time (learn to skate before playing hockey!), then:

Code:
Ernesto:.....@440.........[d]...............> x hours
Johnny:......@528.........[d]...............> x - 1/4 hours

Since speed = distance / time, then distance [d] = speed * time; so:
d = 440x (Ernesto)
d = 528(x - 1/4) (Johnny); and you have:
440x = 528(x - 1/4)
Solve for x.

This is probably what your teacher expects; for YOU to do at least some work :idea:

I suggest you don't hand in Soroban's, but complete the above instead; you'll have egg on your face if teacher asks you to redo it using the regular formula!
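For completeness, a quick numeric check of the thread's answer in MATLAB (hypothetical helper code, not posted in the thread):

    % Catch-up check: speeds in yd/min, Ernesto has a 15 min head start
    vE = 440;  vJ = 1.2*vE;            % Ernesto and Johnny (528 yd/min)
    t  = 15*vE/(vJ - vE)               % 75 min for Johnny to close the gap
    d  = vJ*t/1760                     % 22.5 miles (1760 yd per mile)
    % Johnny starts at 9:20 am, so he catches Ernesto at 10:35 am.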
• ### Family of Bell-like inequalities as device-independent witnesses for entanglement depth(1411.7385) July 1, 2018 quant-ph We present a simple family of Bell inequalities applicable to a scenario involving arbitrarily many parties, each of which performs two binary-outcome measurements. We show that these inequalities are members of the complete set of full-correlation Bell inequalities discovered by Werner-Wolf-Zukowski-Brukner. For scenarios involving a small number of parties, we further verify that these inequalities are facet-defining for the convex set of Bell-local correlations. Moreover, we show that the amount of quantum violation of these inequalities naturally manifests the extent to which the underlying system is genuinely many-body entangled. In other words, our Bell inequalities, when supplemented with the appropriate quantum bounds, naturally serve as device-independent witnesses for entanglement depth, allowing one to certify genuine k-partite entanglement in an arbitrary $n\ge k$-partite scenario without relying on any assumption about the measurements being performed, nor the dimension of the underlying physical system. A brief comparison is made between our witnesses and those based on some other Bell inequalities, as well as the quantum Fisher information. A family of witnesses for genuine k-partite nonlocality applicable to an arbitrary $n\ge k$-partite scenario based on our Bell inequalities is also presented. • ### Macroscopic quantum states: measures, fragility and implementations(1706.06173) June 19, 2018 quant-ph Large-scale quantum effects have always played an important role in the foundations of quantum theory. With recent experimental progress and the aspiration for quantum enhanced applications, the interest in macroscopic quantum effects has been reinforced. In this review, we critically analyze and discuss measures aiming to quantify various aspects of macroscopic quantumness. We survey recent results on the difficulties and prospects to create, maintain and detect macroscopic quantum states. The role of macroscopic quantum states in foundational questions as well as practical applications is outlined. Finally, we present past and on-going experimental advances aiming to generate and observe macroscopic quantum states. • ### Semi-device-independent characterisation of multipartite entangled states and measurements(1805.00377) May 1, 2018 quant-ph The semi-device-independent framework allows one to draw conclusions about properties of an unknown quantum system under weak assumptions. Here we present a semi-device-independent scheme for the characterisation of multipartite entanglement based around a game played by several isolated parties whose devices are uncharacterised beyond an assumption about the dimension of their Hilbert spaces. Our scheme can certify that an $n$-partite high-dimensional quantum state features genuine multipartite entanglement. Moreover, the scheme can certify that a joint measurement on $n$ subsystems is entangled, and provides a lower bound on the number of entangled measurement operators. These tests are strongly robust to noise, and even optimal for certain classes of states and measurements, as we demonstrate with illustrative examples. Notably, our scheme allows for the certification of many entangled states admitting a local model, which therefore cannot violate any Bell inequality. 
• ### Insufficiency of avoided crossings for witnessing large-scale quantum coherence in flux qubits(1801.03441) April 27, 2018 quant-ph Do experiments based on superconducting loops segmented with Josephson junctions (e.g., flux qubits) show macroscopic quantum behavior in the sense of Schr\"odinger's cat example? Various arguments based on microscopic and phenomenological models were recently adduced in this debate. We approach this problem by adapting (to flux qubits) the framework of large-scale quantum coherence, which was already successfully applied to spin ensembles and photonic systems. We show that contemporary experiments might show quantum coherence more than 100 times larger than experiments in the classical regime. However, we argue that the often-used demonstration of an avoided crossing in the energy spectrum is not sufficient to draw a conclusion about the presence of large-scale quantum coherence. Alternative, rigorous witnesses are proposed. • ### Indeterminism in Physics, Classical Chaos and Bohmian Mechanics. Are Real Numbers Really Real?(1803.06824) March 19, 2018 quant-ph, physics.hist-ph It is usual to identify initial conditions of classical dynamical systems with mathematical real numbers. However, almost all real numbers contain an infinite amount of information. Since a finite volume of space can't contain more than a finite amount of information, I argue that the mathematical real numbers are not physically real. Moreover, a better terminology for the so-called real numbers is "random numbers", as their series of bits are truly random. I propose an alternative classical mechanics that uses only finite-information numbers. This alternative classical mechanics is non-deterministic, despite the use of deterministic equations, in a way similar to quantum theory. Interestingly, both alternative classical mechanics and quantum theories can be supplemented by additional variables in such a way that the supplemented theory is deterministic. Most physicists straightforwardly supplement classical theory with real numbers to which they attribute physical existence, while most physicists reject Bohmian mechanics as supplemented quantum theory, arguing that Bohmian positions have no physical reality. I argue that it is more economical and natural to accept non-determinism with potentialities as a real mode of existence, both for classical and quantum physics. • ### Characterization of the hyperfine interaction of the excited $^5$D$_0$ state of Eu$^{3+}$:Y$_2$SiO$_5$(1710.07591) March 16, 2018 quant-ph, physics.atom-ph We characterize the europium (Eu$^{3+}$) hyperfine interaction of the excited state ($^5$D$_0$) and determine its effective spin Hamiltonian parameters for the Zeeman and quadrupole tensors. An optical free induction decay method is used to measure all hyperfine splittings under weak external magnetic field (up to 10 mT) for various field orientations. On the basis of the determined Hamiltonian we discuss the possibility to predict optical transition probabilities between hyperfine levels for the $^7$F$_{0} \longleftrightarrow ^5$D$_{0}$ transition. The obtained results provide the necessary information to realize an optical quantum memory scheme which utilizes the long spin coherence properties of the $^{151}$Eu$^{3+}$:Y$_2$SiO$_5$ material under external magnetic fields. • ### From Quantum Foundations to Applications and Back(1802.00736) Feb. 2, 2018 quant-ph Quantum non-locality has been an extremely fruitful subject of research, leading the scientific revolution towards quantum information science, in particular to device-independent quantum information processing. We argue that the time is ripe to work on another basic problem in the foundations of quantum physics, the quantum measurement problem, which should produce good physics in theoretical, mathematical, experimental and applied physics alike. We briefly review how quantum non-locality contributed to physics (including some outstanding open problems) and suggest ways in which questions around Macroscopic Quantumness could equally contribute to all aspects of physics. • ### Robust Macroscopic Quantum Measurements in the presence of limited control and knowledge(1711.01105) Jan. 16, 2018 quant-ph Quantum measurements have intrinsic properties which seem incompatible with our everyday-life macroscopic measurements. Macroscopic Quantum Measurement (MQM) is a concept that aims at bridging the gap between well-understood microscopic quantum measurements and macroscopic classical measurements. In this paper, we focus on the task of estimating the polarization direction of a system of $N$ spin-$1/2$ particles and investigate the model some of us proposed in Barnea et al., 2017. This model is based on a von Neumann pointer measurement, where each spin component of the system is coupled to one of the three spatial components of a pointer. It shows traits of a classical measurement for an intermediate coupling strength. We investigate relaxations of the assumptions on the initial knowledge about the state and on the control over the MQM. We show that the model is robust with regard to these relaxations. It performs well for thermal states and a lack of knowledge about the size of the system. Furthermore, a lack of control on the MQM can be compensated by repeated "ultra-weak" measurements. • ### Simultaneous coherence enhancement of optical and microwave transitions in solid-state electronic spins(1712.08615) Solid-state electronic spins are extensively studied in quantum information science, both for quantum computation, sensing and communication. Electronic spins are highly interesting due to their large magnetic moments, which offer fast operations for computing and communication, and high sensitivity for sensing. However, the large moment also implies higher sensitivity to a noisy magnetic environment, which often reduces coherence times. Yet, material preparation of the spectroscopic properties of electronic spins, e.g. using clock transitions and isotopic engineering, can yield remarkable spin coherence times, as for electronic spins in GaAs, donors in silicon and vacancy centres in diamond. However, it has not been demonstrated for any material that such coherence-enhancement techniques can be engineered simultaneously for transitions in the spin and optical domains. Here we demonstrate simultaneously induced clock transitions for both microwave and optical domains in an isotopically purified $^{171}$Yb$^{3+}$:Y$_2$SiO$_5$ crystal, reaching coherence times of above 100 $\mu$s and 1~ms in the optical and microwave domain, respectively. This effect is due to the highly anisotropic hyperfine interaction in $^{171}$Yb$^{3+}$:Y$_2$SiO$_5$, which makes each electronic state an entangled Bell state.
In particular, our results show the great potential of $^{171}$Yb$^{3+}$:Y$_2$SiO$_5$ for quantum processing applications relying on both optical and spin manipulation, such as optical quantum memories, microwave-to-optical quantum transducers, and single spin detection. In general, similar effects should also be observable in a range of different materials with anisotropic hyperfine interaction. • ### Spectroscopic study of hyperfine properties in $^{171}$Yb$^{3+}$:Y$_2$SiO$_5$(1712.08616) Rare-earth-ion doped crystals are promising systems for quantum communication and quantum information processing. In particular, paramagnetic rare-earth centres can be utilized to realize quantum coherent interfaces simultaneously for optical and microwave photons. In this article, we study hyperfine and magnetic properties of a Y$_2$SiO$_5$ crystal doped with $^{171}$Yb$^{3+}$ ions. This isotope is particularly interesting since it is the only rare-earth ion having electronic spin $S=\frac{1}{2}$ and nuclear spin $I=\frac{1}{2}$, which results in the simplest possible hyperfine level structure. In this work we determine the hyperfine tensors for the ground and excited states on the optical $^2$F$_{7/2}(0) \longleftrightarrow ^2$F$_{5/2}$(0) transition by combining spectral holeburning and optically detected magnetic resonance techniques. The resulting spin Hamiltonians correctly predict the magnetic-field dependence of all observed optical-hyperfine transitions, from zero applied field to the high-field regime where the Zeeman interaction dominates. Using the optical absorption spectrum we can also determine the order of the hyperfine levels in both states. These results pave the way for realizing solid-state optical and microwave quantum memories based on a $^{171}$Yb$^{3+}$:Y$_2$SiO$_5$ crystal. • ### Quantification of multidimensional entanglement stored in a crystal(1609.05033) Oct. 9, 2017 quant-ph The use of multidimensional entanglement opens new perspectives for quantum information processing. However, an important challenge in practice is to certify and characterize multidimensional entanglement from measurement data that are typically limited. Here, we report the certification and quantification of two-photon multidimensional energy-time entanglement between many temporal modes, after one photon has been stored in a crystal. We develop a method for entanglement quantification which makes use of only sparse data obtained with limited resources. This allows us to efficiently certify an entanglement of formation of 1.18 ebits after performing quantum storage. The theoretical methods we develop can be readily extended to a wide range of experimental platforms, while our experimental results demonstrate the suitability of energy-time multidimensional entanglement for a quantum repeater architecture. • ### Quantum Measurements, Energy Conservation and Quantum Clocks(1709.10472) Sept. 29, 2017 quant-ph We consider a spin chain extending from Alice to Bob with nearest-neighbor interactions, initially in its ground state. Assuming that Bob measures the last spin of the chain, the energy of the spin chain has to increase, at least on average, due to the measurement disturbance. Presumably, the energy is provided by Bob's measurement apparatus. Assuming now that, simultaneously with Bob's measurement, Alice measures the first spin, we show that either energy is not conserved (implausible) or the projection postulate does not apply, and that there is signalling.
An explicit measurement model shows that energy is conserved (as expected), but that the spin chain's energy increase is not provided by the measurement apparatus(es), that the projection postulate is not always valid, illustrating the Wigner-Araki-Yanase (WAY) theorem, and that there is indeed signalling. The signalling is due to the non-local interaction Hamiltonian. This raises the question of a suitable quantum-information-inspired model of such non-local Hamiltonians. • ### Universal bound on the cardinality of local hidden variables in networks(1709.00707) Sept. 3, 2017 quant-ph We present an algebraic description of the sets of local correlations in arbitrary networks, when the parties have finite inputs and outputs. We consider networks generalizing the usual Bell scenarios by the presence of multiple uncorrelated sources. We prove a finite upper bound on the cardinality of the value sets of the local hidden variables. Consequently, we find that the sets of local correlations are connected, closed and semialgebraic, and bounded by tight polynomial Bell-like inequalities. • ### The Elegant Joint Quantum Measurement and some conjectures about N-locality in the Triangle and other Configurations(1708.05556) Aug. 18, 2017 quant-ph In order to study N-locality without inputs in long lines and in configurations with loops, e.g. the triangle, we introduce a natural joint measurement on two qubits different from the usual Bell state measurement. The resulting quantum probability $p(a_1,a_2,...,a_N)$ has interesting features. In particular, the probability that all results are equal is so large, while respecting full symmetry, that it seems highly implausible that one could reproduce it with any N-local model, though - unfortunately - I have been unable to prove it. • ### Macroscopic Quantum Measurements of noncommuting observables(1605.05956) July 18, 2017 quant-ph Assuming a well-behaving quantum-to-classical transition, measuring large quantum systems should be highly informative with low measurement-induced disturbance, while the coupling between system and measurement apparatus is "fairly simple" and weak. Here, we show that this is indeed possible within the formalism of quantum mechanics. We discuss an example of estimating the collective magnetization of a spin ensemble by simultaneously measuring three orthogonal spin directions. For the task of estimating the direction of a spin-coherent state, we find that the average guessing fidelity and the system disturbance are nonmonotonic functions of the coupling strength. Strikingly, we discover an intermediate regime for the coupling strength where the guessing fidelity is quasi-optimal, while the measured state is almost undisturbed. • ### On the inequivalence of the CH and CHSH inequalities due to finite statistics(1610.01833) June 15, 2017 quant-ph, math-ph, math.MP Different variants of a Bell inequality, such as CHSH and CH, are known to be equivalent when evaluated on nonsignaling outcome probability distributions. However, in experimental setups, the outcome probability distributions are estimated using a finite number of samples. Therefore the nonsignaling conditions are only approximately satisfied and the robustness of the violation depends on the chosen inequality variant. We explain this phenomenon using the decomposition of the space of outcome probability distributions under the action of the symmetry group of the scenario, and propose a method to optimize the statistical robustness of a Bell inequality.
In the process, we describe the finite group composed of relabelings of parties, measurement settings and outcomes, and identify correspondences between the irreducible representations of this group and properties of outcome probability distributions such as normalization, signaling or having uniform marginals. • ### Multi-mode and long-lived quantum correlations between photons and spins in a crystal(1705.03679) May 10, 2017 quant-ph The realization of quantum networks and quantum repeaters remains an outstanding challenge in quantum communication. These rely on entanglement of remote matter systems, which in turn requires creation of quantum correlations between a single photon and a matter system. A practical way to establish such correlations is via spontaneous Raman scattering in atomic ensembles, known as the DLCZ scheme. However, time multiplexing is inherently difficult using this method, which leads to low communication rates even in theory. Moreover, it is desirable to find solid-state ensembles where such matter-photon correlations could be generated. Here we demonstrate quantum correlations between a single photon and a spin excitation in up to 12 temporal modes, in a $^{151}$Eu$^{3+}$ doped Y$_2$SiO$_5$ crystal, using a novel DLCZ approach that is inherently multimode. After a storage time of 1 ms, the spin excitation is converted into a second photon. The quantum correlation of the generated photon pair is verified by violating a Cauchy-Schwarz inequality. Our results show that solid-state rare-earth crystals could be used to generate remote multi-mode entanglement, an important resource for future quantum networks. • ### Collapse. What else?(1701.08300) April 25, 2017 quant-ph We present the quantum measurement problem as a serious physics problem. Serious because without a resolution, quantum theory is not complete, as it does not tell how one should - in principle - perform measurements. It is physical in the sense that the solution will bring new physics, i.e. new testable predictions; hence it is not merely a matter of interpretation of a frozen formalism. I argue that the two popular ways around the measurement problem, many-worlds and Bohmian-like mechanics, do, de facto, introduce effective collapses when "I" interact with the quantum system. Hence, surprisingly, in many-worlds and Bohmian mechanics, the "I" plays a more active role than in alternative models, like e.g. collapse models. Finally, I argue that either there are several kinds of stuff out there, i.e. physical dualism, some stuff that respects the superposition principle and some that doesn't, or there are special configurations of atoms and photons for which the superposition principle breaks down. Or, and this I argue is the most promising, the dynamics has to be modified, i.e. in the form of a stochastic Schr\"odinger equation. • ### General measure for macroscopic quantum states beyond "dead and alive"(1704.02270) April 7, 2017 quant-ph We consider the characterization of quantum superposition states beyond the pattern "dead and alive". We propose a measure that is applicable to superpositions of multiple macroscopically distinct states, superpositions with different weights as well as mixed states. The measure is based on the mutual information to characterize the distinguishability between multiple superposition states.
This allows us to overcome limitations of previous proposals, and to bridge the gap between general measures for macroscopic quantumness and measures for Schr\"odinger-cat type superpositions. We discuss a number of relevant examples, provide an alternative definition using basis-dependent quantum discord and reveal connections to other proposals in the literature. Finally, we also show the connection between the size of quantum states as quantified by our measure and their vulnerability to noise. • ### Experimental certification of millions of genuinely entangled atoms in a solid(1703.04704) Quantum theory predicts that entanglement can also persist in macroscopic physical systems, although difficulties in demonstrating it experimentally remain. Recently, significant progress has been achieved and genuine entanglement between up to 2900 atoms was reported. Here we demonstrate 16 million genuinely entangled atoms in a solid-state quantum memory prepared by the heralded absorption of a single photon. We develop an entanglement witness for quantifying the number of genuinely entangled particles based on the collective effect of directed emission combined with the nonclassical nature of the emitted light. The method is applicable to a wide range of physical systems and is effective even in situations with significant losses. Our results clarify the role of multipartite entanglement in ensemble-based quantum memories as a necessary prerequisite to achieve a high single-photon process fidelity crucial for future quantum networks. On a more fundamental level, our results reveal the robustness of certain classes of multipartite entangled states, in contrast to, e.g., Schr\"odinger-cat states, and show that the depth of entanglement can be experimentally certified at unprecedented scales. • ### Quantifying photonic high-dimensional entanglement(1701.03269) Feb. 15, 2017 quant-ph High-dimensional entanglement offers promising perspectives in quantum information science. In practice, however, the main challenge is to devise efficient methods to characterize high-dimensional entanglement, based on the available experimental data which is usually rather limited. Here we report the characterization and certification of high-dimensional entanglement in photon pairs, encoded in temporal modes. Building upon recently developed theoretical methods, we certify an entanglement of formation of 2.09(7) ebits in a time-bin implementation, and 4.1(1) ebits in an energy-time implementation. These results are based on very limited sets of local measurements, which illustrates the practical relevance of these methods. • ### Correlations in star networks: from Bell inequalities to network inequalities(1702.03866) Feb. 13, 2017 quant-ph The problem of characterizing classical and quantum correlations in networks is considered. Contrary to the usual Bell scenario, where distant observers share a physical system emitted by one common source, a network features several independent sources, each distributing a physical system to a subset of observers. In the quantum setting, the observers can perform joint measurements on initially independent systems, which may lead to strong correlations across the whole network. In this work, we introduce a technique to systematically map a Bell inequality to a family of Bell-type inequalities bounding classical correlations on networks in a star-configuration.
Also, we show that whenever a given Bell inequality can be violated by some entangled state $\rho$, then all the corresponding network inequalities can be violated by considering many copies of $\rho$ distributed in the star network. The relevance of these ideas is illustrated by applying our method to a specific multi-setting Bell inequality. We derive the corresponding network inequalities, and study their quantum violations. • ### All entangled pure quantum states violate the bilocality inequality(1702.00333) Feb. 1, 2017 quant-ph The nature of quantum correlations in networks featuring independent sources of entanglement remains poorly understood. Here, focusing on the simplest network of entanglement swapping, we start a systematic characterization of the set of quantum states leading to violation of the so-called "bilocality" inequality. First, we show that all possible pairs of entangled pure states can violate the inequality. Next, we derive a general criterion for violation for arbitrary pairs of mixed two-qubit states. Notably, this reveals a strong connection between the CHSH Bell inequality and the bilocality inequality, namely that any entangled state violating CHSH also violates the bilocality inequality. We conclude with a list of open questions. • ### Temporal multimode storage of entangled photon pairs(1606.07774) Dec. 9, 2016 quant-ph Multiplexed quantum memories capable of storing and processing entangled photons are essential for the development of quantum networks. In this context, we demonstrate the simultaneous storage and retrieval of two entangled photons inside a solid-state quantum memory and measure a temporal multimode capacity of ten modes. This is achieved by producing two polarization entangled pairs from parametric down conversion and mapping one photon of each pair onto a rare-earth-ion doped (REID) crystal using the atomic frequency comb (AFC) protocol. We develop a concept of indirect entanglement witnesses, which can be used as a Schmidt number witness, and we use it to experimentally certify the presence of more than one entangled pair retrieved from the quantum memory. Our work puts forward REID-AFC as a platform compatible with temporal multiplexing of several entangled photon pairs along with a new entanglement certification method useful for the characterisation of multiplexed quantum memories. • ### Spectral hole lifetimes and spin population relaxation dynamics in neodymium-doped yttrium orthosilicate(1611.05444) Nov. 16, 2016 quant-ph, cond-mat.mtrl-sci We present a detailed study of the lifetime of optical spectral holes due to population storage in Zeeman sublevels of Nd$^{3+}$:Y$_2$SiO$_5$. The lifetime is measured as a function of magnetic field strength and orientation, temperature and Nd$^{3+}$ doping concentration. At the lowest temperature of 3 K we find a general trend where the lifetime is short at low field strengths, then increases to a maximum lifetime at a few hundred mT, and then finally decays rapidly for high field strengths. This behaviour can be modelled with a relaxation rate dominated by Nd$^{3+}$-Nd$^{3+}$ cross relaxation at low fields and spin lattice relaxation at high magnetic fields. The maximum lifetime depends strongly on both the field strength and orientation, due to the competition between these processes and their different angular dependencies. The cross relaxation limits the maximum lifetime for concentrations as low as 30 ppm of Nd$^{3+}$ ions.
By decreasing the concentration to less than 1 ppm we could completely eliminate the cross relaxation, reaching a lifetime of 3.8 s at 3 K. At higher temperatures the spectral hole lifetime is limited by the magnetic-field independent Raman and Orbach processes. In addition we show that the cross relaxation rate can be strongly reduced by creating spectrally large holes of the order of the optical inhomogeneous broadening. Our results are important for the development and design of new rare-earth-ion doped crystals for quantum information processing and narrow-band spectral filtering for biological tissue imaging.
How do you find the molar mass of a gas given the density?

Density of a gas at STP. 1. The formula D = M/V is used at STP, with M being the molar mass and V being the molar volume of a gas (22.4 L/mol). 2. Molar mass can also be solved based on other given information. 3. Examples.

How is density related to molar mass? Molar mass is equal to density (in g/L) multiplied by molar volume. For example, if the density of a gas is less than 1 g/L, its molar mass is less than 22.4 g/mol.

How do you find the density of a gas with a calculator? Gas Density Calculator: formula d = PM/(RT), with pressure in atm, molar mass in g/mol, and temperature in K.

What is the molar mass of the gas? The molecular weight (molar mass) of any gas is the mass of one particle of that gas multiplied by Avogadro's number (6.02 × 10^23). Knowing the molar mass of an element or compound can help us stoichiometrically balance a reaction equation.

Is gas density high or low? The particles in gases are very far apart, so gases have a very low density.

Is density directly proportional to molar mass? Since equal volumes of different gases contain the same number of particles (see Avogadro's Hypothesis), the number of moles per liter at a given T and P is constant. Therefore, the density of a gas is directly proportional to its molar mass (MM).

How do you find the density of an ideal gas? The original ideal gas law uses the formula PV = nRT; the density version of the ideal gas law is PM = dRT, where P is pressure measured in atmospheres (atm), T is temperature measured in kelvins (K), R is the ideal gas law constant 0.0821 L·atm/(mol·K) just as in the original formula, but M is now the molar mass (g/mol).

What is the density of gas? Gas is much less dense than solids and liquids. In the early days of chemistry, some chemists made the mistake of assuming that gas had no mass, and hence 0 density. In fact, gas has density, but it is about 1/1000 times as dense as solids or liquids.

How do you calculate gas density? To calculate the density of a gas at standard temperature and pressure, you take the molecular formula weight of the gas (grams per mole—from the periodic table) and divide that by the standard molar volume for a gas, which is 22.4 L per mole, where the formula weight (FW) is in g/mol and the standard molar volume is 22.4 L/mol.

What is the density-of-a-gas equation? The density is determined by utilizing a variation of the ideal gas law where density and molar mass replace moles and volume: PM = dRT, as above.

How do you calculate with the ideal gas law? The properties of an ideal gas are all linked in one formula of the form pV = nRT, where: p is the pressure of the gas, measured in Pa, V is the volume of the gas, measured in m^3, n is the amount of substance, measured in moles, R is the gas constant (8.314 J/(mol·K)), and T is the temperature, measured in kelvins.
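To make the two routes above concrete, the STP molar-volume shortcut and the general d = PM/(RT) form, here is a minimal Python sketch. The worked numbers (carbon dioxide at 1 atm) are illustrative values chosen for this example, not taken from the page itself.

```python
# Gas density two ways: at STP via molar volume, and in general via d = PM/(RT).

R = 0.0821     # ideal gas constant, L·atm/(mol·K)
V_STP = 22.4   # molar volume of an ideal gas at STP, L/mol

def density_at_stp(molar_mass):
    """Density in g/L at STP: molar mass divided by molar volume."""
    return molar_mass / V_STP

def density(molar_mass, pressure_atm, temp_k):
    """Density in g/L from the rearranged ideal gas law, d = PM/(RT)."""
    return pressure_atm * molar_mass / (R * temp_k)

M_CO2 = 44.01  # g/mol, molar mass of carbon dioxide
print(density_at_stp(M_CO2))        # ~1.96 g/L
print(density(M_CO2, 1.0, 298.15))  # ~1.80 g/L at 25 °C and 1 atm
```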
# If 1, 3, 8 are the first three terms of an arithmetic-geometric progression (with positive common difference), the sum of the next three terms is:

$(a)\;180\qquad(b)\;160\qquad(c)\;140\qquad(d)\;120$

## 1 Answer

Answer: (a) 180

Explanation: Given a = 1,

$(a+d)\;r=3$
$(a+2d)\;r^2=8$

that is,

$(1+d)\;r=3$
$(1+2d)\;r^2=8 \qquad (1)$

Squaring the first equation,

$(1+d)^2\;r^2=9$
$(1+2d+d^2)\;r^2=9 \qquad (2)$

Subtracting (1) from (2),

$d^2\;r^2=1$

Since the common difference is positive, take the positive root:

$d\;r=1 \quad\Rightarrow\quad d=\large\frac{1}{r}$

Substituting into $(1+d)\;r=3$:

$(1+\large\frac{1}{r})\;r=3 \quad\Rightarrow\quad r+1=3 \quad\Rightarrow\quad r=2,\; d=\large\frac{1}{2}$

The next three terms are

$(1+3d)\;r^3 =(1+\large\frac{3}{2})\;8=20$
$(1+4d)\;r^4=(1+\large\frac{4}{2})\;16=48$
$(1+5d)\;r^5=(1+\large\frac{5}{2})\;32=112$

Sum of the next three terms $=\;20+48+112=180\;.$

answered Jan 23, 2014
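A quick numeric check of the result, as a plain Python sketch:

```python
# Verify the arithmetic-geometric progression: term k is (a + k*d) * r**k.
a, d, r = 1, 0.5, 2

terms = [(a + k * d) * r**k for k in range(6)]
print(terms[:3])       # [1.0, 3.0, 8.0]  -- the given first three terms
print(sum(terms[3:]))  # 180.0            -- sum of the next three terms
```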
# XXIV International Workshop on Deep-Inelastic Scattering and Related Subjects (DIS16) Apr 11 – 15, 2016 DESY Hamburg Europe/Berlin timezone ## Belle II early physics program of bottomonia spectroscopy Apr 13, 2016, 5:39 PM 20m SR3 (DESY Hamburg) Future Experiments Dr Hua YE (DESY) ### Description The Belle II experiment at the SuperKEKB collider is a major upgrade of the KEK "B factory" facility in Tsukuba, Japan. Phase 1 commissioning of the main ring of SuperKEKB started in February 2016, and the first physics data will be recorded in the second half of 2017 during the so-called Phase 2 commissioning, when the Belle II detector will still be operated without its vertex detector. In this talk we describe a possible physics program for this early data run at different center-of-mass energies, in particular at the $\Upsilon(3S)$ and $\Upsilon(6S)$ resonances, amongst other energy points.
ENDOR spectroscopy is primarily directed to study the magnetic interactions of the unpaired electron spin with the spins of magnetic nuclei (hyperfine interaction, HFI). These nuclei can belong either to the molecule on which the unpaired electron is localized, or to the surrounding molecules. In favorable cases, the nuclear quadrupole interaction (NQI) experienced by nuclei with spin I > 1/2 can be tested by ENDOR. The strength of the HFI and the NQI is intimately related to the electron spin and charge density distribution of the molecule, respectively. Therefore, their detection offers a deep insight into the electronic structure of the studied systems, which is crucial for understanding their chemical reactivity and function. The two main branches of ENDOR, continuous wave (CW) and pulse, are based on CW and pulse EPR, respectively. Pulse ENDOR requires the detection of the electron spin echo (ESE) signal, which limits its application to systems with a sufficiently large transverse electron spin relaxation time ($T_2 > 100$ ns). This makes pulse ENDOR not suitable for studies of liquid samples and generally requires low-temperature experiments. CW ENDOR is free from this limitation and allows the experiments to be performed under physiological conditions. However, the technique requires "fine tuning" of the longitudinal relaxation times of the electron and nuclear spins for optimum signal intensities. Due to the strong temperature dependence of these relaxation rates, pulse ENDOR is usually superior to CW ENDOR at low temperatures. This article starts with a brief theoretical section, where the most important equations are presented. Then selected examples of ENDOR studies of photosynthetic systems are reviewed. Furthermore, limitations and perspectives of the technique are discussed.

Theory

Spin system

The simplest system for which ENDOR can be used is a radical with the electron spin S = 1/2 which has one nucleus with nuclear spin I = 1/2. First, we assume that the hyperfine coupling between them is isotropic. If the g-tensor is also isotropic, the spin-hamiltonian H of this system is (in frequency units): $$\frac{H}{h} = \frac{g\beta_{\text{e}}}{h}B_0 S_{\text{z}} - \frac{g_{\text{n}}\beta_{\text{n}}}{h}B_0 I_{\text{z}} + a\,(S\cdot I).$$ (1) The first term in this equation describes the electron Zeeman interaction, the second term describes the nuclear Zeeman interaction, and the third describes the HFI. Here, h is Planck's constant, $\beta_{\text{e}}$ is the Bohr magneton, g is the electronic g-value, $\beta_{\text{n}}$ is the nuclear magneton, $g_{\text{n}}$ is the nuclear g-value, a is the HFI constant, and S and I are the operators of the electron and nuclear spin. We assumed that the constant magnetic field of the EPR spectrometer $B_0$ is directed along the z-axis of the laboratory frame. The spin-hamiltonian in Eq.
# SwePub database: results list for the search "WFRF:(Abdou Y.)"

Search: WFRF:(Abdou Y.) • Results 1-10 of 61

1. • Abbasi, R., et al. (authors) • Calibration and characterization of the IceCube photomultiplier tube • 2010 • In: Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment. 0168-9002; 618:1-3, pp. 139-152 • Journal article (peer-reviewed), abstract: • Over 5000 PMTs are being deployed at the South Pole to compose the IceCube neutrino observatory. Many are placed deep in the ice to detect Cherenkov light emitted by the products of high-energy neutrino interactions, and others are frozen into tanks on the surface to detect particles from atmospheric cosmic ray showers. IceCube is using the 10-in. diameter R7081-02 made by Hamamatsu Photonics. This paper describes the laboratory characterization and calibration of these PMTs before deployment. PMTs were illuminated with pulses ranging from single photons to saturation level. Parameterizations are given for the single photoelectron charge spectrum and the saturation behavior. Time resolution, late pulses and afterpulses are characterized. Because the PMTs are relatively large, the cathode sensitivity uniformity was measured. The absolute photon detection efficiency was calibrated using Rayleigh-scattered photons from a nitrogen laser. Measured characteristics are discussed in the context of their relevance to IceCube event reconstruction and simulation efforts. • 2. • Abbasi, R., et al. (authors) • Determination of the atmospheric neutrino flux and searches for new physics with AMANDA-II • 2009 • In: Physical Review D. 1550-7998; 79:10, p. 102005 • Journal article (peer-reviewed), abstract: • The AMANDA-II detector, operating since 2000 in the deep ice at the geographic South Pole, has accumulated a large sample of atmospheric muon neutrinos in the 100 GeV to 10 TeV energy range. The zenith angle and energy distribution of these events can be used to search for various phenomenological signatures of quantum gravity in the neutrino sector, such as violation of Lorentz invariance or quantum decoherence. Analyzing a set of 5511 candidate neutrino events collected during 1387 days of livetime from 2000 to 2006, we find no evidence for such effects and set upper limits on violation of Lorentz invariance and quantum decoherence parameters using a maximum likelihood method. Given the absence of evidence for new flavor-changing physics, we use the same methodology to determine the conventional atmospheric muon neutrino flux above 100 GeV. • 3. • Abbasi, R., et al. (authors) • Extending the Search for Neutrino Point Sources with IceCube above the Horizon • 2009 • In: Physical Review Letters. 0031-9007; 103:22, p. 221102 • Journal article (peer-reviewed), abstract: • Point source searches with the IceCube neutrino telescope have been restricted to one hemisphere, due to the exclusive selection of upward going events as a way of rejecting the atmospheric muon background. We show that the region above the horizon can be included by suppressing the background through energy-sensitive cuts. This improves the sensitivity above PeV energies, previously not accessible for declinations of more than a few degrees below the horizon due to the absorption of neutrinos in Earth.
We present results based on data collected with 22 strings of IceCube, extending its field of view and energy reach for point source searches. No significant excess above the atmospheric background is observed in a sky scan and in tests of source candidates. Upper limits are reported, which for the first time cover point sources in the southern sky up to EeV energies. • 4. • Abbasi, R., et al. (authors) • First Neutrino Point-Source Results from the 22 String Icecube Detector • 2009 • In: The Astrophysical Journal Letters. 2041-8205; 701:1, pp. L47-L51 • Journal article (peer-reviewed), abstract: • We present new results of searches for neutrino point sources in the northern sky, using data recorded in 2007-2008 with 22 strings of the IceCube detector (approximately one-fourth of the planned total) and 275.7 days of live time. The final sample of 5114 neutrino candidate events agrees well with the expected background of atmospheric muon neutrinos and a small component of atmospheric muons. No evidence of a point source is found, with the most significant excess of events in the sky at 2.2σ after accounting for all trials. The average upper limit over the northern sky for point sources of muon-neutrinos with $E^{-2}$ spectrum is $E^2\,\Phi_{\nu_\mu} < 1.4 \times 10^{-11}\;\mathrm{TeV\,cm^{-2}\,s^{-1}}$, in the energy range from 3 TeV to 3 PeV, improving the previous best average upper limit by the AMANDA-II detector by a factor of 2. • 5. • Abbasi, R, et al. (authors) • FIRST NEUTRINO POINT-SOURCE RESULTS FROM THE 22 STRING ICECUBE DETECTOR • 2009 • In: Astrophysical Journal Letters; 701:1, pp. L47-L51 • Journal article (peer-reviewed), abstract: • We present new results of searches for neutrino point sources in the northern sky, using data recorded in 2007-2008 with 22 strings of the IceCube detector (approximately one-fourth of the planned total) and 275.7 days of live time. The final sample of 5114 neutrino candidate events agrees well with the expected background of atmospheric muon neutrinos and a small component of atmospheric muons. No evidence of a point source is found, with the most significant excess of events in the sky at 2.2 sigma after accounting for all trials. The average upper limit over the northern sky for point sources of muon-neutrinos with E^-2 spectrum is E^2 Phi(nu_mu) < 1.4 x 10^-11 TeV cm^-2 s^-1, in the energy range from 3 TeV to 3 PeV, improving the previous best average upper limit by the AMANDA-II detector by a factor of 2. • 6. • Abbasi, R., et al. (authors) • Limits on a muon flux from Kaluza-Klein dark matter annihilations in the Sun from the IceCube 22-string detector • 2010 • In: Physical Review D. 1550-7998; 81:5, Article ID: 057101 • Journal article (peer-reviewed), abstract: • A search for muon neutrinos from Kaluza-Klein dark matter annihilations in the Sun has been performed with the 22-string configuration of the IceCube neutrino detector using data collected in 104.3 days of live time in 2007. No excess over the expected atmospheric background has been observed. Upper limits have been obtained on the annihilation rate of captured lightest Kaluza-Klein particle (LKP) WIMPs in the Sun and converted to limits on the LKP-proton cross sections for LKP masses in the range 250-3000 GeV. These results are the most stringent limits to date on LKP annihilation in the Sun. • 7. • Abbasi, R., et al.
(authors) • Limits on a muon flux from Kaluza-Klein dark matter annihilations in the Sun from the IceCube 22-string detector • 2010 • In: PHYS REV D. 1550-7998; 81:5, p. 057101 • Journal article (peer-reviewed), abstract: • A search for muon neutrinos from Kaluza-Klein dark matter annihilations in the Sun has been performed with the 22-string configuration of the IceCube neutrino detector using data collected in 104.3 days of live time in 2007. No excess over the expected atmospheric background has been observed. Upper limits have been obtained on the annihilation rate of captured lightest Kaluza-Klein particle (LKP) WIMPs in the Sun and converted to limits on the LKP-proton cross sections for LKP masses in the range 250-3000 GeV. These results are the most stringent limits to date on LKP annihilation in the Sun. • 8. • Abbasi, R., et al. (authors) • Limits on a Muon Flux from Neutralino Annihilations in the Sun with the IceCube 22-String Detector • 2009 • In: Physical Review Letters. 0031-9007; 102:20, p. 201302 • Journal article (peer-reviewed), abstract: • A search for muon neutrinos from neutralino annihilations in the Sun has been performed with the IceCube 22-string neutrino detector using data collected in 104.3 days of live time in 2007. No excess over the expected atmospheric background has been observed. Upper limits have been obtained on the annihilation rate of captured neutralinos in the Sun and converted to limits on the weakly interacting massive particle (WIMP) proton cross sections for WIMP masses in the range 250-5000 GeV. These results are the most stringent limits to date on neutralino annihilation in the Sun. • 9. • Abbasi, R., et al. (authors) • Measurement of sound speed vs. depth in South Pole ice for neutrino astronomy • 2010 • In: Astroparticle Physics. 0927-6505; 33:5-6, pp. 277-286 • Journal article (peer-reviewed), abstract: • We have measured the speed of both pressure waves and shear waves as a function of depth between 80 and 500 m depth in South Pole ice with better than 1% precision. The measurements were made using the South Pole Acoustic Test Setup (SPATS), an array of transmitters and sensors deployed in the ice at the South Pole in order to measure the acoustic properties relevant to acoustic detection of astrophysical neutrinos. The transmitters and sensors use piezoceramics operating at ~5-25 kHz. Between 200 m and 500 m depth, the measured profile is consistent with zero variation of the sound speed with depth, resulting in zero refraction, for both pressure and shear waves. We also performed a complementary study featuring an explosive signal propagating vertically from 50 to 2250 m depth, from which we determined a value for the pressure wave speed consistent with that determined for shallower depths, higher frequencies, and horizontal propagation with the SPATS sensors. The sound speed profile presented here can be used to achieve good acoustic source position and emission time reconstruction in general, and neutrino direction and energy reconstruction in particular. The reconstructed quantities could also help separate neutrino signals from background. (C) 2010 Elsevier B.V. All rights reserved. • 10. • Abbasi, R., et al. (authors) • SEARCH FOR HIGH-ENERGY MUON NEUTRINOS FROM THE "NAKED-EYE" GRB 080319B WITH THE IceCube NEUTRINO TELESCOPE • 2009 • In: Astrophysical Journal. 0004-637X; 701:2, pp.
1721-1731 • Journal article (peer-reviewed), abstract: • We report on a search with the IceCube detector for high-energy muon neutrinos from GRB 080319B, one of the brightest gamma-ray bursts (GRBs) ever observed. The fireball model predicts that a mean of 0.1 events should be detected by IceCube for a bulk Lorentz boost of the jet of 300. In both the direct on-time window of 66 s and an extended window of about 300 s around the GRB, no excess was found above background. The 90% CL upper limit on the number of track-like events from the GRB is 2.7, corresponding to a muon neutrino fluence limit of 9.5 x 10^-3 erg cm^-2 in the energy range between 120 TeV and 2.2 PeV, which contains 90% of the expected events.
# [pstricks] [Fwd: Re: binom_distribution]

Herbert Voss LaTeX at zedat.fu-berlin.de
Mon Apr 17 12:57:50 CEST 2006

Poul Riis wrote:
> To my knowledge a binomial distribution is defined for integral values
> only.

you mean integer values. that's correct, but the lines are connected to show that the binomial distribution goes into the normal one for n->\infty

> I don't fully understand why the following seems to work...
> - And furthermore, I don't understand why it doesn't work for all values
> of n and p!?

the starting value (k=0) is (1-p)^n, which is a problem for n>125, e.g. 0.5^125\approx 2.35e-38, which is nearly the smallest value PostScript can handle. The latest pst-func.tex (from http://perce.de/LaTeX/pst-func/) has two macros \psBinomial and \psBinomialN for the normalized distribution. Attached an example image of this code:

\documentclass[12pt]{article}
\usepackage{pst-func}
\pagestyle{empty}
\begin{document}
\psset{xunit=1cm,yunit=10cm}%
\begin{pspicture}(-1,0)(7,0.55)%
\psaxes[Dy=0.2,dy=0.2\psyunit]{->}(0,0)(-1,0)(7,0.5)
\uput[-90](7,0){$k$}
\uput[90](0,0.5){$P(X=k)$}
\psBinomial[linecolor=red,markZeros,printValue,fillstyle=vlines]{6}{0.4}
\end{pspicture}
\vspace{1cm}
\begin{pspicture}(-3,0)(4,0.55)%
\psaxes[Dy=0.2,dy=0.2\psyunit]{->}(0,0)(-3,0)(4,0.5)
\uput[-90](4,0){$z$}
\uput[90](0,0.5){$P(Z=z)$}
\psBinomialN[linecolor=red,markZeros,fillstyle=vlines]{6}{0.4}
\end{pspicture}
\end{document}

Herbert
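The underflow described in the thread is not specific to PostScript; the usual workaround in any language is to evaluate binomial probabilities in log space. A minimal Python sketch of the idea (illustrative, not part of the original thread):

```python
from math import lgamma, exp, log

def binom_pmf(n, k, p):
    """P(X = k) for X ~ Binomial(n, p), computed via log-gamma so the
    intermediate (1 - p)**n never has to be formed and underflow."""
    log_coef = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    log_pmf = log_coef + k * log(p) + (n - k) * log(1 - p)
    return exp(log_pmf)

# Direct evaluation would start from 0.5**200 ~ 6e-61, far below where the
# thread's n > 125 trouble begins, yet the pmf itself is perfectly tame:
print(binom_pmf(200, 100, 0.5))   # ~0.0563
```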
## Q. 4.12

A wall bracket with a rectangular cross-section is shown in Fig. 4.39. The depth of the cross-section is twice the width. The force P acting on the bracket at 60° to the vertical is 5 kN. The material of the bracket is grey cast iron FG 200 and the factor of safety is 3.5. Determine the dimensions of the cross-section of the bracket. Assume the maximum normal stress theory of failure.

## Verified Solution

Given P = 5 kN, $S_{ut}=200\;N/mm^{2}$, $(fs)=3.5$, $d/w=2$.

Step I Calculation of permissible stress

$\sigma_{\max }=\frac{S_{u t}}{(f s)}=\frac{200}{3.5}=57.14 N / mm ^{2}$         (i).

Step II Calculation of direct and bending tensile stresses

The stress is maximum at the point A in the section XX. The point is subjected to combined bending and direct tensile stresses. (Here t denotes the width of the cross-section, so the depth is 2t.) The force P is resolved into two components—horizontal component $P_h$ and vertical component $P_v$.

$P_{h}=P \sin 60^{\circ}=5000 \sin 60^{\circ}=4330.13 N$.

$P_{v}=P \cos 60^{\circ}=5000 \cos 60^{\circ}=2500 N$.

The bending moment at the section XX is given by

$M_{b}=P_{h} \times 150+P_{v} \times 300$.

$=4330.13 \times 150+2500 \times 300$.

$=1399.52 \times 10^{3} N – mm$.

$\sigma_{b}=\frac{M_{b} y}{I}$.

$=\frac{\left(1399.52 \times 10^{3}\right)(t)}{\left[\frac{1}{12}(t)(2 t)^{3}\right]}=\frac{2099.28 \times 10^{3}}{t^{3}} N / mm ^{2}$.

The direct tensile stress due to component $P_h$ is given by,

$\sigma_{t}=\frac{P_{h}}{A}=\frac{4330.13}{2 t^{2}}=\frac{2165.07}{t^{2}} N / mm ^{2}$.

The vertical component $P_v$ induces shear stress at the section XX. It is however small and neglected.

Step III Calculation of dimensions of cross-section

The resultant tensile stress $\sigma_{\max }$ at the point A is given by,

$\sigma_{\max }=\sigma_{b}+\sigma_{t}=\frac{2099.28 \times 10^{3}}{t^{3}}+\frac{2165.07}{t^{2}}$              (ii).

Equating (i) and (ii),

$\frac{2099.28 \times 10^{3}}{t^{3}}+\frac{2165.07}{t^{2}}=57.14$.

or $t^{3}-37.89 t-36739.24=0$.

Solving the above cubic equation by the trial and error method,

$t=33.65 mm \cong 35 mm$.

The dimensions of the cross-section are 35 × 70 mm.
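Instead of trial and error, the cubic in Step III can be solved numerically. A minimal sketch using Newton's method (illustrative only, not part of the textbook solution):

```python
def newton(f, df, x, tol=1e-9):
    """Newton's method: iterate x -> x - f(x)/df(x) until |f(x)| < tol."""
    while abs(f(x)) > tol:
        x -= f(x) / df(x)
    return x

f  = lambda t: t**3 - 37.89 * t - 36739.24   # the cubic from Step III
df = lambda t: 3 * t**2 - 37.89              # its derivative

print(newton(f, df, 30.0))   # about 33.6 mm, which the text rounds up to t = 35 mm
```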
## Intermediate Algebra: Connecting Concepts through Application $1$ $\bf{\text{Solution Outline:}}$ To simplify the given expression, $\left( \dfrac{5w^3v^7x^{-4}}{17wx^3} \right)^0 ,$ use the laws of exponents. $\bf{\text{Solution Details:}}$ Since any expression (except $0$) raised to the $0$ power is $1,$ then the expression above simplifies to $1 .$
# Definition:Transitive Closure (Relation Theory)/Intersection of Transitive Supersets Let $\mathcal R$ be a relation on a set $S$. The transitive closure of $\mathcal R$ is defined as the intersection of all transitive relations on $S$ which contain $\mathcal R$. The transitive closure of $\mathcal R$ is denoted $\mathcal R^+$.
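As a concrete illustration (a sketch, not part of the definition): on a finite set the transitive closure can be computed by repeatedly adding the pairs that transitivity forces, and the fixed point reached equals the intersection above, since every transitive relation containing $\mathcal R$ must contain each pair added along the way.

```python
def transitive_closure(pairs):
    """Smallest transitive relation containing `pairs`, as a set of tuples."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        # If (a, b) and (b, c) are in the relation, transitivity forces (a, c).
        forced = {(a, c) for a, b in closure for b2, c in closure if b == b2}
        if not forced <= closure:
            closure |= forced
            changed = True
    return closure

print(transitive_closure({(1, 2), (2, 3), (3, 4)}))
# {(1, 2), (2, 3), (3, 4), (1, 3), (2, 4), (1, 4)}  (set order may vary)
```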
Department of Pre-University Education, Karnataka. PUC Karnataka Science Class 12

Frequencies of Kα X-rays of different materials are measured. Which one of the graphs in the figure may represent the relation between the frequency ν and the atomic number Z? - Physics

Solution

Using Moseley's Law, $\sqrt{\nu} = a(Z - b)$, where ν = frequency of the Kα X-ray and Z = atomic number. Therefore $\nu = a^2(Z - b)^2$, i.e. $(Z - b)^2 = \nu/a^2$. This is the equation of a parabola with an intercept on the axis representing the atomic number (Z). Hence, the curve of this parabolic shape represents the relation correctly.

APPEARS IN HC Verma Class 11, Class 12 Concepts of Physics Vol. 2, Chapter 22 X-rays, MCQ | Q 7 | Page 394
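A small numeric illustration of the parabolic growth (a sketch; the constants are the usual Kα Moseley values, $b = 1$ and $a^2$ equal to three-quarters of the Rydberg frequency, used here only for illustration):

```python
# Moseley's law for K-alpha lines: sqrt(nu) = a (Z - b), so nu grows
# quadratically in Z.  Illustrative constants: b = 1 and
# a**2 = (3/4) * 3.29e15 Hz (three-quarters of the Rydberg frequency).
a2, b = 0.75 * 3.29e15, 1

for Z in (20, 30, 40):
    nu = a2 * (Z - b) ** 2
    print(Z, f"{nu:.3e} Hz")
# Doubling (Z - b) quadruples nu: the parabola asked about in the figure.
```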
# How do you solve 28= - 6y - 12+ 2y? Jun 21, 2018 $y = - 10$ #### Explanation: $28 = - 6 y - 12 + 2 y$ combine like terms: $28 = - 4 y - 12$ add $12$ to both sides: $40 = - 4 y$ divide both sides by $- 4$ $- 10 = y$
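A quick check of the answer with SymPy, assuming SymPy is available:

```python
from sympy import symbols, solve, Eq

y = symbols('y')
print(solve(Eq(28, -6*y - 12 + 2*y), y))  # [-10]
```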
## Reading the Comics, November 3, 2018: Arithmetic Is Hard Edition If there is a theme to the last comic strips from the previous week, it’s that kids find arithmetic hard. That’s a title for you. Bill Watterson’s Calvin and Hobbes for the 2nd is one of the classics, of course. Calvin’s made the mistake of supposing that mathematics is only about getting true answers. We’ll accept the merely true, if that’s what we can get. But we want interesting. Which is stuff that’s not just true but is unexpected or unforeseeable in some way. We see this when we talk about finding a “proper” answer, or subset, or divisor, or whatever. Some things are true for every question, and so, who cares? Also, is it really true that Calvin doesn’t know any of his homework problems? It’s possible, but did he check? Were I grading, I would accept an “I don’t know”, at least for partial credit, in certain conditions. Those involve the student writing out what they would like to do to try to solve the problem. If the student has a fair idea of something that ought to find a correct answer, then the student’s showing some mathematical understanding. But there are times that what’s being tested is proficiency at an operation, and a blank “I don’t know” would not help much with that. Patrick Roberts’s Todd the Dinosaur for the 2nd has an arithmetic cameo. Fractions, particularly. They’re mentioned as something too dull to stay awake through. So for the joke’s purpose this could have been any subject that has an exposition-heavy segment. Fractions do have more complicated rules than adding whole numbers do. And introducing those rules can be hard. But anything where you introduce rules instead of showing what you can do with them is hard. I’m thinking here of several times people have tried to teach me board games by listing all the rules, instead of setting things up and letting me ask “what am I allowed to do now?” the first couple turns. I’m not sure how that would translate to fractions, but there might be something. John Zakour and Scott Roberts’s Maria’s Day for the 2nd has another of Maria’s struggles with arithmetic. It’s presented as a challenge so fierce it can defeat even superheroes. Could be any subject, really. It’s hard to beat the visual economy of having it be a division problem, though. Rick Kirkman and Jerry Scott’s Baby Blues for the 3rd shows a bit of youthful enthusiasm. Hammie’s parents would rather that enthusiasm be put to memorizing multiplication facts. I’m not sure this would match the fun of building stuff. But I remember finding patterns inside the multiplication table fascinating. Like how you could start from a perfect square and get the same sequence of numbers as you moved out along a diagonal. Or tracing out where the same number appeared in different rows and columns, like how just everything could multiply into 24. Might be worth playing with some. All of my Reading the Comics posts should be at this link. Essays where I take the chance to talk about Calvin and Hobbes are at this link. Essays that include Todd the Dinosaur are at this link. Essays with a mention of Maria’s Day should be at this link. And essays with a mention of Baby Blues are at this link. Finally, and through the rest of the year, my Fall 2018 Mathematics A-To-Z should be getting two new posts a week. Thanks again for reading. I had thought I’d culled some more pieces from my Twitter and other mathematics-writing-reading the last couple weeks and I’m not sure where it all went. 
I think I might be baffled by the repostings of things on Quanta Magazine (which has a lot of good mathematics articles, but not, like, a 3,000-word piece every day, and they showcase their archive just as anyone ought). So, here, first. It reviews Kim Plofker’s 2008 text Mathematics In India, a subject that I both know is important — I love to teach with historic context included — and something that I very much bluff my way through. I mean, I do research things I expect I’ll mention, but I don’t learn enough of the big picture and a determined questioner could prove how fragile my knowledge was. So Plofker’s book should go on my reading list at least. These are lecture notes about analysis. In the 19th century mathematicians tried to tighten up exactly what we meant by things like “functions” and “limits” and “integrals” and “numbers” and all that. It was a lot of good solid argument, and a lot of surprising, intuition-defying results. This isn’t something that a lay reader’s likely to appreciate, and I’m sorry for that, but if you do know the difference between Riemann and Lebesgue integrals the notes are likely to help. And this, Daniel Grieser and Svenja Maronna’s Hearing The Shape Of A Triangle, follows up on a classic mathematics paper, Mark Kac’s Can One Hear The Shape Of A Drum? This is part of a class of problems in which you try to reconstruct what kinds of things can produce a signal. It turns out to be impossible to perfectly say what shape and material of a drum produced a certain sound of a drum. But. A triangle — the instrument, that is, but also the shape — has a simpler structure. Could we go from the way a triangle sounds to knowing what it looks like? And I mentioned this before but if you want to go reading every Calvin and Hobbes strip to pick out the ones that mention mathematics, you can be doing someone a favor too. ## How January 2018 Treated My Mathematics Blog And that is: I don’t feel threatened at all so nyah. (And if you want to help them out, please, do send any Calvin and Hobbes strips with mathematical themes over their way.) Back to my usual self-preening. January 2018 was a successful month around here, in terms of people reading stuff I write. According to WordPress, there were some 1,274 pages viewed from 670 unique visitors. That’s the largest number of pages viewed since March and April 2016, when I had a particularly successful A To Z going. It’s the greatest number of unique visitors since September 2017 when I had a less successful but still pretty good A To Z going. The page views were well above December 2017’s 899, and November’s 1,052. The unique visitors were well above December’s 599 and November’s 604. I don’t have any real explanation for this. I suspect it’s spillover from my humor blog, which had its most popular month since the comic strip Apartment 3-G died a sad, slow, baffling death. Long story. I think my humor blog was popular because people don’t know what happened to the guy who writes Gasoline Alley. I don’t know either, but I tell people if I do find out anything I’ll tell them, and that’s almost as good as knowing something. Still, this popularity was accompanied by readers actually liking stuff. There were 112 pages liked in January, beating out the 71 in December and 70 in November by literally dozens of clicks. It’s the highest count since August of 2017 and summer’s A To Z sequence. There were more comments, too, 39 of them. 
December saw 24 and November 28 and, you see this coming, that's the largest number of comments since summer 2017's A To Z sequence. The popular articles for January were two of the ones I expected, one of the Reading the Comics posts, and then two surprises. What were they? These. Yes, it's clickbait-y to talk about weird tricks for limits that mathematicians use. In my defense: mathematicians really do rely on these tricks all the time. So if it's getting people stuff that's useful then my conscience is as clear as it is for asking "How many grooves are on a record's side?" and (implicitly) "How many kinds of trapezoid are there?"

If I'm counting right there were 50 countries from which I drew readers, if "European Union" counts as a country and if "Trinidad and Tobago" don't count as two. Plus there's Hong Kong and when you get down to it, "country" is a hard concept to pin down exactly. There were 14 single-reader countries. Here's the roster of them all:

| Country | Readers |
| --- | --- |
| United States | 879 |
| India | 89 |
| Philippines | 59 |
| United Kingdom | 37 |
| Singapore | 15 |
| Hong Kong SAR China | 11 |
| Netherlands | 11 |
| Sweden | 11 |
| Belgium | 9 |
| Algeria | 8 |
| Austria | 8 |
| Australia | 7 |
| France | 7 |
| Italy | 7 |
| Switzerland | 7 |
| South Africa | 6 |
| Brazil | 5 |
| Slovenia | 5 |
| Argentina | 4 |
| Germany | 4 |
| Japan | 4 |
| Pakistan | 4 |
| Indonesia | 3 |
| Spain | 3 |
| Denmark | 2 |
| Egypt | 2 |
| European Union | 2 |
| Greece | 2 |
| Iraq | 2 |
| New Zealand | 2 |
| Portugal | 2 |
| South Korea | 2 |
| Thailand | 2 |
| Ukraine | 2 |
| Bulgaria | 1 |
| Czech Republic | 1 |
| Ireland | 1 |
| Malaysia | 1 |
| Mexico | 1 (**) |
| Namibia | 1 |
| Norway | 1 |
| Russia | 1 (*) |
| Saudi Arabia | 1 |
| Sri Lanka | 1 |
| Turkey | 1 |
| Uruguay | 1 (*) |
| Vietnam | 1 |

There were 53 countries sending me readers in December and 56 in November so I guess I'm concentrating? There were 15 single-reader countries in December and 22 in November. Russia and Uruguay were single-reader countries in December; Mexico's been a single-reader country for three months now. WordPress's Insights panel says I started the month with 57,592 page views recorded, from 27,161 recorded unique visitors. It also shares with me the interesting statistics that, as I write this and before I post it, I've written 16 total posts this year, which have drawn an average two comments and seven likes per post. There've been 900 words per post, on average. Overall this year I've gotten 39 comments, 110 likes, and have published 14,398 words. I don't know whether that counts image captions. But this also leads me to learn what previous year statistics were like; I've been averaging over 900 words per post since 2015. In 2015 I averaged about 750 words per post, and got three times as many likes and about twice as many comments per post. I'm sure that doesn't teach me anything. At the least I won't learn from it.

## Reading the Comics, January 6, 2018: Terms Edition

The last couple days of last week saw a rush of comics, although most of them were simpler things to describe. Bits of play on words, if you like. Samson's Dark Side of the Horse for the 4th of January, 2018, is one that plays on various meanings of "average". The mean, alluded to in the first panel, is the average most people think of first. Where you have a bunch of values representing instances of something, add up the values, and divide by the number of instances. (Properly that's the arithmetic mean. There's some others, such as the geometric mean, but if someone's going to use one of those they give you clear warning.) The median, in the second, is the midpoint, the number that half of all instances are less than. So you see the joke.
If the distribution of intelligence is normal — which is a technical term, although it does mean “not freakish” — then the median and the mean should be equal. If you had infinitely many instances, and they were normally distributed, the two would be equal. With finitely many instances, the mean and the median won’t be exactly in line, for the same reason if you fairly toss a coin two million times it won’t turn up heads exactly one million times. Dark Side of the Horse for the 5th delivers the Roman numerals joke of the year. And I did have to think about whether ‘D’ is a legitimate Roman numeral. This would be easier to remember before 1900. Mike Lester’s Mike du Jour for the 4th is geometry wordplay. I’m not sure the joke stands up to scrutiny, but it lands well enough initially. Johnny Hart’s Back to BC for the 5th goes to the desire to quantify and count things. And to double-check what other people tell you about this counting. It’s easy, today, to think of the desire to quantify things as natural to humans. I’m not confident that it is. The history of statistics shows this gradual increase in the number and variety of things getting tracked. This strip originally ran the 11th of July, 1960. Bill Watterson’s Calvin and Hobbes for the 5th talks about averages again. And what a population average means for individuals. It doesn’t mean much. The glory of statistics is that groups are predictable in a way that individuals are not. John Graziano’s Ripley’s Believe It Or Not for the 5th features a little arithmetic coincidence, that multiplying 21,978 by four reverses its digits. It made me think of Ray Kassinger’s question the other day about parasitic numbers. But this isn’t a parasitic number. A parasitic number is one with a value, multiplied by a particular number, that’s the same as you get by moving its last digit to the front. Flipping the order of digits seems like it should be something and I don’t know what. Mark Anderson’s Andertoons for the 6th is a confident reassurance that 2018 is a normal, healthy year after all. Or can be. Prime numbers. Mark O’Hare’s Citizen Dog rerun for the 6th is part of a sequence in which Fergus takes a (human) child’s place in school. Mathematics gets used as a subject that’s just a big pile of unfamiliar terms if you just jump right in. Most subjects are like this if you take them seriously, of course. But mathematics has got an economy of technical terms to stuff into people’s heads, and that have to be understood to make any progress. In grad school my functional analysis professor took great mercy on us, and started each class with re-writing the definitions of all the technical terms introduced the previous class. Also of terms that might be a bit older, but that are important to get right, which is why I got through it confident I knew what a Sobolev Space was. (It’s a collection of functions that have enough derivatives to do your differential equations problem.) Numerator and denominator, we’re experts on by now. ## Reading the Comics, January 3, 2018: Explaining Things Edition There were a good number of mathematically-themed comic strips in the syndicated comics last week. Those from the first part of the week gave me topics I could really sink my rhetorical teeth into, too. So I’m going to lop those off into the first essay for last week and circle around to the other comics later on. Jef Mallett’s Frazz started a week of calendar talk on the 31st of December. I’ve usually counted that as mathematical enough to mention here. 
The 1st of January as we know it derives, as best I can figure, from the 1st of January as Julius Caesar established for 45 BCE. This was the first Roman calendar to run basically automatically. Its length was quite close to the solar year's length. It had leap days added according to a rule that should have been easy enough to understand (one day every fourth year). Before then the Roman calendar year was far enough off the solar year that they had to be kept in synch by interventions. Mostly, by that time, adding a short extra month to put things more nearly right. This had gotten all confusingly messed up and Caesar took the chance to set things right, stretching 46 BCE to 445 days long. But why 445 and not, say, 443 or 457? And I find on research that my recollection might not be right. That is, I recall that the plan was to set the 1st of January, Reformed, to the first new moon after the winter solstice. A choice that makes sense only for that one year, but, where to set the 1st is literally arbitrary. While that apparently passes astronomical muster (the new moon as seen from Rome then would be just after midnight the 2nd of January, but hitting the night of 1/2 January is good enough), there's apparently dispute about whether that was the objective. It might have been to set the winter solstice to the 25th of December. Or it might have been that the extra days matched neatly the length of two intercalated months that by rights should have gone into earlier years. It's a good reminder of the difficulty of reading motivation. Brian Fies's The Last Mechanical Monster for the 1st of January, 2018, continues his story about the mad scientist from the Fleischer studios' first Superman cartoon, back in 1941. In this panel he's describing how he realized, over the course of his long prison sentence, that his intelligence was fading with age. He uses the ability to do arithmetic in his head as proof of that. These types never try naming, like, rulers of the Byzantine Empire. Anyway, to calculate the cube root of 50,653 in his head? As he used to be able to do? … guh. It's not the sort of mental arithmetic that I find fun. But I could think of a couple ways to do it. The one I'd use is based on a technique called Newton-Raphson iteration that can often be used to find where a function's value is zero. Raphson here is Joseph Raphson, a late 17th century English mathematician known for the Newton-Raphson method. Newton is that falling-apples fellow. It's an iterative scheme because you start with a guess about what the answer would be, and do calculations to make the answer better. I don't say this is the best method, but it's the one that demands I remember the least stuff to re-generate the algorithm. And it'll work for any positive number 'A' and any n-th root. So you want the n-th root of 'A'. Start with your current guess about what this root is. (If you have no idea, try '1' or 'A'.) Call that guess 'x'. Then work out this number: $\frac{1}{n}\left( (n - 1) \cdot x + \frac{A}{x^{n - 1}} \right)$ Ta-da! You have, probably, now a better guess of the n-th root of 'A'. If you want a better guess yet, take the result you just got and call that 'x', and go back calculating that again. Stop when you feel like your answer is good enough. This is going to be tedious but, hey, if you're serving a prison term of the length of US copyright you've got time. (It's possible with this sort of iterator to get a worse approximation, although I don't think that happens with the n-th root process. Most of the time, a couple more iterations will get you back on track.)
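Here is the same iteration written out as a minimal Python sketch, nothing more than the rule just described:

```python
def nth_root(A, n, x=1.0):
    """Newton-Raphson for the n-th root of a positive A:
    repeatedly replace x with ((n - 1) * x + A / x**(n - 1)) / n."""
    while True:
        x_new = ((n - 1) * x + A / x ** (n - 1)) / n
        if abs(x_new - x) < 1e-12 * x_new:   # stop once the guess settles down
            return x_new
        x = x_new

print(nth_root(50653, 3))   # 37.0, so 50,653 is indeed a perfect cube
```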
But that's work. Can we think instead? Now, most n-th roots of whole numbers aren't going to be whole numbers. Most integers aren't perfect powers of some other integer. If you think 50,653 is a perfect cube of something, though, you can say some things about it. For one, it's going to have to be a two-digit number. 10³ is 1,000; 100³ is 1,000,000. The second digit has to be a 7. 7³ is 343. The cube of any number ending in 7 has to end in 3. There's not another number from 1 to 9 with a cube that ends in 3. That's one of those things you learn from playing with arithmetic. (A number ending in 1 cubes to something ending in 1. A number ending in 2 cubes to something ending in 8. And so on.) So the cube root has to be one of 17, 27, 37, 47, 57, 67, 77, 87, or 97. Again, if 50,653 is a perfect cube. And we can do better than saying it's merely one of those nine possibilities. 40 times 40 times 40 is 64,000. This means, first, that 47 and up are definitely too large. But it also means that 40 is just a little more than the cube root of 50,653. So, if 50,653 is a perfect cube, then it's most likely going to be the cube of 37. Bill Watterson's Calvin and Hobbes rerun for the 2nd is a great sequence of Hobbes explaining arithmetic to Calvin. There is nothing which could be added to Hobbes's explanation of 3 + 8 which would make it better. I will, though, modify Hobbes's explanation of what the numerator is. It's ridiculous to think it's Latin for "number eighter". The reality is possibly more ridiculous, as it means "a numberer". Apparently it derives from "numeratus", meaning, "to number". The "denominator" comes from "de nomen", as in "name". So, you know, "the thing that's named". Which does show the terms mean something. A poet could turn "numerator over denominator" into "the number of parts of the thing we name", or something near enough that. Hobbes continues the next day, introducing Calvin to imaginary numbers. The term "imaginary numbers" tells us their history: they looked, when first noticed in formulas for finding roots of third- and fourth-degree polynomials, like obvious nonsense. But if you carry on, following the rules as best you can, that nonsense would often shake out and you'd get back to normal numbers again. And as generations of mathematicians grew up realizing these acted like numbers we started to ask: well, how is an imaginary number any less real than, oh, the square root of six? Hobbes's particular examples of imaginary numbers — "eleventeen" and "thirty-twelve" — are great-sounding compositions. They put me in mind, as many of Watterson's best words do, of a 1960s Peanuts in which Charlie Brown is trying to help Sally practice arithmetic. (I can't find it online, as that meme with edited text about Sally Brown and the sixty grapefruits confounds my web searches.) She offers suggestions like "eleventy-Q" and asks if she's close, which Charlie Brown admits is hard to say. And finally, James Allen's Mark Trail for the 3rd just mentions mathematics as the subject that Rusty Trail is going to have to do some work on instead of allowing the experience of a family trip to Mexico to count. This is of extremely marginal relevance, but it lets me include a picture of a comic strip, and I always like getting to do that. ## Reading the Comics, April 24, 2017: Reruns Edition I went a little wild explaining the first of last week's mathematically-themed comic strips.
So let me split the week between the strips that I know to have been reruns and the ones I'm not so sure were. Bill Amend's FoxTrot for the 23rd — not a rerun; the strip is still new on Sundays — is a probability question. And a joke about story problems with relevance. Anyway, the question uses the binomial distribution. I know that because the question is about doing a bunch of things, homework questions, each of which can turn out one of two ways, right or wrong. It's supposed to be equally likely to get the question right or wrong. It's a little tedious but not hard to work out the chance of getting exactly six problems right, or exactly seven, or exactly eight, or so on. To work out the chance of getting six or more questions right — the problem given — there's two ways to go about it. One is the conceptually easy but tedious way. Work out the chance of getting exactly six questions right. Work out the chance of getting exactly seven questions right. Exactly eight questions. Exactly nine. All ten. Add these chances up. You'll get to a number slightly below 0.377. That is, Mary Lou would have just under a 37.7 percent chance of passing. The answer's right and it's easy to understand how it's right. The only drawback is it's a lot of calculating to get there. So here's the conceptually harder but faster way. It works because the problem says Mary Lou is as likely to get a problem wrong as right. So she's as likely to get exactly ten questions right as exactly ten wrong. And as likely to get at least nine questions right as at least nine wrong. To get at least eight questions right as at least eight wrong. You see where this is going: she's as likely to get at least six right as to get at least six wrong. There's exactly three possibilities for a ten-question assignment like this. She can get four or fewer questions right (six or more wrong). She can get exactly five questions right. She can get six or more questions right. The chance of the first case and the chance of the last have to be the same. So, take 1 — the chance that one of the three possibilities will happen — and subtract the chance she gets exactly five problems right, which is a touch over 24.6 percent. So there's just under a 75.4 percent chance she does not get exactly five questions right. It's equally likely to be four or fewer, or six or more. Just-under-75.4 divided by two is just under 37.7 percent, which is the chance she'll pass as the problem's given. It's trickier to see why that's right, but it's a lot less calculating to do. That's a common trade-off. Ruben Bolling's Super-Fun-Pak Comix rerun for the 23rd is an aptly titled installment of A Million Monkeys At A Million Typewriters. It reminds me that I don't remember if I'd retired the monkeys-at-typewriters motif from Reading the Comics collections. If I haven't I probably should, at least after making a proper essay explaining what the monkeys-at-typewriters thing is all about. Ted Shearer's Quincy from the 28th of February, 1978 reveals to me that pocket calculators were a thing much earlier than I realized. Well, I was too young to be allowed near stuff like that in 1978. I don't think my parents got their first credit-card-sized, solar-powered calculator that kind of worked for another couple years after that. Kids, ask about them. They looked like good ideas, but you could use them for maybe five minutes before the things came apart. Your cell phone is so much better.
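Back to the FoxTrot problem for a moment: both routes to that 37.7 percent figure are easy to confirm by brute force. A minimal sketch:

```python
from math import comb

# Chance of at least 6 right answers out of 10 fair (p = 1/2) questions.
tail = sum(comb(10, k) for k in range(6, 11)) / 2**10
print(tail)                           # 0.376953125 -- just under 37.7%

# The symmetry shortcut: (1 - P(exactly 5 right)) / 2 gives the same number.
print((1 - comb(10, 5) / 2**10) / 2)  # 0.376953125
```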
Bill Watterson’s Calvin and Hobbes rerun for the 24th can be classed as a resisting-the-word-problem joke. It’s so not about that, but who am I to slow you down from reading a Calvin and Hobbes story?

Garry Trudeau’s Doonesbury rerun for the 24th started a story about high school kids and their bad geography skills. I rate it as qualifying for inclusion here because it’s a mathematics teacher deciding to include more geography in his course. I was amused by the week’s jokes anyway. There’s no hint given what mathematics Gil teaches, but given the links between geometry, navigation, and geography there is surely something that could be relevant. It might not help with geographic points like which states are in New England and where they are, though.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 24th is built on a plot point from Carl Sagan’s science fiction novel Contact. In it, a particular “message” is found in the digits of π. (By “message” I mean a string of digits that are interesting to us. I’m not sure that you can properly call something a message if it hasn’t got any sender and if there’s not obviously some intended receiver.) In the book this is an astounding thing because the message can’t be an accident; any reasonable explanation for how it could be there is impossible. But short “messages” are going to turn up in π also, as per the comic strip. I assume the peer review would correct the cartoon mathematicians’ unfortunate spelling of understanding.

## Reading the Comics, April 15, 2017: Extended Week Edition

It turns out last Saturday only had the one comic strip that was even remotely on point for me. And it wasn’t very on point either, but since it’s one of the Creators.com strips I’ve got the strip to show. That’s enough for me.

Henry Scarpelli and Craig Boldman’s Archie for the 8th is just about how algebra hurts. Some days I agree.

Ruben Bolling’s Super-Fun-Pak Comix for the 8th is an installment of They Came From The Third Dimension. “Dimension” is one of those oft-used words that’s come loose of any technical definition. We use it in mathematics all the time, at least once we get into Introduction to Linear Algebra. That’s the course that talks about how blocks of space can be stretched and squashed and twisted into each other. You’d expect this to be a warmup act to geometry, and I guess it’s relevant. But where it really pays off is in studying differential equations and how systems of stuff change over time. When you get introduced to dimensions in linear algebra they describe degrees of freedom, or how much information you need about a problem to pin down exactly one solution. It does give mathematicians cause to talk about “dimensions of space”, though, and these are intuitively at least like the two- and three-dimensional spaces that, you know, stuff moves in. That there could be more dimensions of space, ordinarily inaccessible, is an old enough idea that we don’t really notice it. Perhaps it’s hidden somewhere too.

Amanda El-Dweek’s Amanda the Great of the 9th started a story with the adult Becky needing to take a mathematics qualification exam. It seems to be a prerequisite to enrolling in some new classes. It’s a typical set of mathematics anxiety jokes in the service of a story comic. One might tsk Becky for going through university without ever having a proper mathematics class, but then, I got through university without ever taking a philosophy class that really challenged me.
Not that I didn’t take the classes seriously, but that I took stuff like Intro to Logic that I was already conversant in. We all cut corners. It’s a shame not to use chances like that, but there’s always so much to do.

Mark Anderson’s Andertoons for the 10th relieves the worry that Mark Anderson’s Andertoons might not have got in an appearance this week. It’s your common kid at the chalkboard sort of problem, this one a kid with no idea where to put the decimal. As always happens, I’m sympathetic. The rules about where to move decimals in this kind of multiplication come out really weird if the last digit, or worse, digits in the product are zeroes.

Mel Henze’s Gentle Creatures is in reruns. The strip from the 10th is part of a story I’m so sure I’ve featured here before that I’m not even going to look up when it aired. But it uses your standard story problem to stand in for science-fiction gadget mathematics calculation.

Dave Blazek’s Loose Parts for the 12th is the natural extension of sleep numbers. Yes, I’m relieved to see Dave Blazek’s Loose Parts around here again too. Feels weird when it’s not.

Bill Watterson’s Calvin and Hobbes rerun for the 13th is a resisting-the-story-problem joke. But Calvin resists so very well.

John Deering’s Strange Brew for the 13th is a “math club” joke featuring horses. Oh, it’s a big silly one, but who doesn’t like those too?

Dan Thompson’s Brevity for the 14th is one of the small set of punning jokes you can make using mathematician names. Good for the wall of a mathematics teacher’s classroom.

Shaenon K. Garrity and Jeffrey C. Wells’s Skin Horse for the 14th is set inside a virtual reality game. (This is why there’s talk about duplicating objects.) Within the game, the characters are playing that game where you start with a set number (in this case 20) of tokens and take turns removing a couple of them. The “rigged” part of it is that the house can, by perfect play, force a win every time. It’s a bit of game theory that creeps into recreational mathematics books and that I imagine is imprinted in the minds of people who grow up to design games.

## Reading the Comics, July 7, 2015: Carrying On The Streak Edition

I admit I’ve been a little unnerved lately. Between the A To Z project and the flood of mathematics-themed jokes from Comic Strip Master Command — and miscellaneous follies like my WordPress statistics-reading issues — I’ve had a post a day for several weeks now. The streak has to end sometime, surely, right? So it must, but not today. I admit the bunch of comics mentioning mathematical topics the past couple days was more one of continuing well-explored jokes rather than breaking new territory. But every comic strip is somebody’s first, isn’t it? (That’s an intimidating thought.)

Disney’s Mickey Mouse (June 6, rerun from who knows when) is another example of the word problem that even adults can’t do. I think it’s an interesting one for being also a tongue-twister. I tend to think of this sort of problem as a calculus question, but that’s surely just that I spend more time with calculus than with algebra or simpler arithmetic.

And then Disney’s Donald Duck (June 6 also, but probably a rerun from some other date) is a joke built on counting sheep. Might help someone practice their four-times table, too. I like the internal logic of this one. Maybe I just like sheep in comic strips.

Eric Teitelbaum and Bill Teitelbaum’s Bottomliners (June 6) is a bit of wordplay based on the idiom that figures will “add up” if they’re correct.
There are so many things one can do with figures, though, aren’t there? Surely something will be right.

Justin Thompson’s Mythtickle (June 6, again a rerun) is about the curious way that objects are mostly empty space. The first panel shows, on the alien’s chalkboard, legitimate equations from quantum mechanics. The first line describes (in part) a function called psi that describes where a particle is likely to be found over time. The second and third lines describe how the probability distribution — where a particle is likely to be found — changes over time.

Doug Bratton’s Pop Culture Shock Therapy (July 7) just name-drops mathematics as something a kid will do badly in. In this case the kid is Calvin, from Calvin and Hobbes. While it’s true he did badly in mathematics, I suspect that’s because it’s so easy to fit an elementary-school arithmetic question and a wrong answer in a single panel.

The idea of mathematics as a way to bludgeon people into accepting your arguments must have caught someone’s imagination over at the Parker studios. Jeff Parker’s The Wizard of Id for July 7 uses this joke, just as Mason Mastroianni, Mick Mastroianni, and Perri Hart’s B.C. did back on June 19th. (Both comic strips were created by the prolific Johnny Hart. I was surprised to learn they’re not still drawn and written by the same teams.) As I mentioned at the time, smothering people beneath mathematical symbols is logically fallacious. This is not to say it doesn’t work.

## Reading the Comics, January 24, 2015: Many, But Not Complicated Edition

I’m sorry to have fallen behind on my mathematics-comics posts, but I’ve been very busy wielding a cudgel at Microsoft IIS all week in the service of my day job. And since I telecommute it’s quite hard to convincingly threaten the server, however much it deserves it. Sorry. Comic Strip Master Command decided to send me three hundred billion gazillion strips, too, so this is going to be a bit of a long post.

Jenny Campbell’s Flo and Friends (January 19) is almost a perfect example of the use of calculus as a signifier of “something really intelligent people think of”. Which is flattering to mathematicians, certainly, although I worry that attitude does make people freeze up in panic when they hear that they have to take calculus.

The Amazing Yet Tautological feature of Ruben Bolling’s Super-Fun-Pak Comix (January 19) lives up to its title, at least provided we are all in agreement about what “average” means. From context this seems to be the arithmetic mean — that’s usually what people, mathematicians included, mean by “average” if they don’t specify otherwise — although you can produce logical mischief by slipping in an alternate average, such as the “median” — the amount that half the results are less than and half are greater than — or the “mode” — the most common result. There are other averages too, but they’re not so often useful.

On the 21st Super-Fun-Pak Comix returned with another installment of Chaos Butterfly, by the way.

## Reading the Comics, September 15, 2014: Are You Trying To Overload Me Edition

One of the little challenges in writing about mathematics-themed comics is one of pacing: how often should I do a roundup? Posting weekly, say, helps figure out a reasonable posting schedule for those rare moments when I’m working ahead of deadline, but that leaves the problem of weeks that just don’t have anything. Waiting for a certain number of comics before writing about them seems more reasonable, but then I have to figure how many comics are enough.
I’ve settled into five-to-six as my threshold for a new post, but that can mean I have weeks where it seems like I’m doing nothing but comic strip posts. And then there are conditions like this one where Comic Strip Master Command had its cartoonists put up just enough that I’d started composing a fresh post, and then tossed in a whole bunch more the next day. It’s like they’re trying to shake me by having too many strips to write about. I’d have thought they’d be flattered to have me writing about them so.

Bud Blake’s Tiger (September 11, rerun) mentions Tiger as studying the times tables and points out the difference between studying a thing and learning it.

Mark Anderson’s Andertoons (September 12) belongs to that vein of humor about using technology words to explain stuff to kids. I admit I’m vague enough on the concept of mashups that I can accept that it might be a way of explaining addition, but it feels like it might also be a way of describing multiplication or for that matter the composition of functions. I suppose the kids would be drawn as older in those cases, though.

Bill Amend’s FoxTrot (September 13, rerun) does a word problem joke, but it does have the nice beat in the penultimate panel of Paige running a sanity check and telling at a glance that “two dollars” can’t possibly be the right answer. Sanity checks are nice things to have; they don’t guarantee against making mistakes, but they at least provide some protection against the easiest mistakes, and having some idea of what an answer could plausibly be might help in working out the answer. For example, if Paige had absolutely no idea how to set up equations for this problem, she could reason that the apple and the orange have to cost something from 1 to 29 cents, and could try out prices until finding something that satisfies both requirements. This is an exhausting method, but it would eventually work, too, and sometimes “working eventually” is better than “working cleverly”.

Bill Schorr’s The Grizzwells (September 13) starts out by playing on the fact that “yard” has multiple meanings; it also circles around one of those things that distinguishes word problems from normal mathematics. A word problem, by convention, normally contains exactly the information needed to solve what’s being asked — there’s neither useless information included nor necessary information omitted, except if the question-writer has made a mistake. In a real world application, figuring out what you need, and what you don’t need, is part of the work, possibly the most important part of the work. So to answer how many feet are in a yard, Gunther (the bear) is right to ask more questions about how big the yard is, as a start.

Steve Kelley and Jeff Parker’s Dustin (September 14) is about one of the applications for mental arithmetic that people find awfully practical: counting the number of food calories that you eat. Ed’s point about it being convenient to have food servings be nice round numbers, as they’re easier to work with, is a pretty good one, and it’s already kind of accounted for in food labelling: it’s permitted (in the United States) to round off calorie counts to the nearest ten or so, on the rather sure grounds that if you are counting calories you’d rather add 70 to the daily total than 68 or 73.
Don’t read the comments thread, which includes the usual whining about the Common Core and the wild idea that mental arithmetic might be well done by working out a calculation that’s close to the one you want but easier to do, and then refining it to get the accuracy you need.

Mac and Bill King’s Magic In A Minute kids’ activity panel (September 14) presents a magic trick that depends on a bit of mental arithmetic. It’s a nice stunt, although it is certainly going to require kids to practice things because, besides dividing numbers by 4, it also requires adding 6, and that’s an annoying number to deal with. There’s also a nice little high school algebra problem to be done in explaining why the trick works.

Bill Watterson’s Calvin and Hobbes (September 15, rerun) includes one of Hobbes’s brilliant explanations of how arithmetic works, and if I haven’t wasted the time spent memorizing the strips where Calvin tries to do arithmetic homework, then Hobbes follows up tomorrow with imaginary numbers. Can’t wait.

Jef Mallett’s Frazz (September 15) expresses skepticism about a projection being made for the year 2040. Extrapolations and interpolations are a big part of numerical mathematics, and there are fair grounds to be skeptical: even having a model of whatever your phenomenon is that accurately matches past data isn’t a guarantee that there isn’t some important factor that’s been trivial so far but will become important and will make the reality very different from the calculations. But that hardly makes extrapolations useless: for one, the fact that there might be something unknown which becomes important is hardly a guarantee that there is. If the modelling is good and the reasoning sound, what else are you supposed to use for a plan? And of course you should watch for evidence that the model and the reality aren’t too very different as time goes on.

Gary Wise and Lance Aldrich’s Real Life Adventures (September 15) describes mathematics as “insufferable and enigmatic”, which is a shame, as mathematics hasn’t said anything nasty about them, now has it?
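On the Frazz extrapolation point above, here is a minimal sketch of what such a projection amounts to. The years and values are invented purely for illustration; the point is how little machinery stands between "past data" and "a 2040 figure", and how much weight the modelling assumption carries:

```python
import numpy as np

# Hypothetical past observations (invented data, for illustration only).
years = np.array([2000, 2005, 2010, 2015])
values = np.array([3.1, 3.9, 4.6, 5.5])

# Fit a straight line to the past...
slope, intercept = np.polyfit(years, values, 1)

# ...and extrapolate it to 2040. The arithmetic is trivial; the weak point
# is the assumption that the trend stays linear for another 25 years.
projection = slope * 2040 + intercept
print(round(projection, 2))
```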
# Quark Matter 2018

13-19 May 2018, Venice, Italy (Europe/Zurich timezone)

## Two-particle correlations in azimuthal angle and pseudorapidity in Be+Be collisions at SPS energies

15 May 2018, 17:00, 2h 40m — First floor and third floor (Palazzo del Casinò)

Poster — Correlations and fluctuations

### Speaker

Bartosz Maksiak (Warsaw University of Technology (PL))

### Description

The NA61/SHINE experiment aims to discover the critical point of strongly interacting matter and study the properties of the onset of deconfinement. These goals are to be achieved by performing a two-dimensional phase-diagram $(T-\mu_B)$ scan by measurements of hadron production properties in proton-proton, proton-nucleus and nucleus-nucleus interactions. Two-particle correlations in pseudorapidity and azimuthal angle will be presented for Be+Be interactions at beam momenta of 20, 30, 40, 75 and 150 GeV/c per nucleon. The NA61/SHINE results, corrected for detector inefficiencies, will be compared with the already published results from proton-proton collisions at similar beam momenta as well as to predictions of the EPOS model.

### Primary author

Bartosz Maksiak (Warsaw University of Technology (PL))
# Secant line slope vs derivative at endpoints

Let $f:\Bbb{R}\rightarrow \Bbb{R}$, with $f$ continuous and differentiable. When is it true for $a<b$ that

$$f'(a)\leq \frac{f(b)-f(a)}{b-a}\leq f'(b)?$$

The motivation for this was comparing average velocity, which is given by the slope of the secant line, to the instantaneous velocities at the endpoints, given by derivatives, in a 1-d motion problem.

So naturally, if $f'$ is increasing on $[a,b]$ we can apply the mean value theorem to say that there is an $x \in (a,b)$ such that $f'(x)=\cfrac{f(b)-f(a)}{b-a}$, and since $a<x<b$ and $f'$ is increasing, we're done. Are these conditions too strict? I suppose the more general question is: when should one expect either of the instantaneous quantities, or the secant slope, to be larger?

Just as an example with a function which is not increasing: $f(x)=2x^3-3x^2$, whereby $f'(0)=f'(1)=0$ whilst the secant line slope on $[0,1]$ is $-1$. In this case $f'$ fails to be increasing on the interval.

• We will need some sort of "global" condition such as your $f'(x)$ increasing, for $f'(a)$ and $f'(b)$ can be modified arbitrarily without affecting $f(b)-f(a)$ significantly. – André Nicolas Oct 5 '15 at 21:09
• Without any conditions, all three values can be picked at random, and a function $f$ exists which has them. Both of your examples are monotone, so there is still a relationship that can be established, but if you don't require a monotone function, it can start off heading in any direction, then turn around if needed to head for the other point. – Paul Sinclair Oct 5 '15 at 22:28
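For anyone who wants the two cases side by side numerically, here is a small sketch (NumPy for convenience; the helper name is mine). The first function has increasing $f'$, so the inequality holds; the second is the counterexample from the question:

```python
import numpy as np

def secant_slope(f, a, b):
    # Slope of the secant line through (a, f(a)) and (b, f(b)).
    return (f(b) - f(a)) / (b - a)

# Convex case: f(x) = exp(x) has increasing f', so f'(a) <= secant <= f'(b).
f, fp = np.exp, np.exp
a, b = 0.0, 1.0
print(fp(a), secant_slope(f, a, b), fp(b))  # 1.0 <= ~1.718 <= ~2.718

# The question's counterexample: f(x) = 2x^3 - 3x^2 on [0, 1].
g = lambda x: 2 * x**3 - 3 * x**2
gp = lambda x: 6 * x**2 - 6 * x
print(gp(0), secant_slope(g, 0, 1), gp(1))  # 0, -1.0, 0 -- the inequality fails
```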
# Importing database of 4 million rows into Pandas DataFrame

I am using the following code to import a database table into a DataFrame:

    import datetime as dt
    import pandas as pd
    import pandas.io.sql as psql

    def import_db_table(chunk_size, offset):
        dfs_ct = []
        j = 0
        start = dt.datetime.now()
        df = pd.DataFrame()
        while True:
            sql_ct = "SELECT * FROM my_table limit %d offset %d" % (chunk_size, offset)
            dfs_ct.append(psql.read_sql_query(sql_ct, connection))
            offset += chunk_size
            if len(dfs_ct[-1]) < chunk_size:
                break
            df = pd.concat(dfs_ct)
            # Convert columns to datetime
            columns = ['col1', 'col2', 'col3', 'col4', 'col5', 'col6', 'col7', 'col8',
                       'col9', 'col10', 'col11', 'col12', 'col13', 'col14', 'col15']
            for column in columns:
                df[column] = pd.to_datetime(df[column], errors='coerce')
            # Remove the uninteresting columns
            columns_remove = ['col42', 'col43', 'col67', 'col52', 'col39', 'col48',
                              'col49', 'col50', 'col60', 'col61', 'col62', 'col63',
                              'col64', 'col75', 'col80']
            for c in df.columns:
                if c in columns_remove:
                    df = df.drop(c, axis=1)
            j += 1
            print('{} seconds: completed {} rows'.format((dt.datetime.now() - start).seconds, j*chunk_size))
        return df

I am calling it with:

    df = import_db_table(100000, 0)

This seems to be very slow - it starts by importing 100000 rows in 7 seconds, but later, after 1 million rows, the number of seconds needed grows to 40-50 and more. Could this be improved somehow? I am using PostgreSQL, Python 3.5.

    7 seconds: completed 100000 rows
    17 seconds: completed 200000 rows
    30 seconds: completed 300000 rows
    47 seconds: completed 400000 rows
    69 seconds: completed 500000 rows
    92 seconds: completed 600000 rows
    121 seconds: completed 700000 rows
    153 seconds: completed 800000 rows
    188 seconds: completed 900000 rows
    228 seconds: completed 1000000 rows
    271 seconds: completed 1100000 rows
    318 seconds: completed 1200000 rows
    368 seconds: completed 1300000 rows
    422 seconds: completed 1400000 rows
    480 seconds: completed 1500000 rows
    540 seconds: completed 1600000 rows
    605 seconds: completed 1700000 rows
    674 seconds: completed 1800000 rows
    746 seconds: completed 1900000 rows

• Shouldn't you reset dfs_ct every iteration of the while loop? Otherwise it looks like you add all previously added entries as well as the next chunk. This would explain why it gets slower and slower... May 3 '17 at 14:10
• @Graipher I think you are right, but I couldn't figure out the way to do it. Could you advise how? May 3 '17 at 14:40
• The easiest way would be to just call the concat once, after the loop. Alternatively, write df_chunk = psql.read_sql_query(sql_ct, connection); # check for abort condition; df = pd.concat([df, df_chunk]) inside the loop. Doing it outside the loop will be faster (but will have a list of all chunk data frames in memory, just like your current code). Doing it inside the loop has the added overhead of calling the function every time, but only ever has one chunk in memory (and the total dataframe). May 3 '17 at 14:46

    def import_db_table(chunk_size, offset):

It doesn't look like you need to pass offset to this function. All it does is give you the functionality to read from a given row to the bottom. I would omit it, or at least give it a default value of 0. It also looks like you need connection as one of the variables.

    dfs_ct = []
    j = 0
    start = dt.datetime.now()
    df = pd.DataFrame()
    while True:
        sql_ct = "SELECT * FROM my_table limit %d offset %d" % (chunk_size, offset)
        dfs_ct.append(psql.read_sql_query(sql_ct, connection))
        offset += chunk_size
        if len(dfs_ct[-1]) < chunk_size:
            break

As written, the while loop should stop here. You can also get better performance by making a generator instead of a list out of the query results.
For example:

Code suggestions

    def generate_df_pieces(connection, chunk_size, offset=0):
        while True:
            sql_ct = "SELECT * FROM my_table limit %d offset %d" % (chunk_size, offset)
            df_piece = psql.read_sql_query(sql_ct, connection)
            # don't yield an empty data frame
            if not df_piece.shape[0]:
                break
            yield df_piece
            # don't make an unnecessary database query
            if df_piece.shape[0] < chunk_size:
                break
            offset += chunk_size

Then you can call:

    df = pd.concat(generate_df_pieces(connection, chunk_size, offset=offset))

The function pd.concat can take a sequence. Making the sequence be a generator like this is more efficient than growing a list, as you don't need to keep more than one df_piece in memory until you actually make them into the final, larger one.

    df = pd.concat(dfs_ct)

You're resetting the entire dataframe each time and rebuilding it anew from the whole list! If this were outside of the loop it would make sense.

    # Convert columns to datetime
    columns = ['col1', 'col2', 'col3', 'col4', 'col5', 'col6', 'col7', 'col8',
               'col9', 'col10', 'col11', 'col12', 'col13', 'col14', 'col15']
    for column in columns:
        df[column] = pd.to_datetime(df[column], errors='coerce')
    # Remove the uninteresting columns
    columns_remove = ['col42', 'col43', 'col67', 'col52', 'col39', 'col48',
                      'col49', 'col50', 'col60', 'col61', 'col62', 'col63',
                      'col64', 'col75', 'col80']
    for c in df.columns:
        if c in columns_remove:
            df = df.drop(c, axis=1)

This part could be done in the loop / generator function or outside. Dropping columns is a good thing to place inside, as then the big dataframe you build won't ever need to be larger than you want. If you're able to put only the columns you want in the SQL query, that would be even better, as it would be less to send over the connection.

Another point to make about df.drop is that by default it makes a new dataframe. So use inplace = True so you don't copy your huge dataframe. And it also accepts a list of columns to be dropped:

Code suggestions

    df.drop(columns_remove, inplace=True, axis=1)

gives the same result without looping and copying df over and over. You can also use:

    columns_remove_numbers = [ ... ]  # list the column numbers
    columns_remove = df.columns[columns_remove_numbers]

So you don't have to type all those strings.

    j += 1

• Great, thanks for this. I am not really sure where I should call this: df = pd.concat(generate_df_pieces(connection, chunk_size, offset=offset)). Inside the generate_df_pieces method or outside? If inside, isn't it a recursive function? Also, if I do that with generators, when I try to apply some pandas operations on a generated dataframe, I get errors that the functions don't exist, since I am not dealing with a pandas DataFrame but a generator. May 8 '17 at 14:59
• Put that outside the function, you don't want it to be recursive. I tried to line up the indentation of the code so it would fit together. As for getting an error, what version of pandas do you have? It looks like support for generators in pd.concat was added in 0.15.2.
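One more possibility, offered as a sketch rather than a tested solution: pandas can do the chunking itself. pd.read_sql_query accepts a chunksize argument and returns an iterator of DataFrames, and issuing a single query also sidesteps the LIMIT/OFFSET pattern, which makes PostgreSQL scan and discard ever more rows as the offset grows (a likely contributor to the slowdown in the timings above). Table name as in the question; the connection object is assumed to exist, and whether rows actually stream or arrive all at once depends on the database driver:

```python
import pandas as pd

def import_db_table_chunked(connection, chunk_size=100000):
    # One query; pandas hands back the result in DataFrame-sized pieces.
    chunks = pd.read_sql_query("SELECT * FROM my_table", connection,
                               chunksize=chunk_size)
    # Concatenate the pieces once, at the end.
    return pd.concat(chunks, ignore_index=True)
```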
For teaching purposes, the simplest example (which I use frequently in a first course in linear algebra) is a generic sub-vector space of $\mathbb{R}^n$. Any vector plane in $3$-space that is not a coordinate plane works.
### How can I install a .cls file with MikTeX? [duplicate]

Possible Duplicate: How can I manually install a package on MiKTeX (Windows) I am new to LaTeX, but need to write up a report pretty soon. Anyway, the instruction for authors state that I ...

Possible Duplicate: How can I manually install a package on MiKTeX (Windows) I am writing this because I need a more efficient way to use the acmsmall class. The problem is when I add the ...

### How do I install mbboard package (MikTeX, Windows)?

This didn't help me much, 'cause this package has MF fonts and I don't have an idea to where should they go. Any help appreciated.

### Latex Standalone always rebuilds / apostroph in filename?

I have started toying around with Martin Scharrer's standalone package because tikz external does not work with todonotes and I feel that the workflow for standalone makes more sense as far as I have ...

### Where should I insert the titlesec package? [closed]

I am writing a thesis and I want the chapter and its heading right aligned. On searching in TEX I got an answer to download the titlesec package. I downloaded it but I don't know where to insert the ...

I have downloaded a template named moderncv but don't know how to use it. As I am a beginner, please suggest me the pdf's describing the solution.

I am not the best when it comes to working with programs like LaTeX and I am unable to open a .tex document, or rather 'LaTeX it' since it keeps coming up with this error. I have downloaded ...

### Wanting to write units properly [duplicate]

Could anyone explain how I write units properly in LaTeX? In the form kg.s^-1. They come out all slanted and squished together when i use math mode and \frac.

### Enumerate Package

I'm trying to write an exam paper using LaTeX. I use the enumerate package, e.g. \begin{enumerate} \item This question is about balloons. \begin{enumerate} \item What shape are balloons? ...

This question is a follow-up question in regards to this topic: Error: File 'pdfpages.sty' not found The following is just a copy paste of my question asked in that topic: I am so so ...

### How can I install MiKTeX packages that are not found in the Package Manager? [duplicate]

Possible Duplicate: How can I manually install a package on MikTex (Windows) I am new to LaTeX and looking to convert my undergraduate thesis to LaTeX format for practice. This community ...

### How do I install mtpro2?

I have been unsuccessful in getting the mtpro2 package to work. I am running WinEdt 7 and MiKTeX 2.9 in Windows 7. I'm about to give up on it, but I really want the \widehat feature to produce ...

### Kantlipsum installation process

Why for the kantlipsum package the process below is required to install it, instead of the normal positioning of the file .sty like for the other packages? Are there some advantages? To install the ...
# Neutrino mixing

1. Sep 13, 2010

### thoms2543

Can anybody explain what the difference is between a neutrino flavour state and a neutrino mass eigenstate? Getting confused by it again......

2. Sep 13, 2010

### mathman

3. Oct 16, 2010

### Xia Ligang

It is also hard for me to understand their exact meanings. "Flavor" eigenstates label their roles participating in various interactions. For example, W bosons couple electrons with electron neutrinos, not muon neutrinos. And "mass" eigenstates determine their evolution with time. But still, it is very abstract. For example, a free electron neutrino will oscillate into a muon neutrino or tauon neutrino or itself with time. But I can't see how it oscillates. I think there should be some external field, which combines with these "free" neutrinos, forming an "energy" eigenstate. But it is also a profound problem.

4. Oct 16, 2010

### thoms2543

Hmmm.... a flavor state is an unphysical field, i.e., one with no definite mass; a mass eigenstate is a physical field, i.e., one with definite mass. Does the wave function or spinor contain any information about their mass to distinguish them?

5. Oct 16, 2010

### Parlyne

The mass states are the actual physical neutrino states which remain diagonal under evolution by the free Hamiltonian. The flavor states are the superpositions of mass states which have charged current interactions with the respective charged leptons. Because neutrinos interact so weakly and have such small mass differences, a superposition of neutrino mass states can retain quantum coherence over astrophysical (and possibly even cosmological) distances. However, the small differences in mass mean that the free evolution of the different mass states will lead to energy- and distance-dependent phase differences between the eigenstates, changing both the overall phase and the relative phases of the coefficients in the superposition. This, then, is how neutrino "flavors" change.

6. Oct 17, 2010

### Xia Ligang

"... The flavor states are the superpositions of mass states which have charged current interactions with the respective charged leptons. ..."

Here I have a question. Which states have charged current interactions, flavor eigenstates or mass eigenstates? If we use the former, it is OK. But if we use the latter, we have to multiply by $U_{\alpha i}$'s at each vertex, which is like dealing with quarks using the CKM matrix. (Sorry, I don't know how to insert mathematical symbols here!)

7. Oct 17, 2010

### Parlyne

The flavor states have diagonal charged current interactions with their respective charged leptons. However, it would be more physical to use the mass states and a mixing matrix element (in analogy to the quarks).

8. Oct 17, 2010

### Xia Ligang

Mass eigenstates correspond to diagonal elements in the "free" Hamiltonian, while flavor eigenstates correspond to diagonal elements in the "interaction" part. Could we combine mass eigenstates and flavor eigenstates to construct an eigenstate of the "whole" Hamiltonian? Maybe I should go back to the beginning.
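On the "how it oscillates" question in this thread, the standard two-flavor vacuum result may help; it is quoted here as a textbook illustration, not as something from the original posts. No external field is needed: the two mass eigenstates in the superposition evolve freely with slightly different phases, and the interference of those phases changes the flavor content. For mixing angle $\theta$ and mass-squared difference $\Delta m^2 = m_2^2 - m_1^2$, the appearance probability after a distance $L$ at energy $E$ is

$$P(\nu_\alpha \to \nu_\beta) = \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2 L}{4E}\right) \simeq \sin^2(2\theta)\,\sin^2\!\left(1.27\,\frac{\Delta m^2[\mathrm{eV}^2]\, L[\mathrm{km}]}{E[\mathrm{GeV}]}\right).$$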
# Single Variable Subtraction Equations

Steve and his family are going on a mini-vacation, a weekend getaway to the beach. Steve and his two sisters are excited about staying at a hotel on the beach. When they arrive at the hotel, Steve’s father, Mr. Richards, goes to the registration desk to check for his reservation. The manager tells him it will cost \$288 after he receives a 10% discount on the two-room family suite. Mr. Richards thanks the manager for the discount. How can Mr. Richards figure out how much the suite cost before the discount?

In this concept, you will learn to solve single-variable subtraction equations.

### Solving Single-Variable Subtraction Equations

You can use inverses to solve single-variable subtraction equations. The goal is to get the variable alone on one side of the equals sign, and using the inverse of subtraction, which is addition, can help you do this. Keep in mind that what you do to one side of the equation you have to do to the other side.

Here is an equation.

\begin{align*}x - 12 = 40\end{align*}

To solve this equation, you can use the inverse of subtraction (addition) and add 12 to both sides of the equation. That will help to get the variable alone and solve the problem.

First, identify the number being subtracted from the variable.

-12

Next, using the inverse of -12, add 12 to both sides of the equation.

\begin{align*}\begin{array}{rcl} && x-12\ = \ \ 40\\ && \underline{\quad + 12 \quad \ +12}\\ && \ x-0 \ \ = \ \ 52 \end{array}\end{align*}

The plus 12 cancels the minus 12 on the left side of the equation, leaving only \begin{align*}x\end{align*}. On the other side of the equation, 12 and 40 are added to get 52.

The answer is \begin{align*}x = 52\end{align*}.

To check this answer, substitute it back into the original problem and see if the statement is true.

\begin{align*}\begin{array}{rcl} x-12 &=& 40\\ 52-12 &=& 40\\ 40 &=& 40 \end{array}\end{align*}

The answer is true.

Sometimes, in single-variable subtraction equations, the variable will be subtracted from a number. Take a look at the equation below.

\begin{align*}32 - x = 7\end{align*}

One way to solve this equation without getting into positive and negative numbers is to simply turn it into a single-variable addition equation by using the inverse of subtraction and adding \begin{align*}x\end{align*} to both sides.

First, since the variable \begin{align*}x\end{align*} is being subtracted, add \begin{align*}x\end{align*} to both sides.

\begin{align*}\begin{array}{rcl} && 32 - x = 7\\ && \ \underline{\;\;\;\;\; +x \quad \ \ +x}\\ && \quad \ \ 32 = 7 + x \end{array}\end{align*}

Now, you have a single-variable addition equation.

\begin{align*}x + 7 = 32\end{align*}

Next, solve the addition equation by using the inverse of addition and subtracting 7 from both sides.

\begin{align*}\begin{array}{rcl} && x + 7 = 32\\ && \underline{\;\;\;\; - 7 \quad -7}\\ && \qquad x = 25 \end{array}\end{align*}

The answer is \begin{align*}x = 25\end{align*}.

Then, check your answer by substituting 25 for \begin{align*}x\end{align*} in the original equation.
\begin{align*}\begin{array}{rcl} 32 - x & = & 7\\ 32 - 25 & = & 7\\ 7 & = & 7 \end{array}\end{align*}

The answer checks out.

### Examples

#### Example 1

Earlier, you were given a problem about Steve and his family’s weekend beach getaway.

Mr. Richards wants to know the usual cost of the suite if he paid \$288 after a \$32 discount.

First, write an equation for the situation. The total cost (\begin{align*}x\end{align*}) minus \$32 (the discount) is \$288. Or

\begin{align*}x - 32 = 288\end{align*}

Next, identify the number being subtracted from the variable.

-32

Then, using the inverse of -32, add 32 to both sides of the equation.

\begin{align*}\begin{array}{rcl} && x - 32 = 288\\ && \underline{\;\;\;\; +32 \ \ +32}\\ && \ \ x - 0 = 320 \end{array}\end{align*}

The answer is \begin{align*}x = \$320\end{align*}.

To check this answer, substitute \$320 back into the original problem and see if the statement is true.

\begin{align*}\begin{array}{rcl} x -32 &=& 288\\ 320 - 32 &=& 288\\ 288 &=& 288 \end{array}\end{align*}

The room cost \$320 before the discount.

#### Example 2

Solve the following equation.

\begin{align*}y- 21 = 59\end{align*}

First, identify the number being subtracted from the variable.

-21

Next, using the inverse of -21, add 21 to both sides of the equation.

\begin{align*}\begin{array}{rcl} && y - 21 = 59\\ && \underline{\;\;\;\; +21 \ \ + 21}\\ && \ \ y - 0 = 80 \end{array}\end{align*}

The answer is \begin{align*}y = 80\end{align*}.

To check this answer, substitute it back into the original problem and see if the statement is true.

\begin{align*}\begin{array}{rcl} y - 21 &=& 59\\ 80 - 21 &=& 59\\ 59 &=& 59 \end{array}\end{align*}

Solve each equation and write your answer in the format: \begin{align*}x = \underline{\;\;\;\;\;\;\;\;\;\;}\end{align*}.

#### Example 3

\begin{align*}x-9 = 22\end{align*}

First, identify the number being subtracted from the variable.

-9

Next, using the inverse of -9, add 9 to both sides of the equation.

\begin{align*}\begin{array}{rcl} && \ \ x - 9 = 22\\ &&\ \ \underline{ \ \ +9 \ \ \ +9}\\ && \ \ x - 0 = 31 \end{array}\end{align*}

The answer is \begin{align*}x = 31\end{align*}.

To check this answer, substitute it back into the original problem and see if the statement is true.

\begin{align*}\begin{array}{rcl} x - 9 &=& 22\\ 31 - 9 &=& 22\\ 22 &=& 22 \end{array}\end{align*}

#### Example 4

\begin{align*}x-3 = 46\end{align*}

First, identify the number being subtracted from the variable.

-3

Next, using the inverse of -3, add 3 to both sides of the equation.

\begin{align*}\begin{array}{rcl} && \ \ x - 3 = 46\\ && \ \ \underline{\ \ +3 \ \ \ +3}\\ && \ \ x - 0 = 49 \end{array}\end{align*}

The answer is \begin{align*}x = 49\end{align*}.

To check this answer, substitute it back into the original problem and see if the statement is true.

\begin{align*}\begin{array}{rcl} x -3 &=& 46\\ 49 - 3 &=& 46\\ 46 &=& 46 \end{array}\end{align*}

#### Example 5

\begin{align*}x- 7 = 23\end{align*}

First, identify the number being subtracted from the variable.

-7

Next, using the inverse of -7, add 7 to both sides of the equation.

\begin{align*}\begin{array}{rcl} && \ \ x - 7 = 23\\ &&\ \ \underline{ \ \ +7 \ \ \ +7}\\ && \ \ x - 0 = 30 \end{array}\end{align*}

The answer is \begin{align*}x = 30\end{align*}.

To check this answer, substitute it back into the original problem and see if the statement is true.

\begin{align*}\begin{array}{rcl} x - 7 &=& 23\\ 30 - 7 &=& 23\\ 23 &=& 23 \end{array}\end{align*}

### Review

Solve each single-variable subtraction problem using the inverse operation.
Write your answer in the form: \begin{align*}\text{variable} = \underline{\;\;\;\;\;\;\;\;\;\;}\end{align*}.

1. \begin{align*}y-5=10\end{align*}
2. \begin{align*}x-7=17\end{align*}
3. \begin{align*}a-4=12\end{align*}
4. \begin{align*}z-6=22\end{align*}
5. \begin{align*}y-9=11\end{align*}
6. \begin{align*}b-5=12\end{align*}
7. \begin{align*}x-8=30\end{align*}
8. \begin{align*}y-7=2\end{align*}
9. \begin{align*}x-9=1\end{align*}
10. \begin{align*}x-19=15\end{align*}
11. \begin{align*}x-18=12\end{align*}
12. \begin{align*}x-29=31\end{align*}
13. \begin{align*}x-15=62\end{align*}
14. \begin{align*}x-22=45\end{align*}
15. \begin{align*}x-19=37\end{align*}

To see the Review answers, open this PDF file and look for section 12.6.

### Vocabulary

Difference: The result of a subtraction operation is called a difference.

Expression: An expression is a mathematical phrase containing variables, operations and/or numbers. Expressions do not include comparative operators such as equal signs or inequality symbols.

Simplify: To simplify means to rewrite an expression to make it as "simple" as possible. You can simplify by removing parentheses, combining like terms, or reducing fractions.

Sum: The sum is the result after two or more amounts have been added together.
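For readers who like to check their work with a computer, here is a small sketch of the lesson's procedure; the function name is made up for this example:

```python
def solve_subtraction_equation(a, b):
    # Solve x - a = b by adding a to both sides: x = b + a.
    x = b + a
    # Check by substituting back into the original equation.
    assert x - a == b
    return x

print(solve_subtraction_equation(5, 10))    # problem 1: y - 5 = 10 -> y = 15
print(solve_subtraction_equation(32, 288))  # hotel suite: x - 32 = 288 -> x = 320
```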
# What exactly do linear dependence and linear independence imply?

I have a very hard time remembering which is which between linear independence and linear dependence... that is, if I am asked to specify whether a set of vectors is linearly dependent or independent, I'd be able to find out whether $\vec{x}=\vec{0}$ is the only solution to $A\vec{x}=\vec{0}$, but I would be stuck guessing whether that means that the vectors are linearly dependent or independent. Is there a way that I can understand what the consequence of this trait is so that I can confidently answer such a question on a test?

Intuitively, vectors being linearly independent means they represent independent directions in your vector space, while linearly dependent vectors means they don't. So for example, suppose you have a set of vectors $\{x_1, ..., x_5\}$ and you walk some distance in the $x_1$ direction, then a different distance in $x_2$, then again in the direction of $x_3$. If in the end you are back where you started, then the vectors are linearly dependent (notice that I did not use all the vectors).

This is the intuition behind the notion, and you can make it into a definition, because in the above example, if we start at $0$ and walk $a_i$ in the $x_i$ direction, then the above paragraph says that $a_1x_1+a_2x_2+a_3x_3=0$. (This is how you should think of linear combinations: as directions to go, given by your vectors.)

Finally I will say that you should memorize the definitions. I've taught linear algebra to students for whom it was their first proof-based math class, and many students don't realize how important knowing the PRECISE definition is. Definitions are crucial, and changing one single word can completely change the meaning. So my advice when just starting out is that you should make flash cards of ALL definitions in your book and memorize them. Then, once you know them exactly, look at the examples after the definition in the book and see how the examples fit the definition.

The vectors are dependent ('they depend on one another') if there is some relation among them, in addition to the trivial all-zero relation that holds for any collection of vectors. So, dependent means there is some relation other than the all-zero one. Put differently: independent means that if you want a linear combination of the vectors to sum to the $0$ vector, you need each part of the combination independently to be $0$; thus each coordinate of the solution is $0$.

A broader perspective on linear dependence is the theory of relations in group theory. Roughly speaking, a relation is some equation satisfied by the elements of a group, e.g. $(ab)^{-1}=b^{-1}a^{-1}$; relations basically amount to declaring how group elements depend on each other. One useful convenience is that relations can always be put into the form "$\rm blah=identity~element$" by simply inverting one side over to the other, e.g. $ab=c\Leftrightarrow abc^{-1}=e$. Abelian groups (generally, modules) have additive group operations, so a relation would look like an equation $2a+b=3c$, or equivalently $2a+b-3c=0$. In particular a vector space is a module over a field, so instead of just integers we can have any field scalars involved in our equations with vectors.

Ultimately, a linear dependency is where vectors satisfy some linearity relationship with each other. Conversely, a set of vectors is linearly independent if they satisfy no linearity equation other than the obvious, trivial one involving only zeros (this case is uninteresting because it applies universally and so essentially says nothing of value).
So e.g. $2a+b=3c$ is impossible if $\{a,b,c\}$ is L.I.

• You went "Bill Dubuque" on this one, all the way! Quite enlightening. +1 – Pedro Jul 31, 2013 at 0:06

I've found the best way to understand this is as follows: a set of vectors is linearly dependent if you can write one of them in terms of the others. When you multiply a matrix by a vector, $A\vec{x}$, that's shorthand for "multiply each column of $A$ by the corresponding entry of $\vec{x}$, and then add them together." If the columns in $A$ are linearly dependent, then there's some $\vec{x}$ that will allow them to cancel, giving you the zero vector.

It is a long answer, but kindly bear with me.

To understand linear dependence and linear independence we first need to understand linear combination and span. I assume only two vectors in a 2D plane.

## Span

The span of two vectors V1 and V2 is the set of all their linear combinations. OR: the set of all possible vectors which can be reached through linear combinations of two vectors V1 and V2 is the span of those two vectors.

## What Is The Span Of A Single Vector

The span of a single vector is the set of all vectors which lie on a single line.

## Linear Dependence

Let's say we have two vectors in a 2D plane and they are collinear, that is, one of the vectors is redundant. It means one of the vectors is not adding anything to the span of the first vector. In such a case the two vectors are known as linearly dependent.

## Mathematical Definition of Linear Dependence

Let S be the set of vectors S = {V1, V2, V3, ..., Vn}. The set S is linearly dependent if and only if

C1V1 + C2V2 + C3V3 + ... + CnVn = zero vector

for some scalars Ci, at least one of which is non-zero. The condition for checking linear dependence: if c1 or c2 is non-zero, then the two vectors are linearly dependent.

## Linear Independence

If in a 2D plane the two vectors V1 and V2 are not collinear, then each vector increases the span of the other: with only one vector the span was just a single line, but with linear combinations of V1 and V2 we can reach every single vector in the 2D plane (the span of V1 and V2 is the whole 2D plane). It means that no vector is redundant. In such a case the two vectors are known as linearly independent.

## Mathematical Definition of Linear Independence

Let S be the set of vectors S = {V1, V2, V3, ..., Vn}. The set S is linearly independent if and only if

C1V1 + C2V2 + C3V3 + ... + CnVn = zero vector

holds only when every Ci is zero. The condition for checking linear independence: if c1 and c2 must both be zero, then the two vectors are linearly independent.

# But Why Do These Formulas Make Sense?

The conditions for checking linear dependence/independence basically check whether the two vectors in the 2D plane are collinear or not. Let's dive into it more deeply.

We know that to find a linear combination of two vectors we multiply the vectors by some scalars and add them. Since we equated our linear combination of V1 and V2 to the zero vector, we are basically asking: to reach the zero vector by a linear combination of V1 and V2, by which scalars do we need to multiply our vectors?

We got c1 = c2 = 0 in our example, which means the only way to reach the zero vector by a linear combination of V1 and V2 is to multiply those vectors by 0. This shows that the two vectors V1 and V2 do not lie on the same line, and hence they are linearly independent, because the only way to reach the zero vector by a linear combination of V1 and V2 is to scale both vectors by zero.
## Note:

If V1 and V2 were collinear, there would be infinitely many values of c1 and c2 through which we could reach the zero vector by a linear combination of the two vectors (scalings that make the two vectors point in opposite directions with the same magnitude).

I assumed that we are working in a 2D plane. The concept of linear dependence/independence also applies to higher dimensions, but this intuition of collinearity will not be applicable there. I hope it helps.

Consider three linearly independent vectors $X=(x,0,0), Y=(0,y,0), Z=(0,0,z)$ that make up a standard basis of $\mathbb{R^{3}}$. These vectors, by themselves, form the $xyz$-axes. If we took away the vector $(0,0,z)$, what linear combination of the vectors $X=(x,0,0), Y=(0,y,0)$, that is, $span(X,Y)= cX + kY$, would get us $Z$? None, because there is no combination $c(x,0,0)+ k(0,y,0) = (0,0,z)$, as $Z \notin span(X,Y)$. This is because $Z$, or the $z$-axis, is linearly independent from the $x$-axis and the $y$-axis, which fill out the $xy$-plane. Note that this is not formally how linear independence is defined, just a fairly intuitive way to visualize a set of linearly independent vectors.

A subset $S$ of a vector space is linearly independent if and only if for any distinct $\vec s_{1}, ... , \vec s_{n} \in S$ the only linear relationship among those vectors $$c_{1} \vec s_{1} + ... + c_{n} \vec s_{n} = \vec 0$$ with $c_{1},...,c_{n} \in \mathbb{R}$ is the trivial one: $c_{1}=0,...,c_{n}=0$.
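To tie the definitions back to the questioner's $A\vec{x}=\vec{0}$ test, here is a small sketch (NumPy; the helper name is mine): the columns of $A$ are linearly independent exactly when $\mathrm{rank}(A)$ equals the number of columns, i.e. when $\vec{x}=\vec{0}$ is the only solution of $A\vec{x}=\vec{0}$.

```python
import numpy as np

def columns_independent(A):
    # Columns of A are linearly independent iff rank(A) == number of columns,
    # i.e. iff A x = 0 forces x = 0.
    return np.linalg.matrix_rank(A) == A.shape[1]

# Independent: the standard basis directions of R^3.
print(columns_independent(np.eye(3)))  # True

# Dependent: the third column is the sum of the first two.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])
print(columns_independent(A))  # False
```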
# zbMATH — the first resource for mathematics

Reversible skew Laurent polynomial rings and deformations of Poisson automorphisms. (English) Zbl 1188.16022

The authors consider the skew Laurent polynomial ring $S=R[x^{\pm 1};\alpha]$, where $\alpha$ is an automorphism of $R$, and study involutions $\theta$ on $S$ such that $\theta(x)=x^{-1}$ and the restriction $\theta|_R$ is an involution $\gamma$ of $R$. They show that such a $\theta$ exists if and only if $\gamma\alpha\gamma^{-1}=\alpha^{-1}$, in which case they say that $\theta$ is a reversing automorphism and $S$ is a reversible skew Laurent polynomial ring. The concept of reversibility arises in dynamical systems and the theory of flows. The authors study invariants for reversing automorphisms and then apply their results to two principal examples: the localization at the powers of a normal element of the enveloping algebra of the two-dimensional non-Abelian Lie algebra, and the coordinate ring of the quantum torus. Both these rings are deformations of Poisson algebras over the base field $\mathbb{F}$, and in each case the ring of $\theta$-invariants is a deformation of the coordinate ring of a surface in $\mathbb{F}^3$ and is a factor of a deformation of $\mathbb{F}[x_1,x_2,x_3]$ for a Poisson bracket determined by the appropriate surface. Both deformations are examples of algebras determined by noncommutative potentials.

##### MSC:

16S36 Ordinary and skew polynomial rings and semigroup rings
16W20 Automorphisms and endomorphisms
17B63 Poisson algebras
16S80 Deformations of associative rings
16W22 Actions of groups and semigroups; invariant theory (associative rings and algebras)
16W10 Rings with involution; Lie, Jordan and other nonassociative structures
# Geometrical frustration

In condensed matter physics, the term geometrical frustration (or, in short, frustration[1]) refers to a phenomenon where atoms tend to stick to non-trivial positions or where, on a regular crystal lattice, conflicting inter-atomic forces (each one favoring rather simple, but different structures) lead to quite complex structures. As a consequence of the frustration in the geometry or in the forces, a plenitude of distinct ground states may result at zero temperature, and usual thermal ordering may be suppressed at higher temperatures. Much studied examples are amorphous materials, glasses, or dilute magnets.

The term frustration, in the context of magnetic systems, was introduced by Gerard Toulouse (1977).[2][3][4] Indeed, frustrated magnetic systems had been studied even before. Early work includes a study of the Ising model on a triangular lattice with nearest-neighbor spins coupled antiferromagnetically, by G. H. Wannier, published in 1950.[5] Related features occur in magnets with competing interactions, where both ferromagnetic and antiferromagnetic couplings between pairs of spins or magnetic moments are present, with the type of interaction depending on the separation distance of the spins. In that case incommensurate spin arrangements, such as helical ones, may result, as had been discussed originally, especially, by A. Yoshimori,[6] T. A. Kaplan,[7] R. J. Elliott,[8] and others, starting in 1959, to describe experimental findings on rare-earth metals. A renewed interest in such spin systems with frustrated or competing interactions arose about two decades later, beginning in the 1970s, in the context of spin glasses and spatially modulated magnetic superstructures. In spin glasses, frustration is augmented by stochastic disorder in the interactions, as may occur experimentally in non-stoichiometric magnetic alloys. Carefully analyzed spin models with frustration include the Sherrington-Kirkpatrick model,[9] describing spin glasses, and the ANNNI model,[10] describing commensurate and incommensurate magnetic superstructures.

## Magnetic ordering

Geometrical frustration is an important feature in magnetism, where it stems from the topological arrangement of spins. A simple 2D example is shown in Figure 1. Three magnetic ions reside on the corners of a triangle with antiferromagnetic interactions between them; the energy is minimized when each spin is aligned opposite to its neighbors. Once the first two spins align anti-parallel, the third one is frustrated because its two possible orientations, up and down, give the same energy. The third spin cannot simultaneously minimize its interactions with both of the other two. Since this effect occurs for each spin, the ground state is sixfold degenerate. Only the two states where all spins are up or all down have more energy.

Similarly in three dimensions, four spins arranged in a tetrahedron (Figure 2) may experience geometric frustration. If there is an antiferromagnetic interaction between spins, then it is not possible to arrange the spins so that all interactions between spins are antiparallel. There are six nearest-neighbor interactions, four of which are antiparallel and thus favourable, but two of which (between 1 and 2, and between 3 and 4) are unfavourable. It is impossible to have all interactions favourable, and the system is frustrated.
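The sixfold degeneracy described above is easy to verify by brute force. The following sketch enumerates all $2^3$ Ising states of the antiferromagnetic triangle, with the coupling set to $J = 1$ in arbitrary units:

```python
from itertools import product

# Antiferromagnetic Ising triangle: E = J * (s1*s2 + s2*s3 + s3*s1) with J > 0,
# so anti-parallel neighbour pairs lower the energy.
J = 1.0
energies = {}
for spins in product([-1, +1], repeat=3):
    s1, s2, s3 = spins
    energies[spins] = J * (s1 * s2 + s2 * s3 + s3 * s1)

ground = min(energies.values())
ground_states = [s for s, e in energies.items() if e == ground]
print(ground, len(ground_states))  # -1.0 and 6: the sixfold-degenerate ground state
# The two remaining states (all up, all down) sit higher, at energy +3J.
```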
Figure 1: Antiferromagnetically interacting spins in a triangular arrangement

Figure 2: Antiferromagnetically interacting spins in a tetrahedral arrangement

Geometrical frustration is also possible if the spins are arranged in a non-collinear way. If we consider a tetrahedron with a spin on each vertex pointing along the easy axis (that is, directly towards or away from the centre of the tetrahedron), then it is possible to arrange the four spins so that there is no net spin (Figure 3). This is exactly equivalent to having an antiferromagnetic interaction between each pair of spins, so in this case there is no geometrical frustration. With these axes, geometric frustration arises if there is a ferromagnetic interaction between neighbours, where energy is minimized by parallel spins. The best possible arrangement is shown in Figure 4, with two spins pointing towards the centre and two pointing away. The net magnetic moment points upwards, maximising ferromagnetic interactions in this direction, but left and right vectors cancel out (i.e. are antiferromagnetically aligned), as do forwards and backwards. There are three different equivalent arrangements with two spins out and two in, so the ground state is three-fold degenerate.

Figure 3: Spins along the easy axes of a tetrahedron

Figure 4: Frustrated easy spins in a tetrahedron

## Mathematical definition

The mathematical definition is simple (and analogous to the so-called Wilson loop in quantum chromodynamics): One considers, for example, expressions ("total energies" or "Hamiltonians") of the form

$\mathcal H=\sum_G\,-I_{k_\nu , k_\mu}\,\,S_{k_\nu}\cdot S_{k_\mu},$

where G is the graph considered, whereas the quantities $I_{k_\nu , k_\mu}$ are the so-called "exchange energies" between nearest-neighbours, which (in the energy units considered) assume the values $\pm 1$ (mathematically, this is a signed graph), while the $S_{k_\nu}\cdot S_{k_\mu}$ are inner products of scalar or vectorial spins or pseudo-spins. If the graph G has quadratic or triangular faces P, the so-called "plaquette variables" $P_W$, "loop-products" of the following kind, appear:

$P_W=I_{1,2}\,I_{2,3}\,I_{3,4}\,I_{4,1}$ resp. $P_W=I_{1,2}\,I_{2,3}\,I_{3,1},$

which are also called "frustration products". One then sums these products over all plaquettes. The result for a single plaquette is either +1 or −1. In the latter case the plaquette is "geometrically frustrated".

It can be shown that the result has a simple gauge invariance: it does not change – nor do other measurable quantities, e.g. the "total energy" $\mathcal H$ – even if locally the exchange integrals and the spins are simultaneously modified as follows:

$I_{i,k}\to\epsilon_i I_{i,k}\epsilon_k ,\quad S_i\to\epsilon_i S_i ,\quad S_k\to \epsilon_k S_k .$

Here the numbers $\epsilon_i$ and $\epsilon_k$ are arbitrary signs, i.e. = +1 or = −1, so that the modified structure may look totally random.

## Water ice

Figure 5: Scheme of water ice molecules

Although most previous and current research on frustration focuses on spin systems, the phenomenon was first studied in ordinary ice. In 1936 Giauque and Stout published The Entropy of Water and the Third Law of Thermodynamics. Heat Capacity of Ice from 15 K to 273 K, reporting calorimeter measurements on water through the freezing and vaporization transitions up to the high-temperature gas phase.
The entropy was calculated by integrating the heat capacity and adding the latent heat contributions; the low-temperature measurements were extrapolated to zero using Debye's then recently derived formula.[11] The resulting entropy, S1 = 44.28 cal/(K·mol) = 185.3 J/(mol·K), was compared to the theoretical result from the statistical mechanics of an ideal gas, S2 = 45.10 cal/(K·mol) = 188.7 J/(mol·K). The two values differ by S0 = 0.82 ± 0.05 cal/(K·mol) = 3.4 J/(mol·K). This discrepancy was then explained, to an excellent approximation, by Linus Pauling,[12] who showed that ice possesses a finite entropy (estimated as 0.81 cal/(K·mol), or 3.4 J/(mol·K)) at zero temperature due to the configurational disorder intrinsic to the protons in ice.

In the hexagonal or cubic ice phase the oxygen ions form a tetrahedral structure with an O-O bond length of 2.76 Å (276 pm), while the O-H bond length measures only 0.96 Å (96 pm). Every oxygen (white) ion is surrounded by four hydrogen (black) ions, and each hydrogen ion is surrounded by two oxygen ions, as shown in Figure 5. Maintaining the internal H2O molecule structure, the minimum-energy position of a proton is not half-way between two adjacent oxygen ions. There are two equivalent positions a hydrogen may occupy on the line of the O-O bond, a far and a near position. Thus a rule leads to the frustration of the proton positions in a ground-state configuration: for each oxygen, two of the neighboring protons must reside in the far position and two of them in the near position, the so-called "ice rules". Pauling proposed that the open tetrahedral structure of ice affords many equivalent states satisfying the ice rules.

Pauling went on to compute the configurational entropy in the following way: consider one mole of ice, consisting of N O^2− ions and 2N protons. Each O-O bond has two positions for a proton, leading to 2^(2N) possible configurations. However, among the 16 possible configurations associated with each oxygen, only 6 are energetically favorable, maintaining the H2O molecule constraint. An upper bound on the number of configurations the ground state can take is then estimated as Ω < 2^(2N)(6/16)^N. Correspondingly, the configurational entropy S0 = kB ln(Ω) = NkB ln(3/2) = 0.81 cal/(K·mol) = 3.4 J/(mol·K) is in amazing agreement with the missing entropy measured by Giauque and Stout. Although Pauling's calculation neglected both the global constraint on the number of protons and the local constraint arising from closed loops on the wurtzite lattice, the estimate was subsequently shown to be of excellent accuracy.

## Spin ice

Figure 6: Scheme of spin ice molecules

A mathematically analogous situation to the degeneracy in water ice is found in the spin ices. A common spin ice structure is shown in Figure 6: the cubic pyrochlore structure, with one magnetic atom or ion residing on each of the four corners of the tetrahedra. Due to the strong crystal field in the material, each of the magnetic ions can be represented by an Ising ground-state doublet with a large moment. This suggests a picture of Ising spins residing on the corner-sharing tetrahedral lattice with spins fixed along the local quantization axis, the <111> cubic axes, which coincide with the lines connecting each tetrahedral vertex to the center. Every tetrahedral cell must have two spins pointing in and two pointing out in order to minimize the energy. The spin ice model has been approximately realized by real materials, most notably the rare-earth pyrochlores Ho2Ti2O7, Dy2Ti2O7, and Ho2Sn2O7.
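Pauling's number is easy to verify numerically (a quick sketch, not part of the original article): per mole, S0 = N kB ln(3/2) = R ln(3/2), which can be compared directly against the calorimetric deficit and is also the residual entropy scale seen in the spin ice pyrochlores just mentioned.

```python
# Verify Pauling's configurational entropy estimate S0 = R * ln(3/2) and
# compare with the measured deficit of 0.82 +/- 0.05 cal/(K*mol).
import math

R = 8.314       # gas constant, J/(mol*K)
CAL = 4.184     # joules per calorie

s0 = R * math.log(3 / 2)
print(f"S0 = {s0:.2f} J/(mol*K) = {s0 / CAL:.2f} cal/(K*mol)")
# -> S0 = 3.37 J/(mol*K) = 0.81 cal/(K*mol)
```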
These materials all show nonzero residual entropy at low temperature.

## Extension of Pauling's model: General frustration

The spin ice model is only one subdivision of frustrated systems. The word frustration was initially introduced to describe a system's inability to simultaneously minimize the competing interaction energies between its components. In general, frustration is caused either by competing interactions due to site disorder (see also the Villain model[13]) or by lattice structure, such as in the triangular, face-centered cubic (fcc), hexagonal close-packed, tetrahedron, pyrochlore, and kagome lattices with antiferromagnetic interactions. Frustration is thus divided into two categories: the first corresponds to the spin glass, which has both disorder in structure and frustration in spin; the second is geometrical frustration, with an ordered lattice structure and frustration of spin. The frustration of a spin glass is understood within the framework of the RKKY model, in which the interaction property, either ferromagnetic or antiferromagnetic, depends on the distance between the two magnetic ions. Due to the lattice disorder in the spin glass, a spin of interest and its nearest neighbors can be at different distances and have different interaction properties, which thus leads to different preferred alignments of the spin.

## Artificial geometrically frustrated ferromagnets

Although many properties of spin ice materials have been studied experimentally, little has been revealed about the local accommodation of spins to frustration within the system, since individual spins cannot be probed without altering the state of the system. Fortunately, with the help of new nanometer-scale fabrication techniques, it is possible to fabricate nanometer-size magnetic islands analogous to those of the naturally occurring spin ice materials, and these can be probed without altering the moment configuration. In 2006, R. F. Wang et al. reported the discovery of an artificial geometrically frustrated magnet composed of arrays of lithographically fabricated single-domain ferromagnetic islands. These islands are arranged to create a two-dimensional analog to spin ice. As shown in Figure 7a, to mimic the frustration of spin ice, a two-dimensional analog is created by frustrated arrays consisting of square lattices, in which each vertex is formed by four ferromagnetic islands meeting at a point. For a pair of moments at one vertex, it is favorable to have one pointing in and the other pointing out, while it is unfavorable to have both pointing out or both pointing in, due to energy minimization (Figure 7b). For the four moments at one vertex, there are 16 possible configurations, as in Figure 7c. The lowest-energy vertex configurations are Types I and II, which have two moments pointing in toward the centre of the vertex and two pointing out. Types I and II account for 12.5% and 25% of the configurations, respectively. Using lithographically fabricated arrays, it is possible to engineer frustrated systems to alter the strength of interactions, the geometry of the lattice, the type and number of defects, and other properties which affect the nature of frustration. The lattice parameters range from 320 nm to 880 nm, with a fixed island size of 80 nm × 220 nm laterally and 25 nm thick, which is small enough for the magnetic moments to point lengthwise along the islands and big enough to be stable at 300 K.
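The 16-configuration count at a vertex is likewise easy to reproduce (an illustrative sketch, not from the original text); note that distinguishing Type I from Type II among the six two-in/two-out states requires the vertex geometry, which the simple count below deliberately omits.

```python
# Enumerate the 16 moment configurations at a square-ice vertex
# (+1 = pointing in, -1 = pointing out) and bin them by the number of
# inward moments; the 6 two-in/two-out states are the low-energy
# Type I and Type II vertices (2/16 = 12.5% and 4/16 = 25%).
from itertools import product
from collections import Counter

counts = Counter(sum(1 for m in config if m == 1)
                 for config in product((-1, 1), repeat=4))
for n_in, n in sorted(counts.items()):
    print(f"{n_in} moments in: {n} of 16 configurations")
# -> 2 moments in: 6 of 16 configurations, plus 1/4/4/1 for the other bins.
```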
Figure 8 shows AFM (atomic force microscopy) and MFM (magnetic force microscopy) images of the frustrated lattice. The black and white halves in Figure 8b indicate the north and south poles of each ferromagnetic island. From the MFM images, the moment configuration of the array can be easily determined, and the vertex types can be observed directly as described in Figure 7c: the pink vertex is Type I, the green vertex is Type III, and the blue vertex is Type II. Thus artificial spin ice is demonstrated. In this work on a square lattice of frustrated magnets, Wang et al. observed both ice-like short-range correlations and the absence of long-range correlations, just as in spin ice at low temperature. These results establish ground on which the real physics of frustration can be visualized and modeled with artificial geometrically frustrated magnets, and they inspire further research activity.

## Geometric Frustration without Lattice

Another type of geometrical frustration arises from the propagation of a local order. A main question that a condensed matter physicist faces is to explain the stability of a solid. It is sometimes possible to establish local rules, of a chemical nature, which lead to low-energy configurations and therefore govern structural and chemical order. This is not generally the case, however, and often the local order defined by local interactions cannot propagate freely, leading to geometric frustration. A common feature of all these systems is that, even with simple local rules, they present a large set of, often complex, structural realizations. Geometric frustration plays a role in many fields of condensed matter, ranging from clusters and amorphous solids to complex fluids.

The general method of approach to resolve these complications follows two steps. First, the constraint of perfect space-filling is relaxed by allowing for space curvature. An ideal, unfrustrated structure is defined in this curved space. Then, specific distortions are applied to this ideal template in order to embed it into three-dimensional Euclidean space. The final structure is a mixture of ordered regions, where the local order is similar to that of the template, and defects arising from the embedding. Among the possible defects, disclinations play an important role.

Tiling a plane with pentagons is impossible, but it can be realized on a sphere in the form of a pentagonal dodecahedron, as demonstrated in quasicrystals.

### Simple two-dimensional examples

Two-dimensional examples are helpful for gaining some understanding of the origin of the competition between local rules and geometry in the large. Consider first an arrangement of identical discs (a model for a hypothetical two-dimensional metal) on a plane; we suppose that the interaction between discs is isotropic and locally tends to arrange the discs as densely as possible. The best arrangement for three discs is trivially an equilateral triangle with the disc centers located at the triangle vertices. The study of the long-range structure can therefore be reduced to that of plane tilings with equilateral triangles. A well-known solution is provided by the triangular tiling, with total compatibility between the local and global rules: the system is said to be "unfrustrated". But now suppose instead that the interaction energy is at a minimum when atoms sit on the vertices of a regular pentagon.
Trying to propagate over the long range a packing of these pentagons sharing edges (atomic bonds) and vertices (atoms) is impossible. This is due to the impossibility of tiling a plane with regular pentagons, simply because the pentagon vertex angle does not divide $2\pi$. Three such pentagons can easily fit at a common vertex, but a gap remains between two edges. It is this kind of discrepancy which is called "geometric frustration". There is one way to overcome this difficulty. Let the surface to be tiled be free of any presupposed topology, and let us build the tiling with a strict application of the local interaction rule. In this simple example, we observe that the surface inherits the topology of a sphere and so receives a curvature. The final structure, here a pentagonal dodecahedron, allows for a perfect propagation of the pentagonal order. It is called an "ideal" (defect-free) model for the considered structure.

### Dense structures and tetrahedral packings

The stability of metals is a longstanding question of solid state physics, which can only be understood in the quantum mechanical framework by properly taking into account the interaction between the positively charged ions and the valence and conduction electrons. It is nevertheless possible to use a very simplified picture of metallic bonding and keep only an isotropic type of interaction, leading to structures which can be represented as densely packed spheres. And indeed the crystalline simple-metal structures are often either close-packed face-centered cubic (f.c.c.) or hexagonal close-packed (h.c.p.) lattices. To some extent, amorphous metals and quasicrystals can also be modeled by close packing of spheres. The local atomic order is well modeled by a close packing of tetrahedra, leading to an imperfect icosahedral order.

Tetrahedral packing: the dihedral angle of a tetrahedron is not commensurable with $2\pi$; consequently, a hole remains between two faces of a packing of five tetrahedra with a common edge.

A packing of twenty tetrahedra with a common vertex, in such a way that the twelve outer vertices form an irregular icosahedron.

A regular tetrahedron is the densest configuration for the packing of four equal spheres. The dense random packing of hard spheres problem can thus be mapped onto the tetrahedral packing problem. It is a practical exercise to try to pack table tennis balls so as to form only tetrahedral configurations. One starts with four balls arranged as a perfect tetrahedron and tries to add new spheres while forming new tetrahedra. The next solution, with five balls, is trivially two tetrahedra sharing a common face; note that already with this solution, the f.c.c. structure, which contains individual tetrahedral holes, does not show such a configuration (the tetrahedra share edges, not faces). With six balls, three regular tetrahedra are built, and the cluster is incompatible with all compact crystalline structures (f.c.c. and h.c.p.). Adding a seventh sphere gives a new cluster consisting of two "axial" balls touching each other and five others touching the latter two balls, the outer shape being an almost regular pentagonal bi-pyramid. However, we now face a real packing problem, analogous to the one encountered above with the pentagonal tiling in two dimensions. The dihedral angle of a tetrahedron is not commensurable with $2\pi$; consequently, a hole remains between two faces of neighboring tetrahedra.
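Both misfits, and the icosahedron edge-to-circumradius ratio quoted in the next subsection, are one-line checks (an illustrative sketch, not from the original text):

```python
import math

# Interior angle of a regular pentagon: (5 - 2) * 180 / 5 = 108 degrees,
# so three pentagons around a vertex leave a 36-degree gap in the plane.
pentagon = (5 - 2) * 180 / 5
print(f"pentagon gap: {360 - 3 * pentagon} deg")

# Dihedral angle of a regular tetrahedron: arccos(1/3) ~ 70.53 degrees,
# so five tetrahedra around a common edge leave a gap of about 7.36 degrees.
dihedral = math.degrees(math.acos(1 / 3))
print(f"tetrahedron gap: {360 - 5 * dihedral:.2f} deg")

# Regular icosahedron: edge length over circumradius is 4 / sqrt(10 + 2*sqrt(5)),
# about 1.0515 -- the 'l ~ 1.05 r' quoted below.
ratio = 4 / math.sqrt(10 + 2 * math.sqrt(5))
print(f"icosahedron edge/circumradius: {ratio:.4f}")
```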
As a consequence, a perfect tiling of the Euclidean space R3 is impossible with regular tetrahedra. The frustration has a topological character: it is impossible to fill Euclidean space with tetrahedra, even severely distorted ones, if we impose that a constant number of tetrahedra (here five) share a common edge. The next step is crucial: the search for an unfrustrated structure by allowing for curvature in the space, in order for the local configurations to propagate identically and without defects throughout the whole space.

### Regular packing of tetrahedra: the polytope $\{ 3,3,5 \}$

Twenty tetrahedra pack with a common vertex in such a way that the twelve outer vertices form an irregular icosahedron. Indeed, the icosahedron edge length $l$ is slightly longer than the circumsphere radius $r$ ($l \simeq 1.05 r$). There is a solution with regular icosahedra if the space is not Euclidean, but spherical. It is the polytope $\{ 3,3,5 \}$, in the Schläfli notation. There are one hundred and twenty vertices which all belong to the hypersphere $S^3$ with radius equal to the golden ratio ($\tau = (1 + \sqrt 5)/2$) if the edges are of unit length.

- The six hundred cells are regular tetrahedra grouped by five around a common edge and by twenty around a common vertex.
- This structure is called a polytope (see Coxeter), which is the general name in higher dimensions in the series polygon, polyhedron, ...
- Even though this structure is embedded in four dimensions, it has been considered as a three-dimensional (curved) manifold. This point is conceptually important for the following reason. The ideal models that have been introduced in the curved space are three-dimensional curved templates. They look locally like three-dimensional Euclidean models.

So the $\{ 3,3,5 \}$ polytope, which is a tiling by tetrahedra, provides a very dense atomic structure if atoms are located on its vertices. It is therefore naturally used as a template for amorphous metals, but one should not forget that this comes at the price of successive idealizations.

## Literature

• J. F. Sadoc and R. Mosseri, Geometrical Frustration, Cambridge Univ. Press, 1999 (re-edited 2007)
• J. F. Sadoc (ed.), Geometry in Condensed Matter Physics, Singapore, World Scientific, 1990
• H. S. M. Coxeter, Regular Polytopes, Dover Publications, 1973

## References

1. ^ The psychological side of this problem is treated in a different article, frustration
2. ^ Toulouse, G. (1977). Commun. Phys. 2, 115.
3. ^ Vannimenus, J.; Toulouse, G. (1977). "Theory of the frustration effect. II. Ising spins on a square lattice". J. Phys. C 10 (18): L537. Bibcode:1977JPhC...10L.537V. doi:10.1088/0022-3719/10/18/008.
4. ^ Toulouse, Gérard (1980). "The frustration model". In Pekalski, Andrzej; Przystawa, Jerzy. Modern Trends in the Theory of Condensed Matter. Lecture Notes in Physics 115. Springer Berlin / Heidelberg. pp. 195–203. doi:10.1007/BFb0120136. ISBN 978-3-540-09752-5.
5. ^ Wannier, G. H. (1950). "Antiferromagnetism. The Triangular Ising Net". Phys. Rev. 79 (2): 357–364. Bibcode:1950PhRv...79..357W. doi:10.1103/PhysRev.79.357.
6. ^ Yoshimori, A. (1959). "A New Type of Antiferromagnetic Structure in the Rutile Type Crystal". J. Phys. Soc. Japan 14 (6): 807–821. doi:10.1143/JPSJ.14.807.
7. ^ Kaplan, T. A. (1961). "Some Effects of Anisotropy on Spiral Spin-Configurations with Application to Rare-Earth Metals". Phys. Rev. 124 (2): 329–339. Bibcode:1961PhRv..124..329K. doi:10.1103/PhysRev.124.329.
8. ^ Elliott, R. J. (1961).
"Phenomenological Discussion of Magnetic Ordering in the Heavy Rare-Earth Metals". Phys. Rev. 124 (2): 346–353. Bibcode:1961PhRv..124..346E. doi:10.1103/PhysRev.124.346. 9. ^ Sherrington, D.; Kirkpatrick, S. (1975). "Solvable Model of a Spin-Glass". Phys. Rev. Lett. 35 (26): 1792–1796. Bibcode:1975PhRvL..35.1792S. doi:10.1103/PhysRevLett.35.1792. 10. ^ Fisher, M. E.; Selke, W. (1980). "Infinitely Many Commensurate Phases in a Simple Ising Model". Phys. Rev. Lett. 44 (23): 1502–1505. Bibcode:1980PhRvL..44.1502F. doi:10.1103/PhysRevLett.44.1502. 11. ^ Debye, P. (1912). "Zur Theorie der spezifischen Wärmen". Ann. Phys. 344 (14): 789–839. Bibcode:1912AnP...344..789D. doi:10.1002/andp.19123441404. 12. ^ Pauling, L. (1935). J. Am. Chem. Soc. 57 (12): 2680–2684. doi:10.1021/ja01315a102. 13. ^ Villain, J. (1977). "Spin glass with non-random interactions". J. Phys. C: Solid State Phys. 10 (10): 1717–1734. Bibcode:1977JPhC...10.1717V. doi:10.1088/0022-3719/10/10/014.
Can we prevent tornadoes from occurring?

DaveC426913 Gold Member
What are your thoughts on the matter?

Ivan Seeking Staff Emeritus Gold Member
While weather experts understand what conditions tend to produce tornadoes, I think there is a good bit not understood about exactly when, where, and why they occur. Until we have a better understanding, it would seem that prevention is a little ahead of the game. Beyond that, there are such tremendous amounts of energy involved that one wonders if intervention could ever be practical. For the foreseeable future, increasingly effective early warning systems are probably the best hope.

Borek Mentor
I think it is possible - you have to find (and kill) the correct butterfly in time. The trick is to find it early enough, and here comes this "better understanding" part that Ivan mentioned.

Astronuc Staff Emeritus
Can we prevent tornadoes from occurring? That would essentially require the ability to modify the weather or local climate. Basically, tornadoes form where cool air masses interact with warm air masses with a certain level of moisture. A thundercloud (cumulonimbus) forms, and the shear region between falling cold air and rising warm air causes a circular rotation, which can evolve into a tornado. The problem is one of determining precisely when and where the conditions for a tornado exist - then one of determining the precursors to those conditions. I've wondered if it would be feasible to fly 2 or more jets (capable of supersonic speed) into the critical region of a tornado and use the shock wave(s) to disrupt the vortex (i.e., the jets would 'break' the sound barrier in the vortex-generating region). But there is perhaps a risk to the jets from debris and strong fluid dynamics.

There is pretty good computer software to detect tornadoes, but they don't have enough radar sensors to gather the necessary data in most places, and even then it could probably only be predicted hours in advance. You need very fine-grained data to detect tornadoes because their formation is a chaotic process highly sensitive to fine-scale initial conditions. This also means it would be fairly easy to prevent, since it's sensitive to initial conditions...but the hard part would be being prepared to employ preventative tactics wherever it was forming given only a few hours' notice.

Ivan Seeking Staff Emeritus Gold Member
This also means it would be fairly easy to prevent, since it's sensitive to initial conditions...
How can we say this with any confidence? That assumes that the initial conditions would be fairly easy to manipulate. Even the idea of disrupting things with something like a shock wave may be little more than a flea on a dog, so to speak.

Assuming that tornadoes are a purely thermodynamic phenomenon, and considering the amount of thermodynamic energy getting expended in a supercell thunderstorm, I agree with the "flea on a dog" description. But the principles of thermodynamics don't even begin to adequately describe tornadoes. The air flowing into a tornado follows the path of greatest resistance, and that ain't exactly one of the standard principles of thermodynamics. So dismissing tornado prevention because the thermodynamic forces are too large, and on too large of a scale, is based on a false assumption. I've been working on a broad-stroke theory that suggests that tornadoes are produced by a combination of thermodynamic and electromagnetic forces.
If this is correct, then it opens up new possibilities. The thermodynamic piece is, and always will be, out of reach. But the electromagnetic component is accessible. We can induce lightning strikes to neutralize the electric charges within the storm. If the theory in question is correct, this would reduce the strength of the tornado, perhaps below the threshold necessary for its sustenance. If you want more detail, there is an online book that I am still (and perhaps forever) working on, to be found here: Please freely give your comments and criticisms of this work. Unless somebody can prove that this definitely could not work, then next Spring, I'll be out in Tornado Alley shooting rockets into supercell thunderstorms. :) If anybody is interested in the academic support for this line of reasoning, there is an extensive list of references at the URL cited above, but for starters, check this: Dehel, T. F., Dickinson, M., Lorge, F., and Startzel, F. Jr., 2007: Electric field and Lorentz force contribution to atmospheric vortex phenomena. Journal of Electrostatics, Vol. 65, Issues 10-11, 631-638.

Instead of researching ways to prevent tornadoes from happening, which I doubt will ever happen, especially as our Earth goes through its normal climate cycles... I think we need to focus on the more important matter: earlier warning! If we can slowly increase our warning time every 10 years or so, we will slowly start saving more and more lives. Good groups such as SKYWARN, V.O.R.T.E.X. and others are working on ways to do this; I just wish more people would get involved!

Couldn't we launch explosives into the tornado with enough power to 'kill' it?

I think we need to focus on the more important matter, earlier warning!
Warning and prevention are not necessarily unrelated issues. Both require that we understand the phenomenon. 60 years and a billion dollars have been spent attempting to understand tornadic storms. I think it's time we try something different for a while, especially since what I'm talking about would be ridiculously easy to test. If it worked, it would not only prove that tornado prevention was at least theoretically possible, but it would also teach us a lot about how tornadic storms work. That could lead to better prediction, and earlier, more accurate warnings.

Couldn't we launch explosives into the tornado with enough power to 'kill' it?
This assumes that tornadoes are mechanisms whose internal structures could be wrecked by an explosion. This is not the case. Tornadic storms are fundamentally thermodynamic, with the fluxes getting modulated by electromagnetic forces. There is no complex internal mechanism. Detonating an explosive would merely add to the thermodynamic force at play, which would probably strengthen the tornado.

mgb_phys Homework Helper
Couldn't we launch explosives into the tornado with enough power to 'kill' it?
Possibly for that particular tornado; what might be tricky is then stopping the one forming 1 m away or 2 seconds later. If you disrupt the start of one vortex, you don't do anything about the driving weather conditions.

russ_watters Mentor
Couldn't we launch explosives into the tornado with enough power to 'kill' it?
Absolutely, and the larger the explosives, the longer in advance and the wider an area you could cover with this "prevention" method.
But as Ivan said, you run into issues with practicality: nuking a 10-mile-diameter, 50,000-foot-tall cumulonimbus cloud could no doubt prevent a tornado perhaps hours before it forms, however... Though I doubt many would consider the idea to be conscionable, a practical person would probably want to at least consider the idea of nuking a hurricane. Hurricane Katrina cost an estimated $300 billion, and if for the cost of one nuke you could eliminate it offshore, it may be a worthwhile thing to do.

Possibly for that particular tornado, what might be tricky is then stopping the one forming 1 m away or 2 seconds later. If you disrupt the start of one vortex you don't do anything about the driving weather conditions.
In any open-air thermodynamic system, it is certainly true that all of the energy is going to get released sooner or later, and disrupting one thunderstorm could certainly cause another thunderstorm somewhere else. But the chance of that secondary thunderstorm becoming a supercell is 1 in 1,000. The chance of a supercell spawning a tornado is 1 in 3, so the chance of a secondary tornado is 1 in 3,000. The chance of a tornado being an F2 or above is 1 in 4, so the chance of a secondary tornado that could do some real damage is 1 in 12,000. Since most of the country is open space, the chance of a tornado actually hitting something is roughly 1 in 100. So the chance of secondary damage is 1 in 1.2 million. Then the only question is how successful tornado fighters will be in shooting down the secondary tornado, the same way they shot down the first one. There wouldn't be a secondary problem if they didn't succeed in the first place, so just to ask the question we have to assume that they are capable of succeeding. The worst-case scenario would be that the chance of failure would be 1 in 2, nominally speaking. This puts the chance of an unmitigated secondary tornado at 1 in 2.4 million. Allowing 2.4 million primary tornadoes to hit populated areas because, once in all of that, a secondary tornado will hit a populated area wouldn't make much sense.

Absolutely, and the larger the explosives, the longer in advance and the wider an area you could cover with this "prevention" method. But as Ivan said, you run into issues with practicality: nuking a 10-mile-diameter, 50,000-foot-tall cumulonimbus cloud could no doubt prevent a tornado perhaps hours before it forms, however... ))))
I just can't resist this -- you left too much up to the imagination there... ))) Nuking tornadoes would definitely work. It might not actually prevent the tornado. But after nuking the whole city, nobody is really going to notice whether or not a tornado came in and stirred up the rubble a bit. So we'll still be able to say, "Look on the bright side -- at least we didn't get hit by a tornado!"

...consider the idea of nuking a hurricane. Hurricane Katrina cost an estimated $300 billion and if for the cost of one nuke you could eliminate it offshore, it may be a worthwhile thing to do.
Another approach that is currently being researched is to beam microwave energy down from a satellite, to selectively add heat to the storm, to disrupt it, or to steer it away from land, or at least away from major cities on the coast. This is a highly dubious initiative, since there is truly no way to anticipate the side effects. Nevertheless, you're right that considering what's at stake, stuff like this is at least worth looking into.

Hello all, sorry to revive an old thread.
As sometimes happens while I am ruminating about something else, an observation strikes me in a new light and raises new questions. As also sometimes happens to me, this new thought occurred while I was taking a shower. I have a bit of a slow drain, so a little water backs up. But the drain is fast enough for the water to spiral down it. However, I noticed sometimes it stopped spiraling and backed up. I then realized this happened every time I rinsed some soap off. I realize water going down a drain doesn't follow the same rules as colliding weather fronts. And even if it did, there are a lot of problems taking a model from the micro to the macro level. However, I wonder: could 'seeding' a threatening supercell with an aerosolized surfactant prevent or lessen the severity of tornadoes? Could it be cost-effective to do so? The surfactant would have to be cheap, non-toxic, and not volatile (yet hopefully biodegradable). And the delivery system would also need to be cost-effective and reliably able to function in a powerful storm. Either an airplane or perhaps a missile. It occurs to me that the technology that the military uses to engineer those horrible fuel-air/cluster bombs might be put to a more humane use. But that is getting ahead of things. Could the properties of the water molecules in a supercell be changed enough by an aerosolized soap-like substance to prevent (or lessen) a tornado? Sincerely, Ben Schainker

DaveC426913 Gold Member
Could the properties of the water molecules in a supercell be changed enough by an aerosolized soap-like substance to prevent (or lessen) a tornado?
Tornadoes are caused by warm ground heating air, resulting in a rising air mass. It has nothing to do with water content. Tornadoes do quite nicely in bone-dry areas.

Tornadoes are caused by warm ground heating air resulting in a rising air mass. It has nothing to do with water content. Tornadoes do quite nicely in bone-dry areas.
Perhaps I am wrong, but this is not my understanding of tornadoes. Though not definitive, my brief look at Wikipedia yields this quote: "For a vortex to be classified as a tornado, it must be in contact with both the ground and the cloud base." If clouds are necessary for a tornado, then water vapor must be present. From what I understand, water's unique properties are needed in the boundaries between the colliding air masses. Ben Schainker

Astronuc Staff Emeritus
There is a research program to better understand the formation of tornadoes, and why only a few percent of rotating thunderstorms produce tornadoes. http://www.nssl.noaa.gov/vortex2/

I think that in the old days the advice was to open your windows when a tornado was coming (in order to equalize inside and outside air pressure). In the 1970s, after a devastating tornado outbreak, that was changed. Opening the windows makes it far more likely that the house will be destroyed.

DaveC426913 Gold Member
I think that in the old days the advice was to open your windows when a tornado was coming (in order to equalize inside and outside air pressure). In the 1970s after a devastating tornado outbreak that was changed. Opening the windows makes it far more likely that the house will be destroyed.
Can you provide some further reading? I've not heard that it was a myth that was overturned.

I'm very saddened to hear about designs to prevent tornadoes. Tornadoes are natural to our planet and should be protected from extinction. It's not the tornado's fault we are encroaching upon their habitat.
They were there first.
# All Questions

### Is Geometric Brownian Model suitable for long term price forecast?
I was thinking of using Geometric Brownian Motion to forecast future prices of timber (say one variable, the stumpage price of sawtimber). I tested the time series with Augmented Dickey-Fuller test ...

### How to calculate probability of option expiring in the money?
Given the following values ...

### How to prove the "Law of one price" theorem?
There are two subparts to Fundamental Asset Pricing theorem. The Law Of One Price (LOOP thereafter) holds if and only if there exists a state price vector. In a market in which the LOOP holds, the ...

### Why gamma and theta have opposite signs?
I saw some textbooks use B-S equation to explain why gamma and theta have opposite signs in most of the cases. For example, John Hull's classic book. The explanation is, first write B-S equation in ...
## Stream: maths

### Topic: conditionally complete linear order for nnreal

#### Koundinya Vajjha (Jul 18 2019 at 16:49):

I needed to prove that Sup K \in K for a K : set nnreal. I searched through the library and found the lemma cSup_mem_of_is_closed. But I couldn't use it since nnreal is not an instance of conditionally_complete_linear_order? (It is an instance of conditionally_complete_linear_order_with_bot though). Can this situation be salvaged?

#### Koundinya Vajjha (Jul 18 2019 at 17:14):

I guess if I remove the with_bot condition it becomes a conditionally_complete_linear_order.

#### Chris Hughes (Jul 18 2019 at 17:43):

Write an instance, proving one from the other. The fields won't quite be identical; the linear order with bot will omit conditions about sets that aren't bounded below, but the definition of Inf stays the same, which is what matters. Strictly, I think complete_lattice should probably extend conditionally_complete_lattice.
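A rough sketch of the instance Chris describes might take the following shape (Lean 3 syntax; the class name is taken from the thread, the structure-update idiom is an assumption on my part, and, per Chris's remark that the field sets differ, some proof fields may still need to be supplied by hand; this is not checked against mathlib):

```lean
-- Sketch only: reuse the data and proofs of the `_with_bot` instance,
-- filling in whatever extra fields the plain class demands.
-- (Names and shape assumed from the thread, not verified against mathlib.)
noncomputable instance : conditionally_complete_linear_order nnreal :=
{ ..(infer_instance : conditionally_complete_linear_order_with_bot nnreal) }
```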
## Southern Great Plains 1997 (SGP97) Hydrology Experiment Plan

Section 11 - Science Investigations

Goto Section in Document:
1. Overview
2. Soil Moisture and Temperature
3. Vegetation and Land Cover
4. Soil Physical and Hydraulic Properties
5. Planetary Boundary Layer Studies
6. Satellite Data Acquisition
7. DOE ARM CART Program
8. Oklahoma Mesonet Program
9. Operations
10. Data Management and Availability
11. Science Investigations
12. Sampling Protocols
13. Local Information
14. References
15. List of Participants

11. SCIENTIFIC INVESTIGATIONS

Investigators actively participating were asked to submit an abstract describing their planned activities. These are included here as provided.

1. Meyers, Baldocchi
2. Kustas, Schmugge, Jackson, Prueger, Hatfield, Sauer, Starks, Norman, Diak, Anderson, Doraiswamy
3. Starks
4. Miller, Mohanty, Tsegaye, Rawls
5. Daughtry, Doraiswamy, Hollinger
6. Entekhabi, McLaughlin
7. Entekhabi, Rodriguez-Iturbe
8. Barros, Bindlish, Yanming
9. Peters-Lidard
10. Kumar
11. Chauhan
12. Diak, Norman, Kustas
13. Finch, Burke, Simmonds
14. Browell, Ismail, Lenschow, Davis
15. Salvucci
16. Njoku
17. Houser, Shuttleworth
18. Laymon, Crosson, Fahsi, Tsegaye, Manu
19. van Oevelen, Menenti
20. Mahrt, Sun
21. Valdes, North
22. Mohanty, Shouse, van Genuchten
23. Famiglietti
24. Elliott, Senay
25. Islam
26. Doraiswamy, Daughtry, Jackson, Kustas, Hatfield
27. Wood, Jackson
28. Wetzel
29. Duffy
30. Humes
31. England, Judge, Hornbuckle, Kim, Boprie
32. MacPherson, Mailhot, Strapp, Belair

Investigator(s): Tilden P. Meyers and Dennis D. Baldocchi
Institution(s): NOAA/Air Resources Laboratory, Atmospheric Turbulence and Diffusion Division
Title: Continuous Long-term Energy Flux Measurements within the GCIP Domain

Numerical regional and global scale models will continue to be used for future climate and hydrological assessments. However, predicted climate scenarios are sensitive to surface layer processes such as evapotranspiration and soil moisture. Preliminary results have shown significant variations in predicted evapotranspiration from the land-surface submodels that are currently used. Observational data sets that allow for detailed testing over an annual cycle are few. The credibility of climate simulations depends on the predictive capabilities of the submodels used in the parameterizations of the physical and biological processes. Long-term continuous measurements of water and heat fluxes are needed to assess and reduce uncertainties in the land-surface models. The results from the proposed work plan will provide a data base that can be used directly to meet the first two objectives of the GCIP scientific plan, which are (1) to determine the temporal variability of the hydrological and energy budgets over a continental scale, and (2) to develop and validate coupled atmosphere-surface hydrological models. Continuous measurements of the surface energy balance components (net radiation, sensible heat flux, latent heat flux, ground heat flux, and heat storage) will continue at the Little Washita Watershed. Latent energy fluxes from the soil and canopy systems will be determined to provide a complete data set for (1) the evaluation of the surface layer submodels currently used in synoptic scale and general circulation models, and (2) the determination of seasonal probability distributions and statistics for evaluating predictive capabilities of models. Measurements of additional hydrological components include precipitation and soil moisture.
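As an aside (an illustrative sketch with made-up numbers, not SGP97 data), the energy-balance bookkeeping that such flux stations support often closes the budget Rn = H + LE + G by computing the latent heat flux as a residual:

```python
# Close the surface energy balance Rn = H + LE + G for the latent heat flux.
# Values are hypothetical midday fluxes in W/m^2, not measured SGP97 data.
def latent_heat_residual(rn, h, g):
    """Latent heat flux LE = Rn - H - G (all fluxes in W/m^2)."""
    return rn - h - g

rn, h, g = 450.0, 180.0, 60.0
print(f"LE = {latent_heat_residual(rn, h, g):.0f} W/m^2")  # -> LE = 210 W/m^2
```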
Other variables that will continue to be measured include solar and net radiation, air temperature and humidity, wind speed and direction, and soil temperatures. Biophysical data will include determinations of leaf area indices, stomatal conductance, and surface albedo. Data from these sites will be used to: 1) evaluate the temporal variability of surface fluxes as a function of season; 2) determine daily and weekly probability distributions of energy fluxes; 3) evaluate and test current surface-biosphere submodels that are currently used for both short and long term numerical weather prediction; 4) determine the relative latent energy contributions from the soil and vegetative components as functions of season; and 5) test a hierarchy of models for estimating the surface energy fluxes from standard meteorological data.

Tilden P. Meyers
423-576-1245, FAX: 423-576-1245
FED EX: NOAA/ATDD, 456 S. Illinois Avenue, Oak Ridge, TN

Use of Optical and Microwave Remote Sensing for Mapping Surface Fluxes During the SGP Experiment

Investigators/Institutions:
Bill Kustas, Tom Schmugge & Tom Jackson / USDA-ARS Hydrology Lab, Beltsville, MD
John Prueger & Jerry Hatfield / USDA-ARS Soil Tilth Lab, Ames, IA
Tom Sauer / USDA-ARS SPA, Fayetteville, AR
Pat Starks / USDA-ARS Grazing Lands Res., El Reno, OK
John Norman, George Diak & Martha Anderson / Univ. of Wisconsin, Madison, WI
Paul Doraiswamy / USDA-ARS RS & Modeling Lab, Beltsville, MD

Radiometric temperature and passive microwave observations provide unique spatially distributed surface boundary conditions for surface energy balance modeling. Several relatively simple remote sensing models have recently been developed and tested with ground-truth measurements for computing the surface energy balance (Norman et al., 1995; Kustas and Humes, 1996; Anderson et al., 1997; Zhan et al., 1997). There have also been recent applications of remote sensing data from weather satellites in a simple hydrologic model for monitoring vegetation growth and predicting crop yields (Doraiswamy and Cook, 1995). These modeling algorithms will be applied to remote sensing data collected over the whole SGP study area, but with primary focus on the El Reno site, where ground-truth hydrometeorological data will be collected by J. Prueger, B. Kustas, T. Sauer and P. Starks. These data will include standard weather data (wind speed, wind direction, air temperature, relative humidity, solar radiation and precipitation), the surface energy balance, and profiles of soil moisture, temperature and soil heat flux. There will be several aircraft flights with the TIMS instrument coordinated by Tom Schmugge for collecting high-resolution thermal-IR data in the early and later morning in order to evaluate the Two-Source Time-Integrated Model (TSTIM; Anderson et al., 1997) and the Dual-Source Energy flux Model (DSEM; Norman et al., 1995) with local flux observations. In particular, the high spatial resolution TIMS data can be used to evaluate how well the TSTIM model performs on small pixels and whether simple methods exist for interpolating 5 km flux estimates from GOES down to the small scale of tens of meters. The daily surface moisture maps from the ESTAR passive microwave observations on the P-3 aircraft coordinated by Tom Jackson will be used to test a version of DSEM that uses surface moisture for surface energy flux predictions (Zhan et al., 1997). Landsat TM and NOAA AVHRR data for the study sites and surrounding area will be acquired, processed and mapped by Paul Doraiswamy.
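For orientation (a sketch only, and deliberately simpler than the two-source TSTIM/DSEM schemes named above), the core idea of flux mapping from radiometric temperature can be illustrated with a one-source bulk-resistance estimate of sensible heat, H = rho * cp * (T_rad - T_air) / r_a; the numbers below are hypothetical:

```python
# One-source bulk-resistance sensible heat flux from radiometric temperature.
# Illustrative values, not SGP97 data.
RHO = 1.2    # air density, kg/m^3
CP = 1005.0  # specific heat of air, J/(kg*K)

def sensible_heat(t_rad, t_air, r_a):
    """Sensible heat flux in W/m^2; r_a is aerodynamic resistance in s/m."""
    return RHO * CP * (t_rad - t_air) / r_a

print(f"H = {sensible_heat(t_rad=308.0, t_air=303.0, r_a=50.0):.0f} W/m^2")
# -> H = 121 W/m^2 for a 5 K surface-air temperature difference
```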
In addition, the ground-based measurements of evapotranspiration and soil moisture profile changes will be used for testing the hydrologic model predictions (Kalluri and Doraiswamy, 1995; Doraiswamy et al., 1997). Once model validation/calibration is performed at the El Reno site, the models will be used with satellite data (i.e., Landsat, NOAA-AVHRR and GOES) for mapping fluxes over the entire SGP domain. These estimates will be compared to regional fluxes derived from aircraft eddy correlation and LASE measurements.

References:
Anderson, M.A., J.M. Norman, G.R. Diak, W.P. Kustas and J.R. Mecikalski. 1997. A two-source time-integrated model for estimating surface fluxes using thermal infrared remote sensing. Remote Sensing of Environment [In Press]
Doraiswamy, P.C. and P.W. Cook. 1995. Spring wheat yield assessment using NOAA AVHRR data. Canadian J. Remote Sens. 21:43-51.
Doraiswamy, P.C., P. Zara, S. Moulin and P.W. Cook. 1997. Spring wheat yield assessment using Landsat TM imagery and a crop simulation model. (Submitted to Remote Sensing of the Environ.)
Kalluri, S. and P.C. Doraiswamy. 1995. Modelling transpiration and water stress in vegetation from satellite and ground measurements. Presentation at the 1995 International Geoscience and Remote Sensing Symposium, Firenze, Italy, p. 1483-1487.
Kustas, W.P., and K.S. Humes. 1996. Sensible heat flux from remotely-sensed data at different resolutions. Chapter 8. In: Scaling up in Hydrology Using Remote Sensing (J.B. Stewart, E.T. Engman, R.A. Feddes and Y. Kerr, editors), John Wiley and Sons, London, pp. 127-145.
Norman, J.M., W.P. Kustas and K.S. Humes. 1995. A two-source approach for estimating soil and vegetation energy fluxes from observations of directional radiometric surface temperature. Agricultural and Forest Meteorology 77:263-293.
Zhan, X., W.P. Kustas, T.J. Schmugge and T.J. Jackson. 1997. Mapping surface energy fluxes in a semiarid watershed with remotely sensed surface information. Preprint of the 13th Conference on Hydrology, American Meteorological Society, pp. 194-197.

Bill Kustas
bkustas@hydrolab.arsusda.gov
USDA-ARS Hydrology Lab, Beltsville, MD 20705 USA
Voice: (301) 504-8498, Fax: (301) 504-8931

Title: Investigation of spatial distribution of soil water and heat flux.
Abstract: A series of Soil Heat and Water Measurement Stations (SHAWMS) have been installed on the Little Washita River Watershed (LWRS) which make profile measurements of soil temperature, soil heat flux, the three parameters of soil heat, and soil moisture. Data from the SHAWMS will be used to investigate the temporal and spatial variability of soil water and heat flux under rangeland conditions and to provide another source of ground-truth data for the ESTAR instrument. A limited number of SHAWMS will be installed on the Ft. Reno site under both natural rangeland and winter wheat fields to investigate differences in these fluxes for representative ground cover conditions in central Oklahoma.

Patrick J. Starks
pstarks@grl1.ars.usda.gov
(405) 262-5291, fax (405) 262-0133
USDA-ARS-GRL, 7207 W. Cheyenne St., El Reno, Oklahoma 73036

Investigator: Doug Miller
Collaborators: Binayak Mohanty, Teferi Tsegaye, Walter Rawls
Title: Combining Soil Survey Information and Point Observations of Soil Physical and Hydraulic Properties to Improve the Extension of Pedo-Transfer Functions to Regional Areas.
Abstract: Soil moisture is a much sought-after parameter for a wide range of modeling and management applications.
Direct measurement of soil water status, however, is an expensive, time-consuming exercise which is largely prohibitive beyond a few select areas. Previous work has shown the utility of "pedo-transfer" functions to predict the water retention curve or unsaturated hydraulic conductivity of the soil. These functions are based on commonly measured soil physical properties such as particle-size distribution, organic matter content, and bulk density. Pedo-transfer functions, in combination with routine spatial information from soil survey and spatial information on topographic and land surface characteristics, could potentially be used to improve regional estimates of soil moisture. We will focus on combining spatial information from soil survey, topographic and land surface characteristics with point observations of soil physical properties and soil moisture content to improve soil moisture predictions. The Little Washita River Basin in the southwestern portion of the SGP97 operations area will be the location of detailed study and correlation of field observations of soil physical and hydraulic properties. Ground sampling for this work will be performed in conjunction with soil moisture sampling in support of the main remote sensing objectives of SGP97. Manpower for sampling and access to sampling sites may, necessarily, restrict our opportunities to obtain a full range of representative soil map units. However, it is our hope that we can obtain enough samples to be able to characterize several key combinations of soil, topographic, and land surface conditions which in turn may be used to test our ability to "scale up" to larger areas.
Sponsor: NASA through the Penn State EOS IDS Investigation of the Global Water Cycle

Spatial variability of biomass and fraction of absorbed PAR within the SGP97 site.
Craig Daughtry and Paul Doraiswamy, USDA/ARS Remote Sensing and Modeling Lab, Beltsville, MD
Steven Hollinger, Illinois State Water Survey, 2204 Griffith Dr., Champaign, IL 61820.
Abstract: Relationships between phytomass production and absorbed photosynthetically active radiation (PAR) have been reported for numerous plant species (Daughtry et al., 1992). The fraction of absorbed PAR (fA) may be estimated from multispectral remotely sensed data (Prince, 1991). Together these two concepts provide a basis for monitoring vegetation production using remotely sensed data. Our primary objective is to characterize the spatial variability of vegetation within the SGP97 site. We will sample fresh and dry phytomass, leaf area index (LAI), and fA in approximately 60 fields and will extract the multispectral data for each field from Landsat TM scenes. Most of the fields used for vegetation sampling will also be used for the gravimetric and profile soil moisture sampling. Global positioning system (GPS) data will be used to register the images and locate the sample sites within the images. Various models will be used to relate the multispectral and vegetation data (Moran et al., 1995) and to estimate phytomass in other fields of the SGP97 site. In addition, for selected fields of winter wheat, we will measure crop residue cover using line-transect methods (Morrison et al., 1993) and will estimate residue cover for other fields using multispectral data from Landsat and other sources (Daughtry et al., 1996). Anticipated products include land use/cover maps, maps of vegetation density, and crop residue cover maps for the SGP97 site.
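To make the fA estimation idea concrete (a sketch only: the linear NDVI-to-fA mapping and its coefficients below are generic illustrations, not the calibrations of Prince (1991) or of this investigation):

```python
# Sketch of estimating fraction of absorbed PAR from red/NIR reflectances via
# NDVI and a generic linear mapping. The slope/intercept are placeholders for
# a sensor- and site-specific calibration, not values from this experiment.
def ndvi(red, nir):
    return (nir - red) / (nir + red)

def fapar_from_ndvi(v, slope=1.2, intercept=-0.2):
    return min(max(slope * v + intercept, 0.0), 1.0)  # clamp to [0, 1]

v = ndvi(red=0.05, nir=0.45)  # hypothetical dense-canopy reflectances
print(f"NDVI = {v:.2f}, fA ~ {fapar_from_ndvi(v):.2f}")
# -> NDVI = 0.80, fA ~ 0.76
```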
These data should be useful for developing and extending various surface energy balance models and vegetation assessment models from local to regional scales.

References:
Daughtry, C.S.T., K.P. Gallo, S.N. Goward, S.D. Prince, and W.P. Kustas. 1992. Spectral estimates of absorbed radiation and phytomass production in corn and soybean canopies. Remote Sensing Environment 39:141-152.
Daughtry, C.S.T., J.E. McMurtrey III, E.W. Chappelle, W.J. Hunter, and J.L. Steiner. 1996. Measuring crop residue cover using remote sensing techniques. Theor. Appl. Climatol. 54:17-26.
Morrison Jr., J.E., C. Huang, D.T. Lightle, C.S.T. Daughtry. 1993. Residue cover measurement techniques. J. Soil Water Conserv. 48:479-483.
Moran, M.S., S.J. Maas, and P.J. Pinter Jr. 1995. Combining remote sensing and modeling for estimating surface evaporation and biomass production. Remote Sensing Reviews 12:335-353.
Prince, S.N. 1991. A model of regional primary production for use with coarse-resolution satellite data. Int. J. Remote Sensing 12:1313-1330.

Craig Daughtry
USDA-ARS Remote Sensing and Modeling Lab
10300 Baltimore Ave, Beltsville, MD 20705
voice 301-504-5015, fax 301-504-5031, email: cdaughtry@asrr.arsusda.gov

Investigators/Institutions:
Dara Entekhabi, 48-331, MIT, Cambridge, MA 02139. Tel: (617) 253-9698, Fax: (617) 258-8850, Email: darae@mit.edu
Dennis McLaughlin, 48-209, MIT, Cambridge, MA 02139. Tel: (617) 253-7176, Fax: (617) 253-7462, Email: dennism@mit.edu
Title: Using Data Assimilation to Infer Soil Moisture from Remotely Sensed Observations: A Feasibility Study
Abstract: A state-space formulation of the data assimilation problem is developed, including the following components: near-surface soil moisture and subsurface profile dynamics, surface energy balance, multispectral radiobrightness, soil type and pedotransfer functions. The data assimilation model will be tested using data from numerical experiments whose statistics are derived from the SGP97 and Washita '92 experiments.
References: McLaughlin, D. B., 1996: Recent advances in hydrologic data assimilation, Reviews of Geophysics, 977-984.

Investigators/Institutions:
Dara Entekhabi, 48-331, MIT, Cambridge, MA 02139. Tel: (617) 253-9698, Fax: (617) 258-8850, Email: darae@mit.edu
Ignacio Rodriguez-Iturbe, Dept. Civil Engineering, Texas A&M University, College Station, TX 77843. Tel: (409) 845-7435, Fax: (409) 845-6156, Email: iri9280@vms2.tamu.edu
Title: On Space-Time Organization of Soil Moisture Fields: Dynamics and Interaction with the Atmosphere
Abstract: The decrease in second-order statistics of soil moisture random fields under aggregation may be estimated using scaling functions whose parameters vary in time (during dry-downs) in a predictable manner and whose parameters have known dependencies on soil and climate properties. We plan to use the multiple-scale observations of soil moisture fields from a variety of platforms and sensors to characterize the required scaling functions. Next, using simple models of dry-down and percolation, we intend to relate the parameters of these functions to soil and climate properties.
References: Rodriguez-Iturbe, I., G. K. Vogel, R. Rigon, D. Entekhabi, F. Castelli and A. Rinaldo, 1995: On the spatial organization of soil moisture fields, Geophysical Research Letters, 22(20), 2757-2760.

Scaling Issues in the Retrieval and Modeling of Soil Moisture -- A Geomorphology Perspective
Ana P.
Barros, Rajat Bindlish, and Li Yanming
The Pennsylvania State University

ABSTRACT: Remote sensing and the prospect of long-term monitoring of soil moisture over large areas offer unique opportunities in hydrologic science, both for climate studies and for operational applications. Pertinent research issues include: 1) the formulation and accuracy of the algorithms used to transform the remotely-sensed signal (i.e., surface radiometric temperature) into estimates of soil moisture; 2) scaling and the relationship between the scale of measurement and data resolution; 3) data assimilation into operational mesoscale models. In this context, the objectives of our research are to: 1) investigate and quantify the functional dependencies between observed soil moisture dynamics at different scales and the forming and development factors that determine the properties of soils in their natural setting: climate, vegetation, topography and geology; 2) investigate and quantify the functional dependencies between remotely sensed brightness temperatures at different scales and soil forming and development factors; 3) elucidate the scaling mechanisms implicit in remotely sensed brightness temperatures at different resolutions, and determine the effective scale of measurement at each resolution; 4) use the results of 1), 2) and 3) to constrain a transformation model to retrieve soil moisture, with sensitivity analyses conducted to evaluate the model's accuracy and transportability; 5) evaluate the skill of a mesoscale model, specifically MM5, when remote-sensing estimates of soil moisture are used as surface boundary conditions in operational mode. The focus is on short- to medium-range forecasts of surface temperature, humidity, and precipitation. Multidimensional spectral analysis, system identification techniques such as cluster analysis and self-organizing neural networks, geostatistics and deconvolution methods will be used to identify soil-topography, soil-vegetation, soil-climate and soil-geology relationships. Data from SGP97 will be analyzed along with data from previous field experiments (e.g., Washita '92 and '94).

Vertical Profiles of the Atmospheric Boundary Layer and Upper Air for the Southern Great Plains 1997 Field Experiment
Principal Investigator: C. D. Peters-Lidard
Environmental Hydraulics and Water Resources, School of Civil and Environmental Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0355
tel: 404-894-5190; fax: 404-894-2677; e-mail: cpeters@ce.gatech.edu

Abstract: In support of the eventual goal to integrate remotely sensed observations with coupled land-atmosphere models, Georgia Institute of Technology and the National Severe Storms Laboratory propose to provide vertical profiles of atmospheric pressure, temperature, humidity, wind speed and wind direction during the Southern Great Plains 1997 field experiment (June 17-July 11). Our sounding design is based on three science needs directly related to the existing objectives of the experiment: (1) provide boundary and initial conditions for coupled atmospheric-hydrologic modeling; (2) provide data necessary for atmospheric correction of thermal remote sensing; and (3) support water vapor and heat budget computations over the SGP97 domain. In addition to these science needs, surface and boundary layer profiles will provide data to support the estimation of roughness lengths and stability correction functions and to study boundary-layer-top entrainment processes and vertical structure.
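As an illustration of one such boundary-layer diagnostic (a sketch with synthetic numbers, not part of the sounding plan itself), a common way to estimate the mixed-layer top from a potential temperature profile is to find the first level where theta exceeds its near-surface value by a small threshold:

```python
# Estimate mixed-layer depth from a potential-temperature sounding: take the
# first level where theta exceeds the near-surface value by a threshold.
# The profile below is synthetic, for illustration only.
def mixed_layer_top(z, theta, threshold=0.5):
    """z in m, theta in K; first height where theta - theta[0] > threshold."""
    for zi, ti in zip(z, theta):
        if ti - theta[0] > threshold:
            return zi
    return None  # top not reached within the profile

z     = [10, 200, 400, 600, 800, 1000, 1200, 1400]
theta = [301.0, 301.1, 301.1, 301.2, 301.3, 301.4, 303.0, 305.5]
print(f"mixed-layer top ~ {mixed_layer_top(z, theta)} m")  # -> ~1200 m
```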
We plan to deploy two sounding systems: one boundary layer and upper air sounding system and one tethersonde system, collocated within the Little Washita River Watershed in the southern portion of the SGP97 domain. The launch times will coincide with the launch times of the ARM/CART IOP sounding program, and will therefore provide complete coverage around the boundary of the SGP97 domain to support vapor budget computations.
Sponsor: NASA (Program Manager: Ming-Ying Wei)
References:
Betts, A. K. and A. C. M. Beljaars, Estimation of effective roughness length for heat and momentum from FIFE data, Atmos. Res., 30, 251-261, 1993.
Peters-Lidard, C. D. and E. F. Wood, Spatial variability and scale in land-atmosphere interactions: 2. Model validation and results, submitted to Water Resour. Res., 1996b.
Ziegler, C. L. and L. C. Showell, Chapter XII: Atmospheric Soundings, in Hydrology Data Report Washita 1994, eds. P. J. Starks and K. S. Humes, NAWQL 96-1, USDA ARS, Durant, OK, June 1996.

Investigator(s)/Institution(s): Dr. Praveen Kumar, Hydrosystems Lab. #2527B, 205 North Matthews Avenue, Department of Civil Engineering, University of Illinois, Urbana, Illinois 61801. (217) 333-4688, Fax (217) 333-0687, email: kumar1@uiuc.edu
Students: Patricia Saco (saco@uiuc.edu), Ji Chen (jichen@uiuc.edu)
Title: Estimation, Modeling and Simulation of Soil Moisture Variability and Surface Energy Balance Using Multisensor Measurements at Large Scales
Abstract: In order to understand the feedback interaction between land and atmosphere, we need a method to characterize the near-surface soil-moisture variability and surface energy balance at a vast range of scales. Due to the formidable cost of making such measurements, the strategy adopted is to make fine-scale measurements of limited coverage embedded within coarse-scale measurements of larger coverage, using instruments on different platforms. The PI has recently developed a multiple scale conditional simulation (MSCS) technique [Kumar, 1996] to obtain soil moisture fields by combining the multisensor measurements (obtained at multiple scales). The technique uses multiple scale Kalman filtering algorithms for the estimation and a conditional simulation technique for obtaining realistic soil-moisture fields. It relies on a fractal model of soil moisture [Rodriguez-Iturbe et al., 1995]. The method can be easily extended to multiple-variable fields such as the energy balance components at the land surface. The objectives of our participation in the Southern Great Plains Experiment are to: (a) extensively validate the multiple scale conditional simulation technique for a wide range of scales and soil-moisture conditions; (b) apply it to multiple-variable surface energy fields and assess its performance; (c) assess the impact of the conditionally simulated fields on the atmosphere.
References:
1. Kumar, P., Application of Multiple Scale Estimation and Conditional Simulation for Characterizing Soil Moisture Variability, submitted to Water Resources Res., 1996.
2. Rodriguez-Iturbe, I., G. K. Vogel, R. Rigon, D. Entekhabi, F. Castelli, A. Rinaldo, On the Spatial Organization of Soil Moisture Fields, Geophysical Res. Letters, 22(20), 2757-2760, 1995.

VEGETATION EFFECTS ON SOIL MOISTURE ESTIMATION
Narinder Chauhan, Code 923, NASA/Goddard Space Flight Center, Greenbelt, MD 20771
301 286 4840, FAX: 301 286 1757, E-mail: nsc@fire.gsfc.nasa.gov

The estimation of soil moisture depends strongly on the vegetation and its quantification.
I will be working with Paul Doraiswamy of USDA and David LeVine of GSFC/NASA for the characterization of vegetation. The plan is to participate in the collection of gross vegetation parameters such as plant density, LAI, vegetation water content, etc. for most of the vegetation in the area. In addition, specific vegetation types will be targeted for collection of detailed canopy geometry data. This can involve measuring canopy architecture and leaf and stem angle distributions. In the past, the measurement of soil moisture under certain crops, like grass and alfalfa, has been a problem. The plan is to characterize such crops with a higher degree of accuracy and to use theory (Discrete Scatter Models) to compare predictions with passive microwave measurements. The goal is to learn how to characterize these vegetation canopies to accurately estimate soil moisture. Investigators: George R. Diak and John M. Norman, University of Wisconsin-Madison William P. Kustas, USDA-ARS Title of Investigation: Estimation and Validation of Evapotranspiration at 10 km Scales During The SGP-97 Experiment Abstract: We will investigate the performance of a two-source time-integrated model (TSTIM) for evaluating the surface energy balance over the domain of the SGP-97 experiment. This model is comprised of a surface component (describing the relationship between radiometric temperatures, sensible heat flux and the temperatures of the air, canopy and soil surface), coupled with a time-integrated component (connecting the time-integrated surface sensible heat flux with planetary boundary layer development). The required data inputs are radiometric surface temperatures at two times (from GOES), analyzed surface and upper air synoptic data, and vegetation cover estimates from satellite sources. Surface energy balance components will be estimated at approximately a 10-km resolution over the SGP-97 domain. These estimates will be compared with available surface and aircraft-based flux estimates. The TSTIM has the ability to utilize information on soil-surface evapotranspiration from any source. Using the SGP-97 data, we will also investigate how near-surface soil moisture estimates from passive microwave sensors can be incorporated into this model. References: Anderson, M. C., J. M. Norman, G. R. Diak and W. P. Kustas, 1996: A two-source time integrated model for estimating surface fluxes for thermal infrared satellite observations. Accepted for publication, Rem. Sens. Environ. Diak, G. R. and M. S. Whipple, 1995: A note on estimating surface sensible heat fluxes using surface temperatures measured from a geostationary satellite during FIFE-1989. J. Geophys. Res. 100, 25,453-25,461. Norman, J. M., W. P. Kustas and K. S. Humes, 1995: A two-source approach for estimating soil and vegetation energy fluxes from observations of directional radiometric surface temperatures, Agric. For. Meteor., 77, 263-293. Contact: Dr. George R. Diak 1225 W. Dayton St., #205 Phone: 608-263-5862 Fax: 608-262-5974 email: georged.@ssec.wisc.edu Title: Estimating Soil Hydraulic Properties from Airborne Passive Microwave Data - The Effects of Subpixel Heterogeneity Investigators/Institutions: J. Finch and E. Burke, Institute of Hydrology Abstract: A physically based model that couples a soil water/energy model to a microwave emission model (MICRO-SWEAT) has recently been developed.
MICRO-SWEAT predicts the time series of microwave emission from input parameters of the soil properties, soil water status, vegetation parameters and a time series of meteorological data. One application of MICRO-SWEAT has been to successfully estimate soil hydraulic properties from ground-based microwave data, i.e., essentially point measurements, by fitting the model to detailed time series of data. The next step in this line of research is to estimate soil hydraulic properties at the spatial scale of a pixel of remotely sensed data. The proposed research will investigate the effect of sub-pixel heterogeneity in soil hydraulic properties, soil roughness, vegetation water content and soil moisture on microwave data. The objectives of the project will be achieved by using the microwave values predicted from MICRO-SWEAT. The ground and ESTAR data acquired during SGP'97 will provide a data set that contains both the input parameters for MICRO-SWEAT and microwave data that can be used to test the values predicted by the model. The proposed research will make additional ground measurements of the soil and vegetation parameters required by MICRO-SWEAT at a series of sites in order to quantify the spatial heterogeneity within a pixel of the ESTAR data. Between 50 and 100 sites will be selected to represent the variations in soil and vegetation, and measurements of soil moisture will be taken daily, except during periods of rapid change, when a reduced number of sites will be monitored more frequently. Other parameters will be estimated at different intervals, reflecting their rate of change. The key input and validation parameters which will be measured are: rainfall, plant height, leaf area index and leaf angle, vegetation water content, surface soil moisture, TDR soil water down to 120 cm, surface roughness, and soil bulk density. In addition, gravimetric soil moisture samples for calibration will be collected and soil samples will be taken for laboratory analysis. The field data will be analyzed to assess the temporal and spatial variability of the input parameters required by MICRO-SWEAT. The first step of the modelling will be to test the values predicted by MICRO-SWEAT against the values recorded by the ground-based microwave radiometer in order to verify that the model is predicting the values to an acceptable accuracy. The next stage will be to use MICRO-SWEAT to predict the microwave emission from the range of soils and land cover types that occur within a pixel of the airborne remotely sensed data. These values will then be aggregated to produce a time series of 'averaged' values that will be tested against the values of the airborne remotely sensed data. A sensitivity analysis will be carried out to assess the contribution from the different land surface parameter combinations to the time series of 'averaged' remotely sensed data. Finally, the simulated time series of remotely sensed data will be inverted to estimate the soil hydraulic properties of the pixel, and a comparison made between these values and the variability of the values actually occurring within the pixel. Sponsor: UK Natural Environment Research Council Staff: Dr. Jon Finch Institute of Hydrology Wallingford Oxon OX10 8BB UK tel. +44 (0)1491 838800 fax.
+44 (0)1491 692424 email: E.Burke@ioh.ac.uk Investigator(s)/Institution(s): Edward V. Browell, PI, NASA Langley Research Center, Syed Ismail, co-I, NASA Langley Research Center, Donald H. Lenschow, co-I, National Center for Atmospheric Research, Kenneth J. Davis, co-I, University of Minnesota Title: INVESTIGATION OF MESOSCALE VARIABILITY IN CONVECTIVE BOUNDARY LAYER DEVELOPMENT USING LASE Abstract: One of the four objectives of the Southern Great Plains 1997 (SGP97) Experiment is the examination of "the effect of soil moisture on the evolution of the atmospheric boundary layer and clouds over the southern great plains". This study seeks to advance our understanding of this coupled land-atmosphere system, a fundamental component of the hydrologic, weather and climatic systems. We will study the spatial variability in the development of the convective boundary layer (CBL) over a fairly uniform land surface with spatially varying soil moisture content. Soil moisture will be measured with ESTAR on board the NASA P-3 aircraft. NASA's Lidar Atmospheric Sensing Experiment (LASE) will also be flown on board the P-3 aircraft. The LASE instrument, reconfigured to fly on the P-3, will be capable of resolving the vertical and horizontal structure of the developing CBL, including information on the two-dimensional moisture structure of the atmospheric boundary layer. LASE and ESTAR together will provide a unique and comprehensive mesoscale remote sensing data set for studying the evolution of the CBL and its relation to the land surface. This study will benefit from complementary data from the Canadian Twin Otter aircraft (real-time images of boundary layer structure obtained by LASE can be used, when appropriate, to guide the Twin Otter). Other in situ surface and tower measurements, and satellite remote sensing data will also be used in this study. The primary goals of this research are: 1) evaluation of the influence of soil moisture on the local surface energy budget (SEB) over the SGP97 region; 2) evaluation of the influence of mesoscale spatial variability in the SEB on CBL development, including CBL depth and cloud cover; 3) quantification of the CBL water vapor budget (advection, entrainment, evapotranspiration) using remotely sensed and in situ data; and 4) investigation of microscale mechanisms responsible for the entrainment of tropospheric air into the CBL. Contact: Kenneth J. Davis, Assistant Professor phone: 612-625-2774 Department of Soil, Water, and Climate fax: 612-625-2208 University of Minnesota email: kdavis@soils.umn.edu 1991 Upper Buford Circle St. Paul, MN 55108-6028 Investigator: Guido D. Salvucci, Boston University, Dept. of Geography 675 Commonwealth Ave., Boston, MA 02215 617-353-8344 Fax 617-353-8399 gdsalvuc@bu.edu Title: Detection and modeling of transitions between atmosphere and soil limited evapotranspiration in the Southern Great Plains summer 1997 experiment Abstract: Salvucci [WRR 33(1), 111-122, 1997] presented a simple diagnostic model of bare soil evaporation which expresses the daily rate of evaporation during soil-limited periods as a function of the duration (td) and average rate (ep) of stage-one (potential) evaporation. The model does not require in situ estimates of soil hydraulic properties or initial water content, as these are implicitly related to td and ep. Surface and remote observations of detectable changes in near-surface moisture content, temperature, and albedo may be used to estimate the transition time (td).
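To make the td/ep idea concrete, here is a hedged sketch: once stage-one evaporation ends at time td, soil-limited evaporation can be written without any soil hydraulic properties. The square-root-of-time desorption form below is a classical stand-in used purely for illustration; the exact expression in Salvucci (1997) may differ.

```python
# Illustrative sketch only: a classical desorption-based drydown curve
# parameterized by the stage-one duration td and potential rate ep.
import numpy as np

ep = 5.0   # stage-one (potential) evaporation rate, mm/day (invented)
td = 3.0   # duration of stage-one evaporation, days (e.g., detected from a
           # change in microwave-observed surface moisture dynamics)

def daily_evaporation(t):
    """Evaporation rate on day t of a drydown (mm/day)."""
    return ep if t <= td else ep * np.sqrt(td / t)   # continuous at t = td

for t in (1, 3, 6, 12):
    print(t, round(daily_evaporation(t), 2))
```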
With extensions to estimate stressed transpiration from grasses, the model thus has the potential to yield ET estimates over large areas using satellite data. The microwave estimates of soil moisture collected over the month-long SGP experiment will be used in conjunction with concurrent surface flux measurements taken at the ARM sites to further test and develop this methodology, with special emphasis on the detection of transition time via microwave-estimated surface soil moisture dynamics. References: Salvucci, G.D., 1997. Soil and moisture independent estimation of stage-two evaporation from potential evaporation and albedo or surface temperature, Water Resources Research, 33(1), 111-122. Sponsor: NASA Grant NAGW-5255 "Thermal and Hydrologic Signatures of Soil Controls on Evaporation" Investigator: Eni G. Njoku Institution: Jet Propulsion Laboratory Title: Multichannel land parameter retrieval at different spatial scales Abstract: Soil moisture is the dominant effect on microwave emission from soils at L- to C-band for soils with low to moderate vegetation. Surface roughness, temperature, and low-opacity vegetation cover affect soil microwave emission, but to lesser extents than soil moisture. As the opacity of vegetation cover increases, it becomes the dominant effect on the microwave emission and can mask the soil moisture signal. Multifrequency retrieval algorithms are a means for utilizing the varying sensitivity of brightness temperature to the surface parameters at different frequencies to correct for vegetation, roughness, and temperature in retrieving soil moisture. Theoretical simulations using models based on recent empirical data show that multichannel algorithms should work well in practice. However, there have been few opportunities to demonstrate this in actual field experiments. SGP'97 provides an opportunity for such a demonstration. Truck-based L-, S-, and C-band measurements are planned, providing data at a local scale, and L-band aircraft data and AVHRR satellite data will be available at the 1-km resolution scale. SSM/I data will be available at a 50-km resolution scale, providing a historical database of 19.3 and 37 GHz brightness temperatures over the SGP'97 site at that scale. We will provide the AVHRR and SSM/I data to the SGP'97 experiment database as a contribution of this investigation. Soil moisture retrievals will be performed at three scales, using different algorithms and available data sets: (1) local - truck-based; (2) regional - aircraft microwave/satellite AVHRR; (3) time-series - satellite SSM/I. Soil moisture retrievals for these cases will be compared with in-situ observations and output from numerical models over the SGP'97 site, and results of the analyses will be published. Research using the truck-based, aircraft, in-situ, and model data will be performed in collaboration with the data providers. References: Njoku, E.G. and D. Entekhabi (1996): Passive microwave remote sensing of soil moisture. J. Hydrology, 184, 101-129. Njoku, E. G., S. J. Hook, and A. Chehbouni (1996): Effects of surface heterogeneity on thermal remote sensing of land parameters. In: Scaling Up In Hydrology Using Remote Sensing (J. B. Stewart, E. T. Engman, R. A. Feddes, and Y. Kerr, Eds.), Wiley, New York. Investigator(s)/Institution(s): Paul R. Houser (NASA-GSFC) and Jim Shuttleworth (U. of Arizona) Title: Regional In-Situ Profile Soil Moisture and Surface Energy Flux Observations in support of the 1997 Southern Great Plains Experiment.
Abstract: Our contribution to the Southern Great Plains 1997 experiment will be in four areas: (1) general mission support through surface gravimetric sampling and processing, (2) profile soil moisture observations using TDR and gravimetric techniques, (3) soil characterization at selected sites, and (4) operation of a surface energy and water flux station at the ARM Central Facility. Observations of Profile Soil Moisture and Characteristics: The primary objective of the Southern Great Plains 1997 (SGP97) Experiment is to map soil moisture using an airborne passive microwave radiometer (ESTAR, Le Vine et al., 1992) over a 60 km by 250 km area in central Oklahoma for a one-month period during the summer of 1997 (Jackson, 1996). Passive microwave instruments are only sensitive to moisture in the top few centimeters of soil, but knowledge of moisture in the entire soil profile is essential for hydrologic, ecologic, and climatic studies (Wei, 1995; Ragab, 1995; Jackson, 1980). Therefore, profile soil moisture observations will be essential for understanding the relationship between the remotely-sensed measurements and deeper moisture stores. Profile measurements will enable further development and validation of methodologies that extend remotely sensed surface soil moisture estimates to the entire root zone (Jackson, 1980), will enable the definition of vertical soil moisture error correlation structures which are essential in soil moisture data assimilation studies, and will help to calibrate existing profile sensors. Profile soil moisture observations using Campbell heat dissipation probes are currently in place in the SGP97 area at 14 Little Washita Micronet, 5 Oklahoma Mesonet, 2 ARM Central Facility, and 5 El Reno sites. Observations made with these sensors are known to vary with soil characteristics and temperature; therefore, each of these sites will be instrumented with an ESI MoisturePoint profile TDR that will be monitored daily during SGP97 (installation done prior to the experiment by Pat Starks, USDA-ARS El Reno), and profile gravimetric observations at selected sites (mostly at El Reno) will be collected as frequently as possible (selected soil cores will be sent to the USSL for water retention and soil characterization analysis). The TDR probes and MoisturePoint equipment for this plan are currently available (Pat Starks, USDA-ARS, and Ron Elliott, OK Mesonet), and both truck-mounted and hand-operated gravimetric sampling equipment is available (USSL, Binayak Mohanty), but truck sampling may be limited to the El Reno facility. The existing profile soil moisture sensors are located next to weather observation stations that are typically on the edges of fields in non-characteristic soil and vegetation. To assess the representativeness of these observations, additional in-field TDR profile observations will be made at a subset of sites (2 at the ARM Central Facility, 2 at El Reno, and 1 at the Little Washita). It is thought that a minimum of 3 in-field TDR observations will be necessary at each of these sites to assess the field-average profile soil moisture. At one site (El Reno) a larger number of in-field TDR observations (9 samples) will be made to determine if 3 samples is adequate for determination of the in-field average profile soil moisture. Approximately 4 of these 21 additional probes are currently available (Pat Starks, USDA-ARS), leaving ~17 to purchase ($350 ea. x 17 probes = $5,950)!
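A back-of-the-envelope way to pose the "3 versus 9 probes" question is sketched below: compute the mean of all nine in-field TDR profiles at the densely sampled site and check how far the worst three-probe subset mean can stray from it. The water contents are invented for illustration.

```python
# Rough adequacy check: is a 3-probe mean close enough to the 9-probe mean?
import itertools
import numpy as np

theta = np.array([0.21, 0.24, 0.19, 0.26, 0.22, 0.25, 0.20, 0.23, 0.27])  # m3/m3

field_mean = theta.mean()
# Means of every possible 3-probe subset of the 9 probes:
subset_means = [np.mean(c) for c in itertools.combinations(theta, 3)]
worst_error = max(abs(m - field_mean) for m in subset_means)
print(f"9-probe mean = {field_mean:.3f}, worst 3-probe error = {worst_error:.3f}")
# If the worst-case error is small relative to the accuracy target
# (say, a few hundredths m3/m3), 3 in-field probes may be adequate.
```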
Observations of Surface Water and Energy Fluxes: The DOE-ARM program has embarked on an extensive environmental observation program in the Oklahoma and Kansas area. As part of this program, observations of surface water and energy fluxes are being performed with eddy correlation and Bowen ratio techniques. To characterize the quality of these observations for use in applications such as validation and calibration of regional land surface and atmospheric modeling projects, a well-established eddy correlation system will be co-located with the ARM surface flux measurement sites at the ARM Central Facility. The University of Arizona's CO2/H2O eddy correlation system (Shuttleworth) will initially be co-located with other mobile surface flux measurement systems at the El Reno facility for a period of a few days just prior to the SGP97 experiment for intercomparison. During this time, two other Campbell LiCor Bowen ratio systems may be deployed and maintained at El Reno as part of this project. The UA eddy correlation system will be re-deployed to the ARM Central Facility at the start of the SGP97 experiment. It will be located near the ARM Bowen ratio system in rangeland vegetation for two weeks, and near the ARM eddy correlation system in a winter wheat field for two weeks. The exact location and height of the UA system may vary from the ARM sensors to minimize fetch problems. Personnel: Paul Houser (available for experiment duration) Chawn Harlow (available for experiment duration) Jim Shuttleworth (questionable availability) NASA-GSFC: Houser's salary, computer support, GPS NASA-HQ: Houser's travel, and hopefully some equipment U of Arizona: NASA Contract NAS-5-3492 will provide salary and travel for 1 student, computer support, 1 flux station Cooperator(s): USDA-ARS (Pat Starks at El Reno): cooperating on MoisturePoint TDR sampling USDA-ARS-SL (Binayak Mohanty): use of soil sampling equipment, possibly including a hydraulic press for use at El Reno Oklahoma Mesonet (Ron Elliott): use of 2-3 MoisturePoint "boxes" References: Jackson, T. J., 1996. Southern Great Plains 1997 (SGP97) Experiment Plan, http://hydrolab.arsusda.gov/~tjackson/. Jackson, T. J., 1980. Profile Soil Moisture from Surface Measurements. Journal of the Irrigation and Drainage Division, June 1980. Le Vine, D. M., A. Griffis, C. T. Swift, and T. J. Jackson, 1992. ESTAR: A Synthetic Aperture Microwave Radiometer for Measuring Soil Moisture. International Geoscience and Remote Sensing Symposium 1992, Vol. 1. Ragab, R., 1995. Towards a continuous operational system to estimate the root-zone soil moisture from intermittent remotely sensed surface moisture. Journal of Hydrology, 173:1-25. Wei, Ming-Ying, editor, 1995. Soil Moisture: Report of a Workshop Held in Tiburon, California, 25-27 January 1994. NASA Conference Publication 3319. Primary Contact: Paul R. Houser houser@hydro4.gsfc.nasa.gov (301) 286-7702 fax (301) 286-1758 NASA's Goddard Space Flight Center Hydrological Sciences Branch / Data Assimilation Office Code 974 (Bldg. 22, Room C277) Greenbelt, MD 20771 Participation in SGP97 from the Center for Hydrology, Soil Climatology and Remote Sensing The Center for Hydrology, Soil Climatology, and Remote Sensing (HSCaRS) under NASA sponsorship has as one of its objectives to develop a Local-scale Hydrology Model (LHM) and a Regional-scale Hydrology Model (RHM) that can utilize periodic input of remotely-sensed soil moisture data to "adjust" the surface soil moisture field used to calculate root zone moisture.
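A minimal sketch of such an "adjust" step is given below: a generic Newtonian-relaxation (nudging) update of a modeled surface soil moisture field toward a remotely sensed estimate. This is not the LHM/RHM formulation itself; the gain and field values are invented.

```python
# Generic nudging update: pull model surface moisture toward an observation.
import numpy as np

theta_model = np.array([0.18, 0.22, 0.30, 0.25])  # model surface layer (m3/m3)
theta_rs    = np.array([0.15, 0.24, 0.28, 0.27])  # remotely sensed estimate

K = 0.5  # gain: 0 = ignore the observation, 1 = replace model with observation
theta_adjusted = theta_model + K * (theta_rs - theta_model)
print(theta_adjusted)  # adjusted surface field, then used for root-zone update
```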
In addition, we recognize the need to address the issue of disaggregating large-pixel soil moisture data from satellites to the process scale represented in the hydrologic models. The Southern Great Plains 1997 (SGP97) Experiment will provide data necessary for HSCaRS to pursue its hydrologic modeling research objectives. HSCaRS will provide support to the SGP97 Experiment and acquire additional characterization information needed for hydrologic modeling by conducting research in the following five areas: 1.) Relate surface soil moisture measurements to the soil moisture profile: We will install and operate a soil profile station (see description below) on each of two plots in the vicinity of the calibration plots to relate the observed surface soil moisture to the underlying soil moisture profile. One energy balance Bowen ratio (EBBR) station is available for deployment at the SLMR calibration site to relate soil moisture changes to surface energy fluxes. Depending on which site is selected for calibration, we may instead choose to deploy the EBBR in the Little Washita River basin. Additional meteorological measurements, including rainfall, air temperature, relative humidity, shortwave and infrared radiation, and wind direction and speed, will be made at the SLMR site. Chip Laymon (GHCC) will service these stations and will also assist Peggy O'Neill in SLMR operation and data acquisition. Up to four additional soil profile stations will be deployed in the Little Washita River watershed to a.) provide additional points for relating remotely-sensed surface soil moisture to the underlying soil moisture profile, b.) relate SHAWMS soil profile measurements at field borders to measurements within the field, and c.) provide time continuity to periodic manual soil moisture profile measurements to be made at approximately 20-30 sites in the SGP97 study area (coordinated by Paul Houser). Bill Crosson (GHCC) will be the lead on this activity. Description of Soil Profile Stations: Soil moisture and temperature measurements will be made at several depths down to about 75 cm in each pit. Soil moisture will be measured using Water Content Reflectometers (Campbell Scientific, Inc.), a device based on time domain reflectometry, and using Soil Moisture Probes (Radiation and Energy Balance Systems), a device based on electrical resistance. Soil temperature will be measured in each pit using soil thermistors. Ground heat flux will be determined using a heat flux plate installed at 5 cm depth, plus the heat storage in the upper 5 cm layer calculated from the time rate of change of temperature, which is measured using 4-sensor averaging thermocouple probes installed at 1, 2, 3 and 4 cm depths. We are currently examining techniques to derive the soil dielectric constant from Water Content Reflectometers or similar sensors. At this point this appears feasible; if so, we will provide dielectric constant profiles at one or more of the profile stations. This information should be valuable in understanding both SLMR and ESTAR measurements vis-a-vis soil moisture measurements in the upper 5 cm as well as in the profile. 2.) Soil hydraulic property characterization: Accurate knowledge of the spatial distribution of soil hydraulic properties is necessary for SGP97 soil moisture retrieval as well as for hydrologic modeling activities. Soil profiles will be described and sampled for texture, hydraulic conductivity, bulk density and porosity at the sites where the HSCaRS soil profile stations are installed.
A representative grass and winter wheat field in the Little Washita River watershed will be sampled (up to 100 samples each) for surface hydraulic properties. All soil samples will be analyzed at Alabama A&M University. Teferi Tsegaye (Alabama A&M University) will be the lead on this activity. 3.) Classify vegetation: An accurate land cover classification is necessary for the SGP97 soil moisture retrieval algorithm and subsequent hydrologic modeling. Landsat TM data will serve as the basis of the classification. HSCaRS will provide personnel to support this effort being coordinated by other SGP97 team scientists. Ahmed Fahsi (Alabama A&M University) will assist in this activity and coordinate additional student support provided by Alabama A&M University. 4.) Surface soil moisture variability: Some understanding of the spatial variability of surface soil moisture is required to a.) assess the accuracy of using a limited number of gravimetric samples for remote sensing verification, b.) assess the accuracy of the remote sensing technique in representing the mean surface moisture of the field, c.) assess the linearity of the integration of moisture variability by the ESTAR instrument within a single pixel, d.) test mixed-pixel algorithms, and e.) evaluate field- and sub-watershed-scale hydrologic processes. While this activity will be conducted with a large cooperative group from many institutions, HSCaRS scientists from GHCC and Alabama A&M University have contributed significantly to developing the science and implementation plans for this activity. Teferi Tsegaye has particular interest in studying field-scale variability, and Chip Laymon and Bill Crosson have interests in the application of these data to remote sensing interpretation and verification of hydrologic models. In addition to field sampling, Chip Laymon is developing a GIS application for rapid mapping and evaluation of the field measurements. Site information and field measurements will be downloaded nightly from portable data recorders to a PC. These data can then be uploaded into a GIS application for mapping and production of soft- and hard-copy output, and thereby used by the field team leaders in redirecting labor resources the next day. In addition, near "real-time" visualization of the field measurements will contribute greatly to morale by making the science more tangible and understandable to those participating. 5.) Develop and test surface TDR measurement capability: The surface soil moisture variability study (#4 above) is dependent on a portable, rapid measurement technique. Recent advances in time domain reflectometry techniques have resulted in sensors with "on-board" signal processing. We are currently investigating the ability to modify several off-the-shelf products for use in surface (0-5 cm) soil moisture determination. Preliminary results indicate that we will be successful in providing an instrument for use during SGP97. Current research is focusing on sensor intercomparison and calibration. Recommendations on equipment are forthcoming. HSCaRS Participants: Global Hydrology and Climate Center Chip Laymon weeks 1, 2, 4 chip.laymon@msfc.nasa.gov Bill Crosson weeks 1, 3, 4 bill.crosson@msfc.nasa.gov Vishwas Soman vishwas.soman@msfc.nasa.gov Alabama A&M University Ahmed Fahsi afahsi@asnaam.aamu.edu Teferi Tsegaye tsegaye@asnaam.aamu.edu Andrew Manu amanu@asnaam.aamu.edu Rajbhandari Narayan rajbhandari@asnaam.aamu.edu ~5-8 grad. students, 2 weeks each? Investigator(s)/Institution(s): P.J. van Oevelen, Dept.
Water Resources, WAU, Wageningen, The Netherlands M. Menenti, Winand Staring Centre, Wageningen, The Netherlands Title of Investigation: Estimation of spatial soil moisture fields using sensor fusion: SSM/I, ERS, Radarsat and ESTAR Abstract: Microwave radiometry has been widely accepted as the most practical tool to estimate spatial soil moisture fields; especially at L-band, the results have been encouraging. However, currently there are no spaceborne microwave radiometers available with an acceptable resolution to be used in watershed studies. Therefore, the usefulness of SAR, in particular Radarsat and ERS, to estimate the same type of soil moisture fields as is possible with the airborne ESTAR (at a resolution of 1 km) will be investigated. The combination of data originating from various sensors to estimate the same property is referred to as sensor fusion. Within the EOS framework this study will also investigate the usefulness of low-resolution SAR systems such as ASAR and the application of these fields in numerical weather prediction models. To facilitate this study, an extensive soil moisture measurement campaign will be set up using portable TDRs (time domain reflectometry), an FD (frequency domain) sensor along transects/grids, and the EM38 instrument to give a more spatially averaged measurement over the same transect/grid. The grid size and spatial sampling scheme should be set up such that the measurements are representative enough to cover the spatial resolutions of the various sensors (25 m up to 1 km). All these measurements should occur as closely as possible to the overpass times of the various instruments. Investigator(s)/Institution(s): Larry Mahrt (Oregon State University) and Jielun Sun (University of Colorado/NCAR) Title: Aircraft-measured surface fluxes and their relationship to soil moisture Abstract: The Canadian Twin Otter and the NOAA LongEZ will be deployed during SGP to measure the spatial variability of fluxes of heat, moisture and carbon dioxide. The LongEZ will primarily fly low-level flights below 50 m (subject to final FAA approval) to concentrate on surface flux measurements, while the Twin Otter will fly multiple levels to include the vertical structure of the boundary layer and assessment of entrainment of dry air. Two principal modes of operation will be "chasing" spatial gradients of surface moisture and coordinated flights with the P-3. Additional flights will feature tower-aircraft flux comparisons. The aircraft data, and eventually the tower flux, Mesonet and sounding data, will be archived at Oregon State. The aircraft data will be quality controlled and evaluated in terms of flux sampling errors. The analyzed fluxes will be provided to the community along with a suite of other processed parameters such as surface roughness and surface radiation temperature. The analyzed fluxes from the two aircraft will be combined with the sounding data, the Mesonet data, LASE water vapor measurements, ESTAR brightness temperature and the soil moisture estimates to examine the response of the boundary layer to spatial variations of the soil moisture and the feedback of boundary layer evolution on the surface moisture fluxes. For example, drier surface conditions lead to greater heat flux, boundary layer growth and entrainment drying, which reduces the surface relative humidity. For a given soil moisture, this enhances the soil moisture loss. Its effect on transpiration depends on stomatal control.
Methods are being developed to estimate area-averaged moisture fluxes by modelling the evaporative fraction in terms of remotely sensed variables, including the surface radiation temperature, red and near-infrared channels, and the microwave band. Larry Mahrt COAS OSU Corvallis, OR 97331 mahrt@ats.orst.edu 541 737 5691 fax 2540 Jielun Sun MMM NCAR P.O. Box 3000 Boulder, CO 80307 jsun@elder.mmm.ucar.edu 303 497 8994 fax 8171 Space-Time Characterization of Soil Moisture Variability for Assessment of Sampling Errors by Space-Borne Sensors and Related Ground Truth Issues Investigators: Juan B. Valdes, Department of Civil Engineering and Climate System Research Program Gerald R. North, Department of Meteorology and Climate System Research Program Abstract There is a great need for a set of soil moisture observations that cover large areas and time intervals. The available records of Washita '92 have been extensively analyzed and used in our research, but the data set has some limitations in both temporal and areal extent. The planned experiment would greatly improve the data availability of soil moisture. In our research we are planning to use those measurements to characterize the space-time spectrum of soil moisture, to be used in the estimation of sampling errors by sensors that are intermittent in time and/or space. The measurements will also be used to estimate nominal parameters for one-layer/two-layer models of the upper soil zone to carry out controlled experiments of proposed missions. The statistics of the observed point values on the ground and the observed surrogates on the overflights will be used to determine the possible bias, in a procedure similar to the one carried out for precipitation. Juan B. Valdes Department of Civil Engineering Texas A&M University College Station TX 77843-3136 (409) 845-1340 (409) 862-1542 FAX e-mail: jvaldes@tamu.edu SGP-97: An Integrated Validation Framework Investigators: B.P. Mohanty, P. Shouse, M. Th. van Genuchten (U.S. Salinity Lab) Rationale: The spatio-temporal dynamics of water and energy transport across the soil-atmosphere boundary layer in relation to climate change, hydrology, near-surface thermodynamics, and land use are still poorly understood. The problem of accurately estimating regional-scale soil water contents of the near-surface, variably-saturated (vadose) zone is complicated by the overwhelming heterogeneity of both the soil surface and the subsurface, the highly nonlinear nature of local-scale water and heat transport processes, and the difficulty of measuring or estimating the subsurface unsaturated soil-hydraulic functions (the constitutive functions relating soil water content, soil-water pressure head and the unsaturated hydraulic conductivity) and soil thermal properties (heat capacity and soil thermal conductivity). As remote sensing techniques make it increasingly possible to obtain large-scale soil water content and heat flux measurements, validation of these measurements using ground-based data and/or indirect estimates from relevant soil, landscape, and vegetation parameters is essential. Objective: The overall objective of our project is to develop and evaluate an "integrated validation framework" for remote sensing data of soil moisture content in the shallow subsurface. Specific scopes of our investigation for the SGP-97 experiment will include: 1.
Coupling of digital soil maps (e.g., SSURGO, STATSGO) with soil hydraulic and thermal property databases (e.g., UNSODA) using ARC/INFO geographical information systems (GIS) and neural network (NN) based pedotransfer functions (PTFs) (in collaboration with Doug Miller and others). 2. Identification of important soil (e.g., soil type, texture, porosity, bulk density), landscape (e.g., slope, aspect, elevation, depth to water table), and land use/cover (vegetation type, vegetation density, management practice, etc.) parameters for establishing pedotransfer functions to describe soil hydrologic and thermal properties of relatively large land areas (in collaboration with Jay Famiglietti, Charles Laymon, Doug Miller, Paul Houser, and others). 3. Measurement of soil water retention and hydraulic conductivity functions across the space and time domains of the SGP-97 experiment (in collaboration with Paul Houser and others). 4. Investigation of the suitability of different exploratory data analyses, Bayesian statistics, spatial statistics, and numerical or other up-scaling techniques for estimating effective soil hydraulic and thermal parameters of larger land areas (pixels) from point measurements in the vadose zone (in collaboration with Dennis McLaughlin and Dara Entekhabi). The ultimate purpose of this research is to obtain pixel-scale estimates of the soil hydraulic and soil thermal properties for possible use in land-soil-atmosphere interaction simulation models to test space-borne measurements of transient soil moisture and soil temperature data, thereby yielding an alternative (or supplement) to ground-truth measurements. Investigator/Institution: Jay Famiglietti, University of Texas at Austin Title: Ground-Based Investigation of Spatial-Temporal Soil Moisture Variability in Support of SGP '97 Abstract: Surface (0-5 cm) soil moisture exhibits a high degree of variability in both space and time. However, larger-scale remote sensing integrates over this variability, masking the underlying detail observed at the land surface. Since many earth system processes are nonlinearly dependent upon surface moisture content, this variability must be better understood to enable full utilization of the larger-scale remotely-sensed averages by the earth science community. The overall goals of this investigation are to (a) characterize soil moisture variability at high spatial and temporal frequencies; (b) understand the processes controlling this variability (e.g., precipitation, topography, soils, vegetation); and (c) determine how well this variability is represented in a time series of 1-km (approximately) remotely-sensed soil moisture maps.
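The nonlinearity point can be made with a toy example: when a process depends nonlinearly on soil moisture, the pixel-mean response differs from the response evaluated at the pixel-mean moisture. The cubic function below is a made-up stand-in for, e.g., a runoff or conductivity curve.

```python
# Toy demonstration of why sub-pixel variability matters under nonlinearity.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0.10, 0.40, 1000)   # sub-pixel surface moisture (m3/m3)

f = lambda th: th**3                    # illustrative nonlinear process
print(f(theta.mean()))                  # response of the mean:  ~0.016
print(f(theta).mean())                  # mean of the responses: ~0.021
# The gap between the two is what a 1-km average alone cannot capture.
```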
Specific tasks are to (a) quantify the spatial-temporal variability of surface moisture content (mean, variance, distributional form, spatial pattern) in selected, representative quarter sections by means of supplementary sampling; (b) assess the accuracy of the remotely-sensed soil moisture maps by comparing ESTAR-derived mean moisture contents to those observed in the field; (c) assess the representativeness of remotely-sensed maps of mean moisture content with respect to the underlying variance within quarter sections; (d) determine how well larger-scale (full section to small watershed scale) observed patterns of soil moisture are preserved by the remotely-sensed maps; and (e) characterize the processes controlling soil moisture variability from the quarter-section to the small watershed scale, with implications for the environmental factors which influence spatial-temporal variations in the accuracy and representativeness of the remotely-sensed soil moisture maps. A team of seven researchers (listed below) will conduct this investigation and will be on site for the full duration of the experiment. Site selection and the spatial-temporal frequency of intensive sampling are currently under investigation in collaboration with other SGP investigators. A portable sampling methodology, critical to the feasibility of this effort, is also under study at MSFC, with promising results to date. Beyond the implications outlined above, the proposed research will also have significance with respect to: sensor sensitivity and the design of future instruments; the potential utility and success of larger-scale remote sensing (i.e., in the presence of greater heterogeneity); improved understanding of soil moisture variability across spatial-temporal scales and its role in land-atmosphere interactions; and the parameterization of soil moisture and related processes in models of land surface hydrology. Sponsors: NASA, NSF, University of Texas Geology Foundation Participants: Stewart Franks Marcia Branstetter Tel: 512-471-8547 marcia@maestro.geo.utexas.edu Johanna Devereaux Tel: 512-471-8547 jdev@mail.utexas.edu Karen Mohr Tel: 512-471-8547 kmohr@maestro.geo.utexas.edu Jay Famiglietti Tel: 512-471-3824 jfamiglt@maestro.geo.utexas.edu Steve Graham Tel: 512-471-5023 steveg@mail.utexas.edu Matt Rodell Tel: 512-471-5762 mattro@mail.utexas.edu All at: Department of Geological Sciences, University of Texas at Austin, Austin, TX 78712, Fax: 512-471-9425 INVESTIGATORS: Ronald L. Elliott, Professor, and Gabriel B. Senay, Post-Doctoral Fellow INSTITUTION: Biosystems & Agricultural Engineering Dept. Oklahoma State University Stillwater, OK TITLE: In-Situ Soil Moisture Intercomparisons and Scale-Based Validation of an ET/Soil Moisture Model ABSTRACT: Our investigations will be focused on two topics: (1) intercomparisons of soil moisture measurements; and (2) validation of evapotranspiration/soil moisture modeling at various spatial scales. These investigations will depend on ground and remote sensing data that are collected during the SGP97 experiment, as well as measurements that are made on an ongoing basis in Oklahoma. Analyses related to topic (1) will be conducted in the relatively near term, whereas studies of topic (2) will be longer term in nature. (1) The senior investigator has been directly involved in the addition of soil moisture sensors to 60 of the 114 Mesonet sites across Oklahoma.
These sensors include a single TDR (time domain reflectometry) probe that provides layered data from five soil depths down to 90 cm, and four heat dissipation devices installed at depths of 5, 25, 60, and 75 cm. The TDR measurements are made periodically and provide data on volumetric water content, whereas the heat dissipation sensors are logged continuously and provide data on soil water potential. We not only seek to check the consistency between these two sources of data, but also to develop a soil- and sensor-specific calibration of the heat dissipation sensors to volumetric water content. The more intensive TDR sampling that will be done as part of SGP97 will enable us to expand these calibration data sets for the Mesonet sites in the study area. Furthermore, the surface (and perhaps profile) gravimetric sampling that will be done as part of SGP97 will provide a third, independent set of soil moisture data. With soil bulk density information from the sampling sites, the gravimetric data will be converted to volumetric water content and compared to the in-situ measurements. The OSU investigators will help to support the gravimetric sampling in the northern part of the SGP97 study area. (2) The investigators and their colleagues are developing a GIS-based simulation model for estimating daily latent heat flux (evapotranspiration) and soil moisture at various scales across a heterogeneous landscape. The model is physically based, tracks the soil water balance, and makes use of three data "layers" -- soil, vegetation, and weather. The highest-resolution data layers consist of 4-hectare cells, each of which is considered homogeneous. Mesonet sites are well suited for validating the model at "points", but it becomes much more problematic to validate at larger scales. Soil moisture and surface flux measurements from SGP97 will provide a valuable data set for checking the model at various space (and time) scales. This work will be funded through the combined support of the Oklahoma Agricultural Experiment Station and the Oklahoma NSF and NASA EPSCoR programs. Investigator(s)/Institution(s): Shafiqul Islam, University of Cincinnati Title: Scaling Properties of Soil Moisture Images Abstract: An outstanding research question critical to the integration of remotely sensed soil moisture into global models is how adequately the inherent spatial heterogeneity is represented at scales commensurate with current-generation mesoscale and global climate models. To address this question, a framework is needed that can bridge the scale gap between the scale of remote sensors and large-scale model resolution while taking into account the role of spatial heterogeneity. Recent research on spatial rainfall and streamflow has shown that they may exhibit scaling/multiscaling characteristics (Gupta and Waymire 1990). Our analysis of remotely sensed soil moisture images from the Washita '92 experiment has shown that soil moisture also exhibits multiscaling properties (Hu et al., 1997). We hypothesize that the soil moisture images can be decomposed into large-scale feature parts and small-scale fluctuation parts. This decomposition will not make any a priori assumption regarding the structure of the soil moisture fields. Our preliminary results suggest the presence of simple scaling for the small-scale fluctuation parts. The limitations imposed by the data have allowed only three levels of decomposition, and it is not clear over what range of scales such simple scaling exists.
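For readers unfamiliar with moment-scaling diagnostics, the sketch below illustrates the kind of check involved: aggregate a transect to coarser scales, compute moments of the small-scale fluctuations, and examine how they vary with scale. The synthetic transect stands in for a row of an ESTAR image and is not meant to reproduce the investigators' decomposition.

```python
# Simple moment-scaling check on a synthetic 1-D soil moisture transect.
import numpy as np

rng = np.random.default_rng(1)
x = 0.25 + 0.05 * rng.standard_normal(512)   # synthetic transect (m3/m3)

for scale in (1, 2, 4, 8, 16):
    coarse = x[: len(x) // scale * scale].reshape(-1, scale).mean(axis=1)
    fluct = np.abs(np.diff(coarse))          # small-scale fluctuation part
    moments = [np.mean(fluct**q) for q in (1, 2, 3)]
    print(scale, [f"{m:.2e}" for m in moments])
# Simple scaling: log(moment) vs log(scale) is linear with slopes
# proportional to q; multiscaling: the slopes are a nonlinear function of q.
```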
Using SGP97 data, we will explore and hopefully establish a relationship among the multiscaling properties observed in rainfall, soil moisture, and other land surface variables. References: Gupta, V.K. and E. Waymire (1990): "Multiscaling properties of spatial rainfall and river flow distributions", J. Geophys. Res. 95 (D3), 1999-2009. Hu, Z., S. Islam, and Y. Cheng (1997): "Statistical characterization of remotely sensed soil moisture images", in press, Remote Sensing of Environment. Investigator(s)/Institution(s): Shafiqul Islam, University of Cincinnati, Elfatih Eltahir, Massachusetts Institute of Technology Title: Relative Merits of Microwave Measurements of Soil Wetness and Radar Measurements of Rainfall for the Purpose of Estimating Soil Moisture Profile Abstract: Recent studies in land-atmosphere interactions have shown that large-scale soil moisture information, as well as an estimate of the soil water within the soil column, is essential for accurate partitioning of surface fluxes. Current microwave measurements of soil moisture provide an excellent estimate of the soil water content within the top few centimeters. For the first time, the entire United States will be covered by the NEXRAD systems, which will provide very detailed spatial information on rainfall. We plan to explore a fusion approach that combines microwave measurements of soil moisture and radar measurements of rainfall within a coupled land-atmosphere model to infer the soil moisture profile. In this experiment, we will also compare and contrast the relative merits of microwave (for soil moisture) and radar (for rainfall) measurements for inferring the soil moisture profile in single- and multi-sensor modes. The planned SGP97 data set would be an ideal test bed to examine the validity of this proposed approach of multi-sensor fusion for soil moisture profile estimation. Sponsors: University of Cincinnati and Massachusetts Institute of Technology Shafiqul Islam Cincinnati Earth System Science Program Department of Civil and Environmental Engineering University of Cincinnati Phone: (513) 556-1026 P.O. Box 210071 Fax: (513) 556-2599 Cincinnati, Ohio 45221-0071 email: sislam@fractals.cee.uc.edu Investigators/Institutions: Paul Doraiswamy and Craig Daughtry, USDA/ARS, Remote Sensing and Modeling Laboratory, Beltsville, MD Tom Jackson and Bill Kustas, USDA/ARS, Hydrology Laboratory, Beltsville, MD Jerry Hatfield, USDA/ARS, Soil Tilth Laboratory, Ames, IA Title of Investigation: Study of techniques for retrieval of biophysical parameters from remote sensing, and evaluation of models for leaf area index, biomass and energy balance of different canopies at the SGP experiment site. Abstract The seasonal vegetation dynamics will be monitored using Landsat TM and NOAA AVHRR imagery acquired from May through July 1997. Ground measurements of LAI and green biomass will be made during the June-July period by Craig Daughtry. Several canopy models estimating surface reflectance (Verhoef, W., 1984), LAI (Clevers, J.G.P.W. et al., 1989, and Rahman, H. et al., 1993) and biomass (Moran, M.S. et al., 1995) will be tested for their applicability to three major types of vegetative cover in the SGP study area. Biophysical parameters retrieved from remote sensing using several models will be evaluated. The extrapolation of parameters from field to region scales using models will be investigated for monitoring the vegetation dynamics throughout the summer period.
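As a hedged illustration of the LAI-retrieval models cited above (e.g., Clevers, 1989), the sketch below inverts a vegetation index that saturates exponentially with LAI; the functional form and all coefficients are illustrative, not those of any specific cited model.

```python
# Illustrative LAI retrieval: invert a saturating vegetation-index model.
import numpy as np

vi_inf = 0.65    # asymptotic index value for a very dense canopy (invented)
alpha = 0.40     # extinction-like coefficient (invented)

def vi_from_lai(lai):
    """Forward model: index rises and saturates with LAI."""
    return vi_inf * (1.0 - np.exp(-alpha * lai))

def lai_from_vi(vi):
    """Inverse of the forward model above (valid for vi < vi_inf)."""
    return -np.log(1.0 - vi / vi_inf) / alpha

for vi in (0.20, 0.40, 0.60):
    print(vi, round(lai_from_vi(vi), 2))
# Near saturation (vi -> vi_inf), small index errors map to large LAI errors,
# which is why retrievals degrade over dense canopies.
```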
Landsat TM and AVHRR data will be processed to provide good registration accuracy for correlation with ground samples collected through the study period. Soil moisture and surface energy balance modeling to extrapolate measurements from aircraft and flux stations to the surrounding areas will be investigated in collaboration with T. Jackson and W. Kustas. Geospatial statistical analysis of soil, vegetation, and atmospheric parameters measured on the ground will be used in developing models to study techniques for extrapolating parameters from small to large areas. References Clevers, J. G. P. W. (1989), "The application of a weighted infrared-red vegetation index for estimating leaf area index by correcting for soil moisture", Rem. Sens. Environ., 29:25-37. Moran, M.S., Maas, S.J., and Pinter, P.J., Jr. (1995). Combining remote sensing and modeling for estimating surface evaporation and biomass production. Remote Sensing Reviews, 12:335-353. Verhoef, W. (1984), "Light scattering by leaf layers with application to canopy reflectance modeling: the SAIL model", Rem. Sens. Environ., 16:125-141. Utilizing Data from the Southern Great Plains Experiment with RADARSAT Data Eric F. Wood, Princeton University, Princeton, NJ 08544 T. J. Jackson, USDA ARS Hydrology Lab The goal of our participation in the Southern Great Plains Experiment is to develop improved remote sensing techniques for areal estimation of soil moisture, and to demonstrate that RADARSAT, either alone or in conjunction with other satellite and hydrologic observations, can provide soil moisture fields at regional scales. To date, the application of microwave radar remote sensing to soil moisture estimation has been hampered by several difficulties, including its sensitivity to vegetation and surface roughness, and by the difficulty of relating observations from remote sensing instruments to point measurement values. The planned research activities are the following: 1. Field data collection. In discussion with Tom Jackson, we plan to participate in and focus our collection at the USDA El Reno site. We are assuming that this site will have a surface flux station so that point water and energy balance modeling can be carried out post-experiment. We are also planning on utilizing field-scale data collected in the Little Washita and point measurements from the CART-ARM sites. These data will help us extend the research to scales more consistent with regional estimation. 2. Soil moisture retrievals. Test and develop calibration strategies for soil moisture retrieval algorithms for the RADARSAT satellite data using the above field data, and estimate spatial maps of soil moisture. This work will build on research developed under our SIR-C funding. 3. Analyses. Intercompare remotely sensed soil moisture maps derived from RADARSAT with those developed from airborne ESTAR passive microwave sensors, and with field data collected at the El Reno, Little Washita and CART-ARM sites. 4. Scaling. Study the scaling behavior of both airborne and satellite radar and derived soil moisture fields so as to develop strategies for regional soil moisture estimation with lower-resolution data than that collected in the SGP Experiment. The anticipated results of the research include improved understanding of, and estimation abilities for, soil moisture at catchment to regional scales, and an improved understanding of the relationship between remotely sensed soil moisture and ground observations. Eric F.
Wood Department of Civil Engineering Princeton University Princeton, NJ 08544 Tel: 609-258-4675 Fax: 609-258-2799 (efwood@ceor.princeton.edu) Investigator/Institution: Peter J. Wetzel/NASA GSFC Title of investigation: Validation of the PLACE land surface model using SGP97 observations Abstract: The SGP97 experiment provides a unique opportunity to validate land surface models on scales ranging from point to regional. As part of the ongoing validation of the PLACE (Wetzel and Boone, 1995) model, data from SGP97 will be applied to provide initial conditions for the model and to validate the model's predictions of soil moisture (Wetzel et al. 1996; Boone and Wetzel 1996) and of evaporative fluxes. Eventually it is hoped that a data set can be developed which will be used for validation of other land surface models participating in the Project for Intercomparison of Land-surface Parameterization Schemes (PILPS). References: Wetzel, P. J., and A. Boone, 1995: A parameterization for land-atmosphere-cloud exchange (PLACE): Documentation and testing of a detailed process model of the partly cloudy boundary layer over heterogeneous land, J. Climate, 8, 1810-1837. Wetzel, P. J., X. Liang, P. Irannejad, A. Boone, J. Noilhan, Y. Shao, C. Skelly, Y. Xue and Z.-L. Yang, 1996: Modeling vadose zone liquid water fluxes: Infiltration, runoff, drainage, interflow, Global and Planetary Change, 13, 57-71. Boone, A., and P. J. Wetzel, 1996: Issues related to low resolution modeling of soil moisture: Experience with the PLACE model, Global and Planetary Change, 13, 161-181. Investigator(s)/Institution(s): Christopher J. Duffy Civil and Environmental Engineering Dept., 212 Sackett Bldg Penn State University University Park, PA 16802 (814) 863-4384 (814) 863-7304 fax cjd@ecl.psu.edu Title of investigation: Hydrogeologic Reconnaissance for SGP97 Abstract: This investigation will involve field, library and agency (state, federal) research in order to compile available hydrogeologic data for the SGP97 study sites. The compiled data will include geologic maps (digital and paper), groundwater level maps, and hopefully a reasonable number of historical well records. Field work will involve 1 week of site reconnaissance during June 97 (to be determined), including photographing all stream gaging stations, soil moisture sites, important landforms, geologic outcrops and other features of hydrologic interest. The hydrogeologic data base along with the site photos will be put on a CD-ROM and made available to all investigators. Christopher Duffy will initially work with Doug Miller, who has the soils data compiled. The overall objective is to get at least a baseline of information on groundwater response during the experiment and to get some notion of the historical spatial and temporal variability in groundwater levels. References: Duffy, C. J., A two-state integral-balance model for soil moisture and groundwater dynamics in complex terrain, WRR, 32(8), 2421-2434, 1996. Ground-Based Visible and Near Infrared Radiometry Karen Humes University of Oklahoma Collaborating with: Bill Kustas and John Preuger (flux measurements) Craig Daughtry (vegetation sampling and ground radiometry) Ground-based remote sensing measurements will be acquired in conjunction with flux measurements at the El Reno site and vegetation sampling at various sites.
These measurements will be used to help develop and validate algorithms for several purposes: a) the estimation of surface fluxes with remotely sensed data; b) atmospheric corrections to satellite and aircraft data; c) the estimation of land cover and biomass from remotely sensed measurements. The radiometers to be used will include the 4-band Exotech radiometers (with bandpasses matching the TM and SPOT sensors) and occasional measurements with the ASD hyperspectral radiometer. Relating 19, 37, and 85 GHz field brightness measurements to SSM/I data during the SGP'97 Hydrology Experiment A.W. England, Jasmeet Judge, Brian Hornbuckle, Ed Kim and David Boprie The University of Michigan, Ann Arbor ABSTRACT We propose to monitor 19, 37, and 85 GHz sky- and ground-brightness and thermal infrared ground-brightness at the ARM SGP'97 site and to relate these observations to contemporaneous SSM/I data. The dominant landcover will be senescent winter wheat or, after the wheat is harvested, wheat stubble. The relatively low canopy column density in either case will allow some sensitivity to surface soil moisture at 19 GHz. Our radiometer system will be on a 10 m tower and will view the winter wheat/stubble at the SSM/I incidence angle of 53°. Data will be collected at half-hour intervals for the duration of the experiment. Diurnal vegetation and soil samples will be collected periodically throughout the experiment. SSM/I data will be obtained from NSIDC and will be resampled to the Equal-Area SSM/I Earth grid (EASE-grid) for comparison with the field measurements. We will use our Land Surface Process/Radiobrightness (LSP/R) model to relate brightness at L-, C-, and S-band frequencies, and at the SSM/I frequencies, to surface soil moisture and to local stored water. The LSP/R model has been validated in a series of Radiobrightness Energy Balance Experiments (REBEX) for prairie grassland in fall and winter (REBEX-1) and prairie grassland and bare soil in summer (REBEX-4). Our SGP'97 data will be combined with available meteorological and radiant flux data to validate the LSP/R model for winter wheat/stubble. Once validated, the model will be forced by observed weather and downwelling short- and long-wavelength radiance to predict 19, 37, and 85 GHz brightness for each of the dominant terrains within the SGP'97 region. These brightnesses will be aggregated for each local pixel of the EASE-grid according to landcover fractions to yield a pixel brightness that can be compared with the resampled SSM/I data. We are particularly interested in a running comparison during a significant dry-down period. Investigators/Institutions: J. Ian MacPherson, PI, NRC Canada, Jocelyn Mailhot, co-I, AES/MRB, J. Walter Strapp, co-I, AES/MRB, Stephane Belair, co-I, AES/MRB NRC = National Research Council of Canada MRB = Meteorological Research Branch, AES = Atmospheric Environment Service Title: Mesoscale modelling of the convective boundary layer during SGP97 Abstract: The study addresses one of the main objectives of SGP97, "to examine the effect of soil moisture on the evolution of the atmospheric boundary layer and clouds over the southern great plains during the warm season". The investigation will focus on comparisons of detailed observations during SGP97 with mesoscale simulations using the MC2 (Mesoscale Compressible Community) model (Benoit et al. 1997) coupled with advanced land surface schemes, such as ISBA and CLASS (Noilhan and Planton 1989, Verseghy 1991), two models participating in PILPS.
The high-resolution (order of a few km) models will be complemented with detailed spatial analyses of soil moisture measured with the ESTAR and SLFMR radiometers. The simulations will be compared with various measurements such as LASE, the Twin Otter aircraft turbulent flux observations, surface and tower measurements, and satellite remote sensing data. This will provide a unique opportunity to investigate various aspects of the structure and evolution of the convective boundary layer (CBL) during SGP97, on a variety of regional and local scales. The study also has some connection with another field study, MERMOZ, which has objectives similar to SGP97. MERMOZ took place in Canada during June 1996 and will continue in August 1997, to examine several aspects of the CBL, in particular the influence of soil moisture on CBL evolution and entrainment processes near the CBL top (Mailhot et al. 1997a,b).

References:
Benoit, R., M. Desgagne, P. Pellerin, S. Pellerin, Y. Chartier, and S. Desjardins, 1997: The Canadian MC2: A semi-Lagrangian, semi-implicit wide-band atmospheric model suited for fine-scale process studies and simulation. Mon. Wea. Rev., (in press).
Mailhot, J., and the MERMOZ Scientific Team, 1997a: MERMOZ Project Report. Recherche en prevision numerique, Atmospheric Environment Service, Dorval, Canada, 156 pp.
Mailhot, J., R. Benoit, S. Belair, J.W. Strapp, J.I. MacPherson, N.R. Donaldson, J. Goldstein, F. Froude, M. Benjamin, I. Zawadski and R.R. Rogers, 1997b: The Montreal-96 Experiment on Regional Mixing and Ozone (MERMOZ): An overview and some preliminary results. Bull. Amer. Met. Soc. (submitted).
Noilhan, J., and S. Planton, 1989: A simple parameterization of land surface processes for meteorological models. Mon. Wea. Rev., 117, 536-549.
Verseghy, D.L., 1991: CLASS - A Canadian land surface scheme for GCMs. I. Soil model. Int. J. Climatol., 11, 111-133.

Contacts:
Jocelyn Mailhot, Recherche en Prevision Numerique, Environnement Canada, 2121 Trans-Canada N., Suite 500, Dorval, Quebec, CANADA H9P 1J3. Phone: (514) 421-4760; Fax: (514) 421-2106; e-mail: Jocelyn.Mailhot@ec.gc.ca
Stephane Belair, Recherche en Prevision Numerique, Environnement Canada, 2121 Trans-Canada N., Suite 500, Dorval, Quebec, CANADA H9P 1J3.
Ian MacPherson, Flight Research Laboratory, NRC, Ottawa, ON, K1A 0R6
Walter Strapp, Cloud Physics Research Division, AES, Downsview, ON, M3H 5T4
# Controlling the number of samples

In the simple qpsk_tx_uhd flowgraph below, I am trying to create a slider to control the number of samples, but I am a bit confused about which parameter I should be looking at: the Num samples in the Random Source block, the Samples/symbol in the PSK block, or the Number of points in the Constellation Sink block! Any help with this issue would be highly appreciated.

Flowgraph image:

A digital modulator block takes symbols (discrete digital values) and produces samples (of an RF signal). Therefore, if you want to transmit, say, 1000 samples, and you have 2 samples/symbol set in your modulator, you must feed 1000/2 = 500 values into the modulator.

Change your Random Source block so that its number of samples (values) is the number of samples you want divided by the modulator's samples/symbol setting. You will also need to change the random source's "Repeat" option to "No", otherwise it will repeat those samples forever.
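To make the arithmetic concrete, here is a minimal Python sketch of the calculation. The variable names are illustrative, not GNU Radio Companion's actual parameter IDs:

```python
# Minimal sketch: sizing the Random Source for a target number of
# transmitted samples. Names are illustrative, not GRC's own.
target_samples = 1000   # samples you want the modulator to emit
sps = 2                 # the "Samples/Symbol" setting of the PSK Mod block

num_source_samples = target_samples // sps  # value for Random Source "Num Samples"
print(num_source_samples)                   # -> 500
```

If you drive this from a GUI slider, recompute the Random Source length from the slider value; the Constellation Sink's "Number of points" only controls how many points are drawn on screen, not how many are transmitted.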
# Chapter 16 - Section 16.4 - Thermal-Energy Balance - Example - Page 296: 16.7

$82^{\circ}F$

#### Work Step by Step

We first must find the heat-loss rate H for the walls and for the glass:

$H_{walls} = \frac{A_{walls}\Delta T}{R_{walls}}= \frac{(300) \Delta T}{30}=10 \Delta T$

$H_{glass} = \frac{A_{glass}\Delta T}{R_{glass}}= \frac{(250) \Delta T}{1.8}=139\Delta T$

Setting the total heat loss, $149\Delta T$, equal to the $1\times 10^4$ heat input and solving, we find:

$\Delta T = \frac{1 \times 10^4}{149}=67^{\circ}F$

Thus, when it is 15 degrees outside, it is $15+67=82^{\circ}F$ inside.
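As a quick numerical check of the algebra above, here is a short Python sketch. The units are assumed from the textbook problem (areas in ft², R-values in ft²·°F·h/BTU, heat input in BTU/h):

```python
# Check of the worked solution: total loss coefficient, then solve for dT.
A_walls, R_walls = 300, 30    # wall area and R-value
A_glass, R_glass = 250, 1.8   # glass area and R-value
heat_input = 1e4              # heat supplied to the house, BTU/h (assumed units)

k_total = A_walls / R_walls + A_glass / R_glass  # 10 + 138.9, about 149 BTU/(h*F)
dT = heat_input / k_total                        # about 67 F
print(round(15 + dT))                            # -> 82 (inside temp at 15 F outside)
```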
# Homework Help: Relationship between voltage and distance between source of light from a solar panel

1. Apr 21, 2010

### cgi093

I'm conducting a lab in which I change the distance of a lamp from a solar panel and use a multimeter to measure the resulting voltage. What type of relationship am I supposed to find? My results seem to indicate a relationship with either a quadratic equation or an inverse power equation as its model. Could someone tell me what would make sense here? My class is learning about electricity and circuits, but we haven't yet covered how the concentration of light is dispersed over distance, how that would affect the energy received by a solar panel, etc. Thanks!

2. Apr 21, 2010

### cgi093

Anyone know how this would work?

3. Apr 21, 2010

### zachzach

No, but I naturally would think that the inverse square law of light comes into play somewhere.

4. Apr 21, 2010

### cgi093

Thanks, but since we haven't gone into any more detail with light than diffraction and refraction and stuff like that, I can't really use that too much. Thanks for the input though. Anyone else?

5. Apr 21, 2010

### zachzach

Well, maybe you should learn. The inverse square law for light is a very fundamental one and certainly plays into your problem. I personally think it is the only thing that plays into your problem, along with how the voltage changes with respect to how much power the panel is receiving. http://hyperphysics.phy-astr.gsu.edu/hbase/vision/isql.html

6. Apr 21, 2010

### Hellabyte

Well, I would guess that you would see the voltage across the panel drop as basically something like $$V \propto \frac{1}{r^2}$$. Think about the radiation let out from your light bulb in one really tiny amount of time. It will leave and propagate equally in all directions, creating a spherical shell of radiation. At further distances from the bulb this spherical shell will be larger and have a larger surface area, but the same amount of energy will be spread over it. Because the surface area of a sphere is $$4\pi r^2$$, the power gets spread out as one over this, i.e. $$\propto 1/r^2$$. Here is the Wikipedia article on the basic principle behind this; it shows up everywhere, and the picture is a good demonstration of what I was saying: http://en.wikipedia.org/wiki/Inverse-square_law

7. Apr 21, 2010

### cgi093

Okay, thanks a lot guys. I think this will definitely get me started.
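One way to test the suggestion in this thread against the lab data is to fit a power law on log-log axes and inspect the exponent. The distances and voltages below are made-up placeholders; substitute the actual measurements:

```python
# A minimal sketch: fit V = a * r**b on log-log axes and check whether the
# exponent b is close to -2. All data values here are hypothetical.
import numpy as np

r = np.array([0.2, 0.3, 0.4, 0.6, 0.8])    # lamp-to-panel distance (m), made up
V = np.array([4.1, 1.9, 1.1, 0.50, 0.29])  # measured voltage (V), made up

b, log_a = np.polyfit(np.log(r), np.log(V), 1)  # slope of the log-log fit is b
print(f"V ~ {np.exp(log_a):.2f} * r**({b:.2f})")  # b near -2 supports 1/r^2
```

A caveat worth keeping in mind: the 1/r² law describes the light intensity reaching the panel; whether the panel's voltage tracks that intensity linearly depends on how the panel is loaded, so an exponent somewhat different from -2 would not be surprising.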
# Calculating number of operations in a divide and conquer approach when the input is not an exact power of 2

Here is a divide and conquer approach for finding the minimum and maximum elements in an array.

MaxMin(i, j, max, min)
{
    // max and min are references so that we can retain their values when
    // we return from the function. i and j are the indices of the start and
    // end respectively.
    if (i = j) then                  // Small(P)
        max := min := a[i];
    else if (i = j-1) then           // Another case of Small(P)
    {
        if (a[i] < a[j]) then
        {
            max := a[j]; min := a[i];
        }
        else
        {
            max := a[i]; min := a[j];
        }
    }
    else
    {
        // If P is not small, divide P into sub-problems.
        // Find where to split the set.
        mid := (i + j) / 2;
        // Solve the sub-problems.
        MaxMin(i, mid, max, min);
        MaxMin(mid+1, j, max1, min1);
        // Combine the solutions.
        if (max < max1) then max := max1;
        if (min > min1) then min := min1;
    }
}

If we analyze the complexity of this algorithm, we see that if n is a power of 2, then it does (3n/2)-2 comparisons. My question is: how many comparisons does it do if n is not an exact power of 2? I think it does more than (3n/2)-2 comparisons because at the base level we are left with 3 elements (which take 3 comparisons) instead of 2 elements (which take just 1 comparison). But I am not sure about my thinking. Any help would be appreciated.

• Try to extend the analysis that works for powers of $2$ to general $n$, and see what you get. Also, you can try calculating the number of comparisons for small values of $n$, and then guess the general solution. – Yuval Filmus Apr 18 '16 at 10:31
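Following the suggestion above, the counts for small n can be computed directly. The sketch below is a straightforward Python transcription of the pseudocode, instrumented with a comparison counter:

```python
# Direct Python transcription of MaxMin, instrumented to count element
# comparisons, so the count can be checked when n is not a power of 2.
def max_min(a, i, j):
    """Return (maximum, minimum, comparisons) over a[i..j]."""
    if i == j:                      # one element: no comparison
        return a[i], a[i], 0
    if i == j - 1:                  # two elements: one comparison
        return (a[j], a[i], 1) if a[i] < a[j] else (a[i], a[j], 1)
    mid = (i + j) // 2
    mx1, mn1, c1 = max_min(a, i, mid)
    mx2, mn2, c2 = max_min(a, mid + 1, j)
    # combine step: one comparison for the max, one for the min
    return max(mx1, mx2), min(mn1, mn2), c1 + c2 + 2

for n in (2, 4, 8, 16, 3, 6, 9, 100):
    _, _, c = max_min(list(range(n)), 0, n - 1)
    print(n, c)   # powers of 2 match (3n/2)-2; other n can be read off here
```

Running this confirms the (3n/2)-2 formula for powers of 2 and lets you tabulate the extra comparisons that size-3 base cases introduce for general n.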
Energy dependence of $\phi$ meson production at forward rapidity in pp collisions at the LHC

Abstract: The production of $\phi$ mesons has been studied in pp collisions at LHC energies with the ALICE detector via the dimuon decay channel in the rapidity region $2.5 < y < 4$. Measurements of the differential cross section $\mathrm{d}^2\sigma/\mathrm{d}y\,\mathrm{d}p_{\mathrm{T}}$ are presented as a function of the transverse momentum ($p_{\mathrm{T}}$) at the center-of-mass energies $\sqrt{s} = 5.02$, 8 and 13 TeV and compared with the ALICE results at midrapidity. The differential cross sections at $\sqrt{s} = 5.02$ and 13 TeV are also studied in several rapidity intervals as a function of $p_{\mathrm{T}}$, and as a function of rapidity in three $p_{\mathrm{T}}$ intervals. A hardening of the $p_{\mathrm{T}}$-differential cross section with the collision energy is observed, while, for a given energy, $p_{\mathrm{T}}$ spectra soften with increasing rapidity and, conversely, rapidity distributions get slightly narrower at increasing $p_{\mathrm{T}}$. The new results, complementing the published measurements at $\sqrt{s} = 2.76$ and 7 TeV, allow one to establish the energy dependence …

NSF-PAR ID: 10313940
Journal Name: The European Physical Journal C
Volume: 81
Issue: 8
ISSN: 1434-6044

Related works:

1. Abstract: The production of $\pi^{\pm}$, $\mathrm{K}^{\pm}$, $\mathrm{K}^{0}_{S}$, $\mathrm{K}^{*}(892)^{0}$, $\mathrm{p}$, $\phi(1020)$, $\Lambda$, $\Xi^{-}$, $\Omega^{-}$, and their antiparticles was measured in inelastic proton-proton (pp) collisions at a center-of-mass energy of $\sqrt{s} = 13$ TeV at midrapidity ($|y| < 0.5$) as a function of transverse momentum ($p_{\mathrm{T}}$) using the ALICE detector at the CERN …

2. Abstract: The $p_{\mathrm{T}}$-differential production cross sections of prompt and non-prompt (produced in beauty-hadron decays) D mesons were measured by the ALICE experiment at midrapidity ($|y| < 0.5$) in proton-proton collisions at $\sqrt{s} = 5.02$ TeV. The data sample used in the analysis corresponds to an integrated luminosity of $(19.3 \pm 0.4)\,\mathrm{nb}^{-1}$. D mesons were reconstructed from their decays $\mathrm{D}^{0} \to \mathrm{K}^{-}\pi^{+}$, $\mathrm{D}^{+} \to \mathrm{K}^{-}\pi^{+}\pi^{+}$, and $\mathrm{D}_{\mathrm{s}}^{+} \to$ …

3. Abstract: The inclusive production of the J/$\psi$ and $\psi(2\mathrm{S})$ charmonium states is studied as a function of centrality in p-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\mathrm{NN}}} = 8.16$ TeV at the LHC. The measurement is performed in the dimuon decay channel with the ALICE apparatus in the centre-of-mass rapidity intervals $-4.46 < y_{\mathrm{cms}} < -2.96$ (Pb-going direction) and $2.03 < y_{\mathrm{cms}} < 3.53$ (p-going direction), down to zero transverse momentum ($p_{\mathrm{T}}$). The J/ …

4. Abstract: This paper presents the measurements of $\pi^{\pm}$, $\mathrm{K}^{\pm}$, $\mathrm{p}$ and $\overline{\mathrm{p}}$ transverse momentum ($p_{\mathrm{T}}$) spectra as a function of charged-particle multiplicity density in proton-proton (pp) collisions at $\sqrt{s} = 13$ TeV with the ALICE detector at the LHC. Such a study allows us to isolate the center-of-mass energy dependence of light-flavour particle production. The measurements reported here cover a $p_{\mathrm{T}}$ range from 0.1 to 20 GeV/$c$ and are …

5. Abstract: The invariant differential cross section of inclusive $\omega(782)$ meson production at midrapidity ($|y| < 0.5$) in pp collisions at $\sqrt{s} = 7$ TeV was measured with the ALICE detector at the LHC over a transverse momentum range of $2 < p_{\mathrm{T}} < 17$ GeV/$c$. The $\omega$ meson was reconstructed via its $\omega \to \pi^{+}\pi^{-}\pi^{0}$ decay channel. The measured $\omega$ production cross section is compared …
# Measuring Performance

The first step in accelerating any program is to measure how fast it runs. There are many ways of measuring the speed of programs. At their simplest, you can use a stopwatch (or an automated stopwatch, e.g. the time command that is available on macOS or Linux). At their most in-depth, you can use full profiling tools to measure how long every function call takes, and see which functions call which functions. In this workshop, we will use something that sits in the middle. Something that is simple enough for day-to-day use, while complex enough that you can get useful information to track your progress when accelerating your code.

# timeit

Python's timeit module provides a simple way to time code, and it is integrated into Jupyter (via IPython) as a timeit command so that you can interactively time any function. For example, start a Jupyter Notebook and define this simple function;

def slow_function():
    import time
    time.sleep(1)

This function will sleep for one second. So, it should take about one second to run. You can time this function by calling it within a timeit command, e.g.

timeit( slow_function() )

I get the output;

1 s ± 1.66 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

(you may see something different)

The timeit command has run the function several times (in my case, 7 times). It has measured how long each run took, and then calculated an average and a standard deviation. In this case, the 7 runs took an average of 1 second each, with a 1.66 ms standard deviation.

You can use timeit to time any function call in Python. It is a very convenient and quick way to measure how long something takes. Note that the function that is called should be safe to call several times in a row, e.g. it shouldn't have any side effects. In general, it is not good practice to write functions that have side effects. Note also that the timeit call must be the only call in the Jupyter notebook cell.

# Exercise

We will be using the exercises throughout this workshop to examine a single piece of code. To start, you need to download the code to your computer. To do this, copy and paste the following into a Jupyter notebook cell;

import urllib.request
url = "https://raw.githubusercontent.com/chryswoods/siremol.org/main/chryswoods.com/accelerating_python/code"
filename = "slow.py"
urllib.request.urlretrieve(f"{url}/{filename}", filename)

This should download the exercise code from the course website, and will write it to the file in your current directory called slow.py.

## Exercise 1

Try to run the code. How long does it take to execute? To do this on Linux or macOS you can use the time command, e.g.

time python slow.py

On Windows PowerShell you can (probably!) time programs using

Measure-Command { python slow.py | Out-Host }

although if that doesn't work, you can use your watch or phone's stopwatch.

## Exercise 2

There are three functions in this program, which are called in sequence by the script:

1. load_and_parse_data - this loads a percentage of the data for analysis, placing the data into three variables.
2. calculate_scores - this calculates all of the scores from all of the loaded data.
3. get_index_of_best_score - this finds the index of the pattern with the best score, from the calculated scores.

These functions are called at the bottom of the script, i.e.
if __name__ == "__main__": # Load the data to be processed # Calculate all of the scores scores = calculate_scores(data) # Find the best pattern best_pattern = get_index_of_best_score(scores) # Print the result print(f"The best score {scores[best_pattern]} comes from pattern {ids[best_pattern]}") The code has been written so that it can be loaded as a module, so that each function can be called individually. This means that you can use timeit to time each individual function, e.g. in a Jupyter notebook you can type import slow timeit( slow.load_and_parse_data(5) ) to find out how long it takes to load 5% of the data. Next, load all of the data using (ids, varieties, data) = slow.load_and_parse_data(5) Now use timeit to find out how long the calculate_scores function takes. Next, get all of the scores using scores = slow.calculate_scores(data) Now use timeit to find out how long the get_index_of_best_score function takes. • Does the runtime of each of these three functions sum up to be about the same runtime that you measure in Exercise 1? • Which function is the slowest? • How much quicker would the script run if you could double the speed of the load_and_parse_data function? • How much quicker would the script run if you could double the speed of the calculate_scores function? • How much quicker would the script run if you could double the speed of the get_index_of_best_score function? • Which function should you concentrate on if you want to accelerate the script?
# Judge Unseals Yahoo Shareholder Suit: Severance Poison Pill Key To Complaint

While things have been pretty quiet on the Microsoft (NSDQ: MSFT) and Yahoo (NSDQ: YHOO) front (knock on wood: this looks to be a busy week), here's an interesting note on the legal side of things: According to Yahoo's own internal documents, reports AP, the expanded severance plan it announced in February would have cost Microsoft an extra $462 million to $2.1 billion over its initial $44.1 billion offer. The document was unearthed as part of a shareholder lawsuit which, in part, alleges that Microsoft could've offered more for the company had it not been for the severance plan. The full document can be found here (.pdf) (via TechTraderDaily). The internal Yahoo emails going over the severance math start on page 49 of the filing.

Update: The judge in this case also sent out a letter (.pdf) explaining the decision to open the filing, and it specifically addresses the severance question: "Defendants argue that such excerpts, taken out of context, will prejudice Yahoo! in its upcoming proxy contest because such partial disclosure will create an incomplete record of the circumstances surrounding the adoption of the Yahoo! severance plans. Though I am cognizant of defendants
Calculates a confidence interval given a g_REML, a g_HPS, or a g_mlm object, using either a central t distribution (for a symmetric interval) or a non-central t distribution (for an asymmetric interval).

## Arguments

g - an estimated effect size object of class g_REML, class g_HPS, or class g_mlm.
cover - confidence level.
bound - numerical tolerance for the non-centrality parameter in qt.
symmetric - If TRUE (the default), use a symmetric confidence interval. If FALSE, use a non-central t approximation to obtain an asymmetric confidence interval.

## Value

A vector of lower and upper confidence bounds.

## Examples

# The original example was garbled in extraction; this reconstruction assumes
# that the stray r_const and returnModel arguments belong to a g_REML call,
# and the p_const value shown here is an assumption.
library(nlme)
data(Laski)
Laski_RML <- lme(fixed = outcome ~ treatment,
                 random = ~ 1 | case,
                 correlation = corAR1(0, ~ time | case),
                 data = Laski)
Laski_g <- g_REML(Laski_RML, p_const = c(0, 1),
                  r_const = c(1, 0, 1), returnModel = FALSE)
CI_g(Laski_g, symmetric = TRUE)
CI_g(Laski_g, symmetric = FALSE)
# Moscow International School of Physics 2022

24 July 2022 to 2 August 2022
House of International Conferences, Dubna, Russia
Europe/Moscow timezone

## Prospects for the search for HN in the CMS experiment using the lepton decay of the Ds meson into μν

30 Jul 2022, 20:00 (2h)
Board: 1
Poster (portrait A1 or landscape A0), Young Scientist Forum

### Speaker

Yakov Andreev (MIPT)

### Description

One of the ways to detect a sterile neutrino, as well as to measure its mass and the strength of its mixing with an ordinary neutrino, is to search for decays of heavy hadrons in which the lepton number conservation law is violated. The report discusses the prospects for searching for a heavy sterile neutrino in the decay of a $D_s$ meson into two muons of the same sign and a pion of opposite charge, using data from the CMS experiment at the Large Hadron Collider at CERN, collected in 2018 in proton-proton collisions at an energy of 13 TeV in the center-of-mass system.

### Primary authors

Yakov Andreev (MIPT), Ruslan Chistov (LPI RAS), Kirill Ivanov (MIPT)
Abstracts

Chapman University
Agglomeration and the Extent of the Market [pdf]

Abstract: Cities and marketplaces are central to economic development, but we know little about why such agglomerations initially form. I argue that evolutionary forces cause agglomerations to emerge when individuals desire to spatially coordinate exchange in complex environments. To test this idea, I perform a laboratory experiment where geographically dispersed individuals bring different goods to a location for trade. I find that individuals spontaneously coalesce to reap the gains from exchange, re-agglomerate at the same locations after shocks, and have location choices that aggregate to create a Zipf population distribution. I also find that there is more agglomeration in economies with a larger variety of goods, that being land-tied reduces agglomeration, and that being land-tied magnifies the effect of variety.

UCLA
Dynamic Matching and Allocation of Tasks [pdf] (joint work with Kartik Ahuja, Mihaela van der Schaar)

Abstract: In many two-sided markets, the parties to be matched have incomplete information about their characteristics. Each side has an opportunity to learn (some) relevant information about the other before final matches are made. For instance, clients seeking workers to perform tasks often conduct interviews that require the workers to perform some tasks and thereby provide information to both sides. The performance of a worker in such an interview - and hence the information revealed - depends both on the inherent characteristics of the worker and the task, and also on the actions taken by the worker (e.g. the effort expended), which are not observed by the client; thus there is moral hazard. Our goal is to derive a dynamic matching mechanism that facilitates learning on both sides before final matches are achieved and ensures that the worker side does not have an incentive to obscure learning of their characteristics through their actions. We derive such a mechanism that leads to final matchings that achieve optimal performance (revenue) in equilibrium. We show that the equilibrium strategy is long-run coalitionally stable, i.e. there is no subset of workers and clients that can gain by deviating from the equilibrium strategy.

Arizona State University
Many-to-One Dynamic Matching [pdf]

Abstract: I study stability in many-to-one matching markets in a dynamic framework with the following features: matching is irreversible, the market evolves over time, and each side of the market is restricted by a quota. I show that in the dynamic framework, pairwise stability is not sufficient for stability. A new strategic behavior arises in such markets: colleges can manipulate the ultimate matching via earlier matchings. Such incentives require cyclic preferences as well as a restricted quota. Absent either of these conditions, one can rely on the results of the related one-to-one dynamic market - a useful trick to compute stability as in the static world. Moreover, when the preferences are aligned, dynamically stable matchings are equivalent to the statically stable ones.

University of Michigan
Characterizing non-myopic information cascades in Bayesian learning [pdf] (joint work with Ilai Bistritz, Nasimeh Heydaribeni, Achilleas Anastasopoulos)

Abstract: We consider an environment where a finite number of players need to decide whether to buy a certain product (or adopt a trend) or not. The product is either good or bad, but its true value is not known to the players.
Instead, each player has her own private information on the quality of the product. Each player can observe the previous actions of other players and estimate the quality of the product. A player can only buy the product once. In contrast to the existing literature on informational cascades, in this work players get more than one opportunity to act. In each turn, a player is chosen uniformly at random from all players and can decide to buy or not to buy. His utility is the total expected discounted reward, and thus myopic strategies may not constitute equilibria. We provide a characterization of structured perfect Bayesian equilibria (sPBE) with forward-looking strategies through a fixed-point equation whose dimensionality grows only quadratically with the number of players. In particular, a sufficient state for players' strategies at each time instance is a pair of integers, the first corresponding to the estimated quality of the good and the second indicating the number of players that cannot offer additional information about the good to the rest of the players. We show existence of such equilibria and characterize equilibria with threshold strategies with respect to the two aforementioned integers. Based on this characterization we study informational cascades and show that they happen with high probability for a large number of players. Furthermore, only a small portion of the total information in the system is revealed before a cascade occurs.

Universidad del Valle
Schumpeterian Behavior in a CPR Game: Experimental Evidence from Colombian Fisheries Under TURF's Management [pdf] (joint work with Daniel Guerrero)

Abstract: This paper studies the behavior of Pacific-Colombian fishermen in a Common-Pool Resource game. The results show that decision-making depends on fishermen's schooling, sex and last-round payoffs. Focusing on individual information, we observe that human capital, measured in years of schooling, has a significant effect on decision-making. Specifically, players with higher schooling adjust their decisions towards lower levels of harvest, leading closer to the cooperative solution. This behavior could be explained by the better-educated subjects' improved understanding of the information available to them and possible coordination of efforts due to TURF-based management in the zone.

University of Rochester
Collusion under persistent shocks [pdf] (joint work with Vyacheslav Arbuzov, Gustavo Gudino)

Abstract: We study a repeated Cournot competition model where prices are determined not only by firms' quantities but also by unobservable market shocks (Green and Porter, 1984). Unlike in Green and Porter (1984), market shocks are persistent: today's market condition affects tomorrow's market condition. With such persistence, a cheating firm can manipulate its rival's belief about future market conditions. Such belief manipulation creates another channel for the firm to optimally cheat on the opponent. Despite this additional channel, we show that under certain conditions firms can still collude. Moreover, persistence actually makes it easier to collude.

The Ohio State University
Selling shares to many budget constrained bidders: Theory and Experiment [pdf]

Abstract: Many auctions sell a divisible item that could be sold by shares: shares of a company, mineral rights, computer server capacity, and shares of facilities. If buyers are willing to buy the whole item and have the ability to pay, standard auctions where the highest bidder wins the whole item (e.g., the
first price auction) are known to allocate the item efficiently and raise the highest revenue. However, when bidders have budget constraints, selling shares of the item to many bidders could be more reasonable than selling the whole item to the highest bidder. Our study aims to theoretically and experimentally investigate two formats of auctions of shares, the uniform price auction and the voucher auction (Krishna 2009), which have been suggested and used by practitioners. In particular, we study bidding behavior and revenue implications of the two auctions with budget constrained bidders, compared to the first price auction, which is the most frequently used standard auction. Theoretical predictions show that in budget-constrained environments, both of the two share auctions can raise more revenue than the first price auction if the number of bidders is high enough. However, the two share auctions have distinctively different patterns in their revenues as budget constraints change. The revenue of the voucher auction is robustly constant regardless of budget constraints, but the revenue of the uniform price auction decreases dramatically when the budget constraint gets severe. We conducted several sessions of lab experiments and the outcomes were qualitatively consistent with the theoretical predictions.

University of Zielona Góra
Interim Correlated Rationalizability in Large Games [pdf] (joint work with Michael Greinecker, Kevin Reffett and Ł. Woźny)

Abstract: We provide general theoretical foundations for modeling strategic uncertainty in large distributional Bayesian games with general type spaces in terms of a version of interim correlated rationalizability. We then focus on the case where payoff functions are supermodular in actions, as in much of the literature on global games. This allows us to identify extremal interim correlated rationalizable solutions with extremal interim Bayes-Nash equilibria. No order structure on types is used.

International Institute of Information Technology, Pune
Resolving Deadlocks using All-Pay Auctions [pdf]

Abstract: This paper proposes a model that can be used to handle deadlocks in different systems. It uses a game theory approach to handle deadlocks. It describes the problem of deadlocks in different systems along with the prerequisite conditions a system needs to satisfy in order to be susceptible to deadlocks. The paper then goes over the many existing methods of dealing with deadlocks in such systems and briefly goes over different algorithms and techniques used in real-world systems to deal with deadlocks. The proposed model is defined and an example is used to demonstrate the performance of the proposed model. The paper concludes by comparing the proposed model with existing models.

University of South Florida
Dynamic Contracts with Random Monitoring [pdf]

Abstract: In contractual relationships where the agent executes numerous independent tasks over the lifetime of the contract, it is often infeasible to evaluate his performance on all tasks that he is assigned. Incentives under moral hazard are instead provided by randomly determining whether or not to monitor each of these tasks. We characterize optimal contracts implemented with such random monitoring in a stochastic dynamic environment where the agent's cost type varies over time.
We show that the compensation terms the agent is promised for contingencies where monitoring reveals compliance are as good as those for when no monitoring takes place, and for some cost types are better; these latter types receive a monitoring reward. As time passes and the agent becomes richer, the size of the monitoring reward decreases. Compensation on the equilibrium path exhibits downward rigidity, a feature documented empirically in earlier literature.

Indian School of Business
Timely Persuasion [pdf] (joint work with Zhen Zhou)

Abstract: We consider a regime change game but allow the agents to attack within a time window. Attack is irreversible and delayed attack is costly. There could be panic-based attacks, i.e., the agents attack thinking others will attack, even though it is not warranted. We propose a simple dynamic information disclosure policy, called a "disaster alert", which at a given date publicly discloses whether the regime is doomed to fail. We show that a timely alert persuades the agents to wait for the alert and not attack if the alert is not triggered, regardless of their private signals, and thus eliminates panic.

Tepper School of Business, Carnegie Mellon University
Persuasion for the Long Run [pdf] (joint work with Daniel Quigley)

Abstract: We examine a persuasion game where concerns about future credibility are the sole source of current credibility. A long-run sender plays a cheap talk game with a sequence of short-run receivers. We characterise optimal persuasion in this setting, relating it to canonical persuasion problems. We show that long-run incentives do not generally substitute for ex-ante commitment to reporting strategies. A patient sender can achieve the same average payoffs as a sender with ex-ante commitment if and only if a) monitoring is perfect; and b) the optimal strategy under commitment induces a partitional information structure. We then show how a 'review aggregator' can implement average payoffs and information structures arbitrarily close to those available under ex-ante commitment. We examine such a review aggregator in the context of online markets. We also examine the connection between our 'review aggregator' and a 2002 financial legislation on the release of aggregate statistics regarding financial advice.

City University of New York, Baruch College
Hardness of Learning in Rich Environments and Some Consequences for Financial Markets [pdf]

Abstract: This paper examines the computational feasibility of the standard model of learning in finance theory. Surprisingly, I find that the Bayesian update formula at the heart of this model is impossible to compute in all but the simplest scenarios. Specifically, using tools from theoretical machine learning, I show that there is no polynomial implementation of the formula unless the independence structure of variables in the data is common knowledge. Next, I demonstrate that there cannot exist a polynomial algorithm to infer the independence structure of variables; consequently, the overall learning problem is intractable. Using the Bayesian update formula when it is computationally infeasible carries risks, and some of these are explored in the latter part of the paper in the context of financial markets. Especially in rich, high-frequency environments, it implies discarding a lot of useful information, and this can lead to paradoxical outcomes.
I illustrate this in a trading example where market prices can never reflect an informed trader's information, no matter how many rounds of trade. This paper thus provides new theoretical motivation for the use of bounded rationality models in the study of trading and market efficiency - the bound on rationality arising from the computational hardness of learning.

University of Texas, Austin
Strategic exit with information and payoff externalities [pdf]

Abstract: I consider a stopping game between two players, where observations related to an unknown state of nature arrive at random. Players not only learn from observing each other, but their payoffs also depend on the presence of the counterpart. I derive a general characterization of an equilibrium in this game. As applications, I consider two stopping-time games which can be viewed as models of sponsored research: one is a model where researchers get funded until (if ever) a research project experiences the first failure; the other is a model where researchers get rewarded if a success is achieved. In either case, the researchers start working on a project of unknown quality. The quality of the project is identified with its ability to generate failures or successes, in the first and second models, respectively. The rate of arrival of success conditioned on the quality of the project is an increasing function of the total time spent on the sponsored research. Observations of failures or successes are public information. I find subgame perfect equilibria in both models and show that in the case of two competing researchers, neither equilibrium outcomes nor cooperative solutions are efficient unless research creates no payoff externalities. In either model, at least one of the researchers experiments inefficiently long, so that a designer of a grant competition would like to stop sponsoring one of the players earlier than in equilibrium. Surprisingly, this result holds in the model where the first success is rewarded no matter whether the laggards are rewarded with a smaller prize or punished.

University of Texas, Austin
Competing for success? On dangers of product competition [pdf]

Abstract: Boeing planned the glitzy unveiling of its new 777X jetliner for mid-March 2019. Tragic events three days earlier prompted it to cancel the event. Could the crashes in Ethiopia and Indonesia have been avoided had Boeing not been under competitive pressure from Airbus? The main question of this paper is how long a producer should experiment with a risky new product before introducing it to consumers. We also study how the length of experimentation is affected by competition in a duopoly, and how it depends on positive or negative correlation between the risks the duopolists face.

New York University
The Excess Method: A Multiwinner Approval Voting Procedure to Allocate Wasted Votes [pdf] (joint work with Markus Brill)

Abstract: In using approval voting to elect multiple winners to a committee or council, it is desirable that excess votes - approvals beyond those that a candidate needs to win a seat - not be wasted. The excess method does this by sequentially allocating excess votes to a voter's as-yet-unelected approved candidates, based on the Jefferson method of apportionment. It is monotonic - approving of a candidate never hurts and may help him or her get elected - computationally easy, and less manipulable than related methods.
In parliamentary systems with party lists, the excess method is equivalent to the Jefferson method and thus ensures the approximate proportional representation of political parties. As a method for achieving proportional representation (PR) on a committee or council, we compare it to other PR methods proposed by Hare, Andrae, and Droop for preferential voting systems, and by Phragmén for approval voting. Because voters can vote for multiple candidates or parties, the excess method is likely to abet coalitions that cross ideological and party lines and to foster greater consensus in voting bodies.

University of Pittsburgh
Communication with Partially Verifiable Information: An Experiment (joint work with Maria Montero, Martin Sefton)

Abstract: We use laboratory experiments to study communication games with partially verifiable information. In these games, based on Glazer and Rubinstein (2004, 2006), an informed sender sends a two-dimensional message to a receiver, but only one dimension of the message can be verified. We compare a treatment where the receiver chooses which dimension to verify with one where the sender has this verification control. We find significant differences in outcomes across treatments. Specifically, receivers are more likely to observe senders' best evidence when senders have verification control. However, receivers' payoffs do not differ significantly across treatments, suggesting they are not hurt by delegating verification control. We also show that in both treatments the receiver's best reply to senders' observed behavior is close to the optimal strategy identified by Glazer and Rubinstein.

Associate Professor
Word of Mouth Communication and Search [pdf] (joint work with Matthew Leister, Yves Zenou)

Abstract: Often the most credible source of information about the quality of products is advice from friends. We develop a word-of-mouth search model where information flows from the old to the new generation for an experience good with unknown quality. We study the features of the social network that determine product quality and welfare and characterize the demand-side (under-provision of search effort) and supply-side (inefficient entry by firms) factors that result in inefficiencies. We extend our framework to encompass richer communication structures, correlation between the individuals' links between old and new generations, endogenous prices, as well as the possibility for a high-quality firm to seed information of its quality to a particular consumer.

The Ohio State University
Seller Curation in Platforms [pdf]

Abstract: This article explores why market platforms do not screen out low-quality sellers in situations where screening costs are minimal. Consumers in a platform's market must search for a seller whose product is a good match. The presence of low-quality sellers reduces search intensity, softening competition between sellers, increasing the equilibrium price and hence the platform's revenue per sale. If the platform's market is sufficiently competitive then it admits a positive proportion of low-quality sellers. Recommending a high-quality seller and search obfuscation are complementary strategies because the low-quality sellers enable the recommended seller to attract many consumers at a high price.

University of Warwick
When Does Information Determine Market Size? Search and Rational Inattention [pdf]

Abstract: I develop a model in which optimal costly information acquisition by individual firms causes adverse selection in the market as a whole.
Each firm's information acquisition policy determines which customers they provide service to, and that in turn affects the distribution of customers remaining in the market and hence other firms' incentives. I show that if firms possess the ability to choose any signal of the customer's type, in equilibrium all firms in the market will profit. By contrast, with restricted signal choice, only a limited number of firms can be profitable. In such a setting, the maximum number of profitable firms fails to increase with the number of potential customers. Smooth information acquisition dampens the adverse selection externality due to each firm, while lumpy information acquisition does not. I establish that my results apply to a broad class of information acquisition processes.

Yeshiva University
Subversive Conversations [pdf] (joint work with Nemanja Antic, Rick Harbaugh)

Abstract: We consider the problem of a two-person committee with common interests exchanging information in order to take a decision. The committee faces a constraint in the form of a third player, the regulator, who is uninformed, has a conflict of interest with the committee, listens to the communication between the committee members, fully understands the intended meaning of all messages, and can overrule the committee decision. We identify conditions under which the committee can subvert the regulator's agenda and implement the same committee-optimal decision rule that it would implement if it could communicate privately. Subversive communication typically takes the form of a back-and-forth conversation where committee members hide extreme information early in the conversation. Our results provide a theory of conversations based on plausible deniability in the face of possible public outrage.

Texas A&M University
Worst-Case Analysis for a Leader-follower Partially Observable Stochastic Game [pdf] (joint work with Yanling Chang)

Abstract: This paper studies a leader-follower partially observable stochastic game where (i) the two agents are non-cooperative, and (ii) the follower's objective is unknown to the leader and/or the follower is irrational. We determine the leader's optimal value function assuming a worst-case scenario. Motivated by the structural properties of this value function and its computational complexity, we design a viable and computationally efficient solution procedure for computing a lower bound of the value function and an associated policy for the finite horizon case. We analyze the error bounds and show that the algorithm for computing the value function converges for the infinite horizon case. We illustrate the potential application of the proposed approach in a security context with a liquid egg production example.

University of Oxford
Price Competition in Buyer-Seller Networks [pdf]

Abstract: Traditional economic models of competition between sellers assume that each seller has access to the entire set of buyers in the market. However, many economic interactions occur in settings where sellers are only able to sell to a subset of buyers. This paper models differentiated Bertrand competition in a network. A bipartite graph determines the relationship between buyers and sellers, with sellers competing on price for overlapping consumers. For sufficiently small substitutability between goods, there is a unique interior pure-strategy equilibrium where each seller's price is decreasing in their Bonacich centrality in a sellers-only graph which is strategically equivalent to the original bipartite graph.
Using this model, it is possible to analyse the result of changes to the competition in the network - for example, seller entry or the introduction of a new buyer. The proposed framework can also be used to find the network structure that maximises consumer surplus and/or seller profit.

Singapore Management University
Probabilistic Generalized Median Voter Schemes: A Robust Characterization [pdf] (joint work with Souvik Roy, Soumyarup Sadhukhan, Arunava Sen, Huaxia Zeng)

Abstract: We study Random Social Choice Functions (or RSCFs) in a standard ordinal mechanism design model. We introduce a new preference restriction, eventual single-peakedness. We first show that a unanimous RSCF is strategy-proof on the eventually single-peaked domain if and only if it is a Probabilistic Generalized Median Voter Scheme (or PGMVS) and satisfies a partial-random-dictatorship condition. Next, we show that a strategy-proof PGMVS defined on this domain is decomposable as a mixture of finitely many strategy-proof generalized median voter schemes only if it satisfies a scale-effect condition. We then construct a non-decomposable strategy-proof PGMVS for the case of more than two voters via the negation of the scale-effect condition, and prove the decomposability of all two-voter strategy-proof PGMVSs. Last, we illustrate the salience of eventual single-peakedness and the robustness of our PGMVS characterization theorem by generalizing our analysis to the class of connected domains.

Columbia University
A model of rent seeking and inequality [pdf]

Abstract: Social scientists have argued that inequality fosters rent seeking and that rent seeking is likely to reinforce existing inequalities. In this paper, I formalize these interactions by modeling rent seeking in an unequal endowment economy where agents can choose to be rentiers or not. I find that when the cost of rent seeking is exogenous, more inequality fosters a greater proportion of rentiers, which in turn further skews the distribution of resources. I endogenize the cost of rent seeking by assuming that the rentiers pay the cost to a central institution, which chooses the cost per rentier to maximize its revenue. In this setting, the revenue-optimizing cost of rent seeking per rentier increases with more inequality, which results in a lower proportion of rentiers. However, ex-post inequality still increases. The results show how economies can end up with persistent inequality in the presence of rent seeking.

Northwestern University, Kellogg
Privacy in Bargaining: The Case of Endogenous Entry [pdf]

Abstract: I study the role of privacy in bargaining. A seller makes offers every instant, without commitment, to a privately informed buyer. Potential competing buyers (entrants) pay attention to the negotiation and can choose to interrupt it by triggering a bidding war. When bargaining in public (in view of entrants), the seller can, through her choice of offers, manipulate entrants' beliefs about the buyer. In equilibrium, the seller's lack of commitment reverses the seemingly intuitive effects of publicity. When entrants prefer a bidding war against low types of the buyer, the seller typically prefers bargaining in private, even though public bargaining enables her to lure in competition against the incumbent buyer.

National Taiwan University
Correlation with Forwarding [pdf]

Abstract: I consider three-player complete information games augmented with pre-play communication.
Players can privately communicate with others, but not through a mediator. I implement correlated equilibria by allowing players to authenticate their messages and forward the authenticated messages during communication. Authenticated messages, such as letters with signatures, cannot be duplicated but can be sent or received by players. With authenticated messages, I show that, if a game G has a worst Nash equilibrium α, then any correlated equilibrium distribution in G which has rational components and gives each player a higher payoff than α does can be implemented by pre-play communication. The proposed communication protocol does not require perfect public recording (Barany, 1987) and does not publicly expose players' messages at any stage during communication.

University of Rochester
Middlemen and Reputation [pdf] (joint work with Yu Awaya, Zirou Chen, Makoto Watanabe)

Abstract: We develop a model in which a reputation mechanism allows a middleman to mitigate information frictions. The middleman can play such a role even without having technologies superior to those of other agents for identifying product quality or issuing quality certificates. We establish an equilibrium in which the market organized by the middleman can sometimes be viable and at other times collapse. Our theory provides a rationale for why, in some markets, specialist agents or brokers/dealers can operate on their reputation to guarantee their asset quality, but sometimes lose that reputation as a trustworthy investment channel, much as markets crash during financial crises.

Johns Hopkins University
Competitive Equilibrium Fraud in Markets for Credence-Goods [pdf] (joint work with Edi Karni)

Abstract: This is a study of the nature and prevalence of persistent fraud in competitive markets for credence-quality goods. We model the market as a dynamic game of incomplete information in which the players are customers and suppliers, and analyze their equilibrium behavior. Customers' characteristics, search cost and discount rate, are private information. Customers do not possess the expertise necessary to assess the service they need either ex ante or ex post. We show that there exists no fraud-free equilibrium in the markets for credence-quality goods and that fraud is a prevalent and persistent equilibrium phenomenon.

MIT
Robust Cooperation with First-Order Information [pdf] (joint work with Daniel Clark, Drew Fudenberg, Alexander Wolitzky)

Abstract: We study when and how cooperation can be supported in the repeated prisoner's dilemma in a large population with random matching and overlapping generations, when players have only first-order information about their current partners: a player's record tracks information about her past actions only, and not her partners' past actions (or her partners' partners' actions). We also restrict attention to strict equilibria that are coordination-proof, meaning that two matched players never play a Pareto-dominated Nash equilibrium in the stage game induced by their records and expected continuation payoffs. We find that simple strategies can support limit efficiency if the stage game is either "mild" or "strongly supermodular," and that no cooperation can occur in equilibrium for a near-complementary parameter set.
The presence of "supercooperator" records, where a player cooperates against any opponent, is crucial for supporting maximal cooperation when the stage game is "severe."

Pontificia Universidad Católica del Perú
INTERACTIVE EPISTEMOLOGY APPLIED TO DRAFTING CONTRACTS. THE PARTIAL DEATH OF FILLING THE CONTRACTUAL GAPS [pdf] (joint work with Alvaro Cuba Horna)

Abstract: This paper analyzes the problem of contractual gaps from the standpoint of interactive epistemology. Until now, it has been treated as an axiom that every gap in an incomplete contract must be filled by a judge or arbitrator. The present paper tries to demonstrate that this axiom is wrong. On the contrary, drawing on interactive epistemology, I propose a model in which agents share common knowledge that is based on an incorrect state of the world. This model reflects the fact that agents, in practice, write contracts based on incorrect information. The true state of the world and the information partitions will only be obtained as different events happen in time. Therefore, when the parties resort to a judge or arbitrator to fill a contractual gap, they would be creating a new contract that the parties never had an opportunity to write, because they were always reasoning from a false state of the world. In this sense, the model proposes that a judge or arbitrator should only fill a contractual gap when the event is common knowledge. If this were the case, the event would have always belonged to the true state of the world, so the parties would have always had knowledge of it. For this, government measures will be essential to create common-knowledge information in contracts. Only with this mechanism will agents be able to write better contracts.

Boston University
Venture Capital Contracts under Disagreement [pdf]

Abstract: I examine an optimal financial contract between an entrepreneur and a venture capitalist. The entrepreneur has an early-stage project that is not fully implemented. In particular, the direction that the project should follow has not been decided yet. The two players have different beliefs about the optimal direction. Given that decisions are not contractible, the venture capitalist demands a fraction of control rights and cash-flow rights to participate in the project. After the direction is selected, the venture capitalist can exert costly effort to increase the probability of success of the project; this increment can be small (the venture capitalist is not important) or big (the venture capitalist is important). I show that the amount of control rights relinquished by the entrepreneur decreases with disagreement unless the venture capitalist is not important in implementation.

Harvard University
Belief Polarization and News on Social Media [pdf]

Abstract: Social media and other online interactions have recently become a major source of news and information about current events, bringing new social learning patterns and new questions about platform design. To study these questions, we develop a framework involving the co-evolution of beliefs and information-sharing behavior. When people do not fully account for others' sharing decisions when updating beliefs, echo chambers can produce belief polarization. In environments with fake news, introducing a technology letting users fact-check stories at a cost can have paradoxical effects. Depending on cost structures, such technology may generate a form of social confirmation bias that actually increases polarization.
A related challenge is maintaining faith in fact checkers: if users think there is a possibility of a biased fact checker, those with strong beliefs will come to believe fact checkers are biased against their belief. Finally, echo chambers and the associated polarization may arise endogenously due to platform incentives. Networks that expose most users to peers with diverse opinions, however, can be better for the platform and its users in the long run.

Harvard University
Social Learning and Innovation [pdf]

Abstract: We study a model of innovation in which new technologies are formed by combining several atomic ideas. These ideas can be acquired by private investment or via social learning. A large number of firms face a trade-off between secrecy, which protects existing intellectual property, and openness, which facilitates social learning. This decision is modeled as a choice of an interaction rate, which determines an underlying learning network. Incentives and, more strikingly, payoffs can jump at phase transitions in this network. In particular, equilibrium outcomes will be below a critical threshold, while welfare is much higher above the threshold.

University of Bielefeld
Non-cooperative Games with Shapley utilities [pdf] (joint work with Roland Pongou and Jean-Baptiste Tondji)

Abstract: We introduce a new class of strategic form games called non-cooperative games with Shapley utilities. We show that any finite game in this class possesses a pure strategy Nash equilibrium. We also provide a monotonicity condition under which any finite non-cooperative game with Shapley utilities admits a pure strategy Nash equilibrium.

University of Bonn
Relational Contracts: Public versus Private Savings [pdf] (joint work with Daniel Garrett)

Abstract: We study relational contracting with risk-averse agents, who thus have preferences for smoothing consumption over time. Agents have the ability to save in order to defer consumption. We compare principal-optimal relational contracts in two settings: in the first, the agent's consumption and savings decisions are private; in the second, these decisions are publicly observed. In the first case, the agent smooths his consumption over time, the agent's effort and payments eventually decrease over time, and the balance on his savings account eventually increases. In essence, the relationship eventually deteriorates with time. In the second case, the relational contract can specify the level of consumption by the agent. The optimal contract calls for the agent to consume earlier than he would like; consumption and the balance on the account fall over time, and effort and payments to the agent increase. We suggest that modeling informal/relational incentives on consumption/savings decisions is a pertinent alternative to the approach in the existing literature on dynamic moral hazard, in which consumption is often either formally specified by contract or the agent can privately save.

Saarland University
Probabilistic manipulation of sequential voting procedures [pdf] (joint work with Ritxar Arlegi)

Abstract: We consider sequential, binary voting procedures in societies where pairwise voting induces a single-peaked social preference. An agenda setter is uncertain about the social preference and has the power to fix the initial seeding in the voting tree and thus possibly to manipulate the procedures' probabilistic outcomes.
In such settings, our results identify the balanced voting procedure with four candidates as non-manipulable in polarized societies and least manipulable in biased societies. Voting procedures in weakly biased societies turn out to be non-manipulable if and only if the number of candidates is two. University of Montreal Competing Pre-match Investments Revisited: A Precise Characterization of the Limits of Bayes-Nash Equilibria in Large Markets    [pdf] Abstract We solve an open problem pertaining to the relationship between competitive and non-cooperative models of pre-match investment. We study an incomplete information version of Peters and Siow's (2002) model of competing pre-marital investments and NTU matching, with finitely many agents and i.i.d. types. Our main results establish a precise characterization of side-symmetric Bayes-Nash equilibrium (BNE) behavior "in the limit," as the market grows large and the empirical type distributions converge to those of an unbalanced continuum economy à la Peters and Siow (2002). The limits of BNE strategies always differ from the (bilaterally efficient) hedonic equilibrium strategies, and we obtain a neat characterization of an equilibrium concept for the continuum model that has a clear strategic foundation. Our analysis relies on a novel way of using advanced results from the theory of approximate distributions of order statistics, which allows us to characterize equilibrium behavior even though the limit strategies must be discontinuous and the size of the discontinuities is determined by a complex, two-sided interaction. These techniques should be useful for the study of other large Bayesian games with discontinuous payoffs and interacting distributions of outcomes. University of Rochester Imperfect Collusion in Repeated Bertrand Oligopoly: The Role of Transfers in Penalizing Actual Price-Cutters    [pdf] Abstract Cartels often operate under inter-firm transfer schemes to prevent cheating and penalize violators. Theories of collusion predict that a well-designed transfer scheme will keep firms in line with the collusive price level by eliminating (possibly secret) price-cutting incentives. In practice, however, cheating often occurs (Genesove and Mullin, 2001), and actual violations of the agreement are punished through inter-firm transfers. To fill the gap, I study a traditional repeated Bertrand oligopoly model augmented by a transfer sub-stage, focusing on the range of discount factors not sufficiently high to achieve the best collusive price. I show that the optimal stationary equilibrium has distinguishing features that explain several real-world cartel practices: (i) the collusive agreement divides the range of prices into an allowable level of cheating, punished through transfers, and an unallowable degree of cheating, leading to cartel breakdown. As a result, (ii) both occasional violations of and adherence to the agreement occur on-path, and (iii) violators are punished according to the agreed-upon penalty scheme. However, (iv) the amount of transfer is limited by a self-enforcement constraint, so it is not sufficient to discipline price-cutting at the highest stakes: slight price-cutting at the monopoly price is regarded as unallowable cheating and must trigger permanent price competition. Finally, transfers play a role only in an intermediate range of discount factors, in that they are not needed at all (when the discount factor is high) or cannot be self-enforced (when the discount factor is low).
As long as firms play symmetric pricing strategies, transfers and price-cutting are an essential part of any imperfect collusive equilibrium. University of Rochester Incentives in the Equal-Pay-for-Equal-Work Principle    [pdf] (joint work with Yu Awaya) Abstract Equal pay for equal work outlaws unfair discrimination in workers' wages. There are public concerns that the practice may aggravate moral hazard problems, especially when the employer cannot observe effort levels. This paper addresses the issue when the employer can evaluate employees' performance only through peers (subjective peer evaluation). More precisely, each employee privately chooses an effort level, which generates private signals to his peers. The employer solicits peer evaluations, but the evaluations are not verifiable. The equal-pay-for-equal-work principle forces the same wage across workers after any combination of evaluations. We show that the employer can still provide incentives to exert effort when signals are more correlated if employees exert effort and less so if some shirk. George Mason University Mechanism Design with Memory and no Money    [pdf] Abstract The paper provides an automated approach to mechanism design problems without money for arbitrary discount factors using dynamic programming and promised utility. We illustrate the approach with problems from the literature, such as chore allocation and sharing an indivisible good or goods. Additionally, we discuss the relationships between different classes of mechanisms and show that promised utility mechanisms are more general than mappings from histories of finite memory. Virginia Polytechnic Institute & State University Common Belief in Choquet Rationality with an Attitude    [pdf] (joint work with Burkhard Schipper) Abstract We consider finite games in strategic form with Choquet expected utility. Using the notion of (unambiguous) belief, we define Choquet rationalizability and characterize it by Choquet rationality and common beliefs in Choquet rationality in the universal capacity type space in a purely measurable setting. We also show that Choquet rationalizability is equivalent to iterative elimination of strictly dominated actions (not in the original game but) in an extended game. This allows for computation of Choquet rationalizable actions without the need to first compute Choquet integrals. Choquet expected utility allows us to investigate common belief in ambiguity love/aversion. We show that Choquet rationality and common belief in Choquet rationality and ambiguity love/aversion lead to smaller/larger sets of action profiles. University of Dayton Tacit Collusion in Repeated Unit Commitment Auctions    [pdf] Abstract In an infinitely repeated game, I propose to study the level and conduct of collusion under two commonly used wholesale electricity market designs. Both market designs are uniform price auctions run by an independent third-party market operator. In a centrally-committed market, generating firms compete by submitting complex offers that reflect the non-convexities of their operating costs. Under a self-committed market design, generating firms compete by submitting simple offers that represent the minimum price at which a firm is willing to produce all of its capacity. The centrally-committed market design also contains a provision by which firms are guaranteed to be made whole on the basis of their submitted offers, whereas no such guarantees exist under the self-committed market design.
These two market designs will be examined in the context of an infinitely repeated game to compare how they facilitate collusion. This will be done by examining the optimal penal code and bidder deviation incentives, generating potentially useful regulatory and public policy insights. Preliminary work suggests that the centrally-committed market design common in the United States may be less prone to tacit collusion than the self-committed markets common to much of Western Europe and Australia. University of Chicago Communication with Detectable Deceit    [pdf] (joint work with Christian Salas) Abstract Lies are detectable. We investigate the implications of this fact in a communication game in which players have no common interests and messages are cheap, but deceit is detectable with positive probability. In any informative equilibrium, the lowest types lie, while some higher types tell the truth. Truth-telling arises because lie detection generates an endogenous cost of lying, consisting of being confused with the low types who lie. We show that lie detection is strategically different from state detection, in that the latter does not admit informative equilibria. We analyze three extensions. First, we show that more information may be revealed if the sender is given an opportunity to prepare a lie in advance and thereby decrease its detectability. We then allow the sender to make multiple attempts at convincing the receiver, and show that if lie detectability is high, the receiver may benefit from committing to listening to the sender only once. And finally, we analyze a two-sender version of the model, and show that senders will exaggerate their claims only if the state disadvantages them sufficiently. The University of Manchester Matching and Core Stability with General Demand Structures    [pdf] Abstract A "demand-type" is a set of markets with indivisible goods and a fixed Slutsky matrix, comprising the comparative statics of aggregate demand vectors. I consider extensions of the assignment model to include general demand types. I prove, essentially, that equilibrium exists and the core is nonempty in each market with a general "demand-type" if and only if the associated Slutsky matrix is unimodular. As unimodularity may include preferences with complementarities, this extends previously known results beyond the case of substitutability. Strong Robustness to Incomplete Information and the Uniqueness of Correlated Equilibrium    [pdf] (joint work with Ori Haimanko and David Lagziel) Abstract We define and characterize the notion of strong robustness to incomplete information, whereby a Nash equilibrium in a game u is strongly robust if, given that each player knows that his payoffs are those in u with high probability, all Bayesian-Nash equilibria in the corresponding incomplete-information game are close, in terms of action distribution, to that equilibrium of u. We prove, under some continuity requirements on payoffs, that a Nash equilibrium is strongly robust if and only if it is the unique correlated equilibrium. We then review and extend the conditions that guarantee the existence of a unique correlated equilibrium in games with a continuum of actions. The existence of a strongly robust Nash equilibrium is thereby established for several domains of games, including those that arise in economic environments as diverse as Tullock contests, Cournot and Bertrand competitions, network games, patent races, voting problems and location games.
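An aside on the uniqueness notion in the preceding abstract: the correlated equilibria of a finite game form a polytope cut out by linear obedience constraints, so uniqueness can be checked by bounding each coordinate with linear programming. A minimal Python sketch, not taken from the paper, using matching pennies as an assumed example of a game whose unique correlated equilibrium is the uniform distribution over the four action profiles:

    import numpy as np
    from scipy.optimize import linprog

    # Matching pennies: the row player wants to match, the column player to mismatch.
    A = np.array([[1., -1.], [-1., 1.]])  # row player's payoffs
    B = -A                                # column player's payoffs (zero-sum)
    m, n = A.shape
    idx = lambda i, j: i * n + j          # flatten profile (i, j) into a variable index

    rows = []
    # Row player's obedience constraints: deviating from recommendation i to ip gains <= 0.
    for i in range(m):
        for ip in range(m):
            if ip == i:
                continue
            g = np.zeros(m * n)
            for j in range(n):
                g[idx(i, j)] = A[ip, j] - A[i, j]
            rows.append(g)
    # Column player's obedience constraints, symmetrically.
    for j in range(n):
        for jp in range(n):
            if jp == j:
                continue
            g = np.zeros(m * n)
            for i in range(m):
                g[idx(i, j)] = B[i, jp] - B[i, j]
            rows.append(g)

    A_ub, b_ub = np.array(rows), np.zeros(len(rows))
    A_eq, b_eq = np.ones((1, m * n)), np.array([1.])  # probabilities sum to one

    # Minimize and maximize each coordinate over the CE polytope; coinciding
    # bounds certify that the correlated equilibrium is unique.
    for k in range(m * n):
        c = np.zeros(m * n); c[k] = 1.
        lo = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
        hi = -linprog(-c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
        print(f"P{divmod(k, n)} in [{lo:.3f}, {hi:.3f}]")

For this game every interval collapses to [0.250, 0.250]: the unique correlated equilibrium puts probability 1/4 on each profile, which is exactly the situation to which the strong-robustness characterization above applies.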
Boston College Reputation and Screening in a Noisy Environment with Irreversible Actions (joint work with Mehmet Ekmekci and Lucas Maestri) Abstract We introduce a class of two-player dynamic games to study the effectiveness of screening in a principal-agent problem. In every period, the principal chooses either to irreversibly stop the game or to continue. In every period until the game is stopped, the agent chooses an action. The agent's type is his private information, and his actions are imperfectly observed. Both players are long-lived and share a common discount factor. We study the limit of the equilibrium outcomes as both players get arbitrarily patient. We show that Nash equilibrium outcomes of the dynamic game converge to the unique Nash equilibrium outcome of an auxiliary two-stage game. Hence, dynamic screening eliminates noise in monitoring, but beyond that, it is ineffective. We calculate the probability that the principal eventually stops the game against each type of the agent. The principal learns some but not all information about the agent's type. Applications include procurement, promotions and demotions within organizations, and venture-capital financing. Rice University Legislative bargaining with coalition and proposer-dependent surplus    [pdf] Abstract I study a distributive model of legislative bargaining in which players differ in how much they contribute to the coalitions led by others (i.e., their productivity) and by how much they amplify the contributions of others to their own coalition (i.e., their organizational skill). The resulting model is a q-quota legislative bargaining game with different proposer-coalition pairs generating surpluses of different sizes. Given a parametric specification of surplus, I establish the existence and continuity of stationary subgame perfect equilibria. I show that equilibrium payoffs are unique for any productivity vector when players are homogeneous in skill; when they are not, equilibria may feature delay and non-generic multiplicity. Payoffs and net contributions are monotonic in productivity and skill, and the most productive players are always recruited, while the most skillful are sometimes left out. I demonstrate that organizational skill has a stronger influence on outcomes, although sometimes it is desirable to trade skill for productivity. I also investigate the effect of patience and the required majority on bargaining outcomes and their efficiency. JKU Linz The Norm of Reciprocity in Dynamic Employment Relationships    [pdf] Abstract This paper explores how a relational contract establishes a norm of reciprocity and how such a norm shapes the provision of informal incentives. Developing a model of a long-term employment relationship, I show that generous upfront wages that activate the norm of reciprocity are more important when an employee is close to retirement. In earlier stages, direct incentives promising a bonus in exchange for effort are more effective, since a longer remaining time horizon increases the employer's commitment. Generally, direct and reciprocity-based incentives reinforce each other and should thus optimally be used in combination. I also show that more competition can magnify the use of reciprocity-based incentives. Moreover, with asymmetric information on the employee's responsiveness to the norm of reciprocity, an early separation of types is generally optimal.
This implies that pooling equilibria where “selfish” imitate “reciprocal” types might be less important for explaining increased cooperation with repeated interaction than often proclaimed. Finally, the principal might even benefit from asymmetric information, because a firing threat for non-performance is only credible if the employee is potentially not reciprocal. Brown University Bargaining over Contingent Contracts Under Incomplete Information    (joint work with Geoffroy de Clippel and Kareen Rozen) Abstract We provide a non-cooperative justification for the axiomatic bargaining solution under incomplete information developed by Myerson (1984), when there are verifiable types. We study a simple one-round simultaneous-offer game with a small bargaining friction, although the results also extend to infinite-horizon war-of-attrition games. We study equilibria in which offers are accepted, as the friction vanishes. We show that: there are equilibria converging to an interim-efficient limit; for many bargaining problems, any interim-efficient limit must belong to the axiomatic solution; and for such bargaining problems, imposing consistency on an agent's off-equilibrium-path beliefs can rule out non-interim-efficient limits. Stony Brook University Solutions for Zero-Sum Two-Player Games with Noncompact Decision Sets    [pdf] (joint work with Eugene A. Feinberg, Pavlo O. Kasyanov, and Michael Z. Zgurovsky) Abstract This paper provides sufficient conditions for the existence of solutions for two-person zero-sum games with possibly noncompact decision sets for both players. Payoff functions may be unbounded, and we do not assume any convexity/concavity-type conditions. For such games, the expected payoff may not exist for some pairs of strategies. The results of the paper imply several classic results, and they are illustrated with the number guessing game. The paper also provides sufficient conditions for the existence of a value and solutions for each player. Universidad del Pacífico Game Theory and the Law: Legal Rationality (Legal Interpretation)    [pdf] (joint work with Guillermo Flores) Abstract The author proposes utilitarian and rationality principles through which an individual: (i) analyzes the content of a legal norm; (ii) having analyzed its content, decides whether to comply with or breach the legal norm; and (iii) having decided to comply with the legal norm, selects the strategy to be used to comply with it, thereby maximizing his individual utility function. Since the "utility" that the law has for the legislator in social terms may not be equal to the "utility" that the citizen assigns to it in personal terms, it is necessary for the legislator to know the expectations of utility that the citizen has regarding a legal norm before issuing it, to make both concepts of "utility" compatible. Once the norm is issued, the legislator should focus not on communicating the "utility" that the legal norm has for himself, but on the compatibility between both concepts of "utility". The less evident the compatibility between both concepts of "utility" is for citizens, the greater the level of coercion that will be required. Since each citizen interprets the compatibility and utility of the legal norm differently from any other citizen, each citizen will exhibit a different level of compliance with the legal norm.
Therefore, the legislator faces a reality in which there will be citizens who comply with the norm in the exact way she intended, citizens who comply with it in a way close to what she intended, and citizens who will not comply with it at all. The intention of this article is to propose the game-theoretic principles through which a citizen interprets a legal norm, decides whether or not to comply with it and at what level, and thus to obtain formal answers to the questions presented. University of Hamburg Shadow links    [pdf] (joint work with Ana Mauleon and Vincent J. Vannetelbosch) Abstract We propose a framework of network formation where players can form two types of links: public links are observed by everyone, while shadow links are only observed by some players, e.g., neighbors in the network. We introduce a novel solution concept called rationalizable conjectural pairwise stability, which generalizes the pairwise stability notion of Jackson and Wolinsky (1996) to accommodate shadow links. We then study the case when public links and shadow links are perfect substitutes and relate our concept to pairwise stability. Finally, we consider two specific models and show how false beliefs about others' behavior may lead to segregation in friendship networks with homophily, reducing social welfare. University of Hamburg Strategic transmission of imperfect information: Why revealing evidence (without proof) is difficult    [pdf] Abstract We investigate cheap talk when an imperfectly informed expert knows multiple binary signals about a continuous state of the world. The expert may report either information on each signal separately (direct transmission) or a summary statistic of her signals (indirect transmission) to a decision-maker. We first establish that fully informative equilibria exist if the conflict of interest is small. Otherwise, direct-transmission equilibria are uninformative, as not revealing part of the signals tightens—not loosens—the expert's incentive compatibility constraint. By contrast, indirect-transmission equilibria remain partially informative for intermediate conflicts of interest. Furthermore, comparative statics show that a better-informed expert may imply less informative equilibrium communication. Finally, we introduce the possibility for the expert to verify her signals. We show that, if the costs of verification are low, a fully informative direct-transmission equilibrium exists regardless of the conflict of interest. Massachusetts Institute of Technology An Evolutionary Justification for Overconfidence    [pdf] (joint work with Kim Gannon and Hanzhe Zhang) Abstract This paper provides an evolutionary justification for overconfidence. Players are pairwise matched to fight for a resource, and there is uncertainty about who wins the resource if they engage in the fight. Players have different confidence levels about their chance of winning, although in reality they all have the same chance of winning. Each player may or may not know her opponent's confidence level. We characterize the evolutionarily stable equilibrium, represented by players' strategies and the distribution of confidence levels. Under different informational environments, a majority of players are overconfident, i.e., they overestimate their chance of winning. We also characterize the evolutionary dynamics and the rate of convergence to the equilibrium. University of Mannheim Reputational Cheap Talk vs.
Reputational Delegation    [pdf] Abstract I study whether a principal who is uncertain about an agent's motives should keep control and solicit information from the agent, or delegate the decision making to the agent, when the interactions between the two parties are repeated so that the agent has reputational concerns. I consider a two-period repeated game. In each period, the uninformed principal first decides whether to delegate the decision making to the informed agent, who is either good (not biased) or bad (biased). If she does, the agent takes an action himself. If she does not, the agent sends a cheap talk message to the principal, who then takes an action. I find that in the second period, the principal is better off keeping control instead of delegating to the agent. The optimal authority allocation in the first period depends on a prior cut-off. If the prior probability that the agent is good is above this cut-off, the principal prefers delegation over communication. Otherwise, communication dominates delegation. Dept. of Economics, University of Pennsylvania Informal Risk Sharing with Local Information    [pdf] (joint work with Attila Ambrus, Pau Milan) Toulouse School of Economics Robust Predictions in Dynamic Screening    [pdf] (joint work with Alessandro Pavan, Juuso Toikka) Abstract We characterize properties of optimal dynamic mechanisms using a variational approach that permits us to tackle the full program directly. This allows us to make predictions for a considerably broader class of stochastic processes than can be handled by the "first-order, Myersonian, approach", which focuses on local incentive compatibility constraints and has become standard in the literature. Among other things, we characterize the dynamics of optimal allocations when the agent's type evolves according to a stationary Markov process, and show that, provided the players are sufficiently patient, optimal allocations converge to the efficient ones in the long run. California State University Fullerton Sequential Auctions with Ambiguity    [pdf] (joint work with Heng Liu) Abstract This paper studies sequential sealed-bid auctions with ambiguity about the distribution of valuations and maxmin bidders. We propose equilibrium notions based on the multiple-selves approach to deal with the possible time inconsistency that arises with dynamic bidding by maxmin bidders. We find that the equilibrium predictions are robust to different specifications of preferences and characterize the unique symmetric equilibrium. We show that prices are a supermartingale and that the seller's revenue from sequential auctions dominates that from static multi-unit auctions under general conditions. Ambiguity aversion thus provides a unified explanation for the "declining price anomaly" and the wide adoption of sequential auctions in the real world. Our model delivers rich testable implications: on the practical side, there is a strong link between the degree of ambiguity, measured by the distance between the true distribution of valuations and bidders' worst-case belief, and the magnitude of price variations over time; on the technical side, dynamic inconsistency, which can arise for bidders with multiple priors, generates history dependence in bidding strategies. Princeton University Wars of Attrition with Evolving States Abstract I analyze a model of wars of attrition with evolving payoffs. Two players fight over a prize by paying state-dependent flow costs until one player surrenders.
The state of the world is commonly observed and evolves over time. The equilibrium is unique and uses threshold strategies: each player surrenders when the state is unfavorable enough to her, while for intermediate states both players strictly prefer to fight on. Taken as a refinement of the model of wars of attrition with complete information, this model makes related but distinct predictions from the standard reputation-based refinements (Abreu and Gul, 2000). The model is versatile and can be tractably extended to study partial concessions, commitment devices and deadline effects. University of Rochester Interbank Trading, Collusion, and Financial Regulation    [pdf] (joint work with Dean Corbae) Abstract We show theoretically and empirically that interbank markets provide a channel for banks to collude in the market for business loans. By lending funds to a competitor, a bank commits not to compete. Interbank interest rates allow banks to split the benefits from such collusion. Using global syndicated loans data, we find that firms paid a 31bps higher spread on $239 billion of loans provided by banks that took an interbank loan from a competitor. We compare the decentralized solution with an interbank market to the planner's solution and to the decentralized equilibrium without an interbank market. The results suggest that restricting interbank trading may increase aggregate welfare. Harvard University Targeting Interventions in Networks [pdf] (joint work with Andrea Galeotti, Sanjeev Goyal) Abstract We study the design of optimal interventions in network games, where individuals' incentives to act are affected by their network neighbors' actions. A planner shapes individuals' incentives, seeking to maximize the group's welfare. We characterize how the planner's intervention depends on the network structure. A key tool is the decomposition of any possible intervention into principal components, which are determined by diagonalizing the adjacency matrix of interactions. There is a close connection between the strategic structure of the game and the emphasis of the optimal intervention on various principal components: in games of strategic complements (substitutes), interventions place more weight on the top (bottom) principal components. For large budgets, optimal interventions are simple: they target a single principal component. Duke University Efficient and Envy Minimal Assignment [pdf] (joint work with Atila Abdulkadiroğlu) Abstract In priority-based allocation problems such as school choice, there is a trade-off between efficiency and the elimination of justified envy. We study the possibility of resolving this trade-off by finding a constrained optimal solution, i.e., an efficient matching with minimal envy. We establish a negative result: finding such a matching is an NP-hard problem and therefore computationally infeasible. The result is robust to various definitions of envy minimality, such as minimizing the number of justified envy instances or the indirect measure of maximizing the sum of match priorities. Despite the computational complexity result, we are able to provide a polynomial-time mechanism that is approximately constrained optimal (maximizes match priorities) in the class of sequential dictatorships in large markets. The large market model that we consider is representative of the school choice problem, and therefore the approximation is likely to be good in that setting.
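A concrete toy for the mechanism class in the envy-minimal assignment abstract above: the sketch below is a plain serial dictatorship, the baseline behind the sequential dictatorships the paper works with. It uses made-up data and only illustrates the priority-order logic (not the paper's approximately constrained-optimal mechanism): each agent, in priority order, takes her most-preferred school with a free seat, which yields a Pareto-efficient matching.

    def serial_dictatorship(order, prefs, capacity):
        """Assign each agent, in priority order, to her most-preferred
        school that still has a free seat; returns {agent: school}."""
        seats = dict(capacity)            # remaining seats per school
        match = {}
        for agent in order:
            for school in prefs[agent]:   # agent's ranking, best first
                if seats.get(school, 0) > 0:
                    match[agent] = school
                    seats[school] -= 1
                    break
        return match

    # Hypothetical instance: three students, two schools.
    prefs = {"s1": ["A", "B"], "s2": ["A", "B"], "s3": ["B", "A"]}
    capacity = {"A": 1, "B": 2}
    print(serial_dictatorship(["s1", "s2", "s3"], prefs, capacity))
    # -> {'s1': 'A', 's2': 'B', 's3': 'B'}

The outcome is efficient by construction, but nothing in it controls justified envy; the paper's point is precisely that reconciling the two optimally is NP-hard, so one settles for approximately constrained-optimal members of this class.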
Texas A&M University Coalition-Proof Mechanisms Under Correlated Information [pdf] (joint work with Huiyi Guo) Abstract The paper considers two types of mechanisms that are immune to coalitional manipulations: standard mechanisms and ambiguous mechanisms. In finite-dimensional type spaces, I characterize the set of all information structures under which every efficient allocation rule is implementable via an interim coalitional incentive compatible, interim individually rational and ex-post budget-balanced standard mechanism. The requirement of coalition-proofness reduces the scope of implementability under a non-negligible set of information structures. However, when ambiguous mechanisms are allowed and agents are maxmin expected utility maximizers, coalition-proof implementation can be obtained under almost all information structures. Thus, the paper sheds light on how coalition-proofness can be achieved by engineering ambiguity in mechanism rules. University of Warwick Authority and Information Acquisition in Cheap Talk with Informational Interdependence [pdf] Abstract I study the allocation of decision rights in a two-dimensional cheap talk game with informational interdependence and imperfectly informed senders. The Principal allocates decision rights among all players, including herself. Delegation is optimal when the expected informational gains outweigh the loss of control due to biased decisions. Delegating one decision leads to informational gains for the Principal when there are negative informational externalities (Levy and Razin, 2007). Partial delegation (of a controversial decision) is thus optimal when externalities are sufficiently strong. I characterize the maximum bias the Principal is willing to tolerate as a function of the informational gains. I also analyse agents' incentives for information acquisition. An agent invests in information when the expected utility gains from revealing it compensate for its costs. Truthful communication is a necessary condition for information acquisition, but its influence on beliefs must also be sufficiently large. This implies that centralization is always optimal when information costs are high. Endogenous information acquisition allows agents to specialize, which enhances communication incentives because it rules out contradictory information. Finally, I show that delegation leads to ex-post specialization: decision-makers typically receive more information about the more relevant state compared to centralization. UTS INTERDISTRICT SCHOOL CHOICE: A THEORY OF STUDENT ASSIGNMENT [pdf] (joint work with Fuhito Kojima, Bumin Yenmez) Abstract Interdistrict school choice programs—where a student can be assigned to a school outside of her district—are widespread in the US, yet the market-design literature has not considered such programs. We introduce a model of interdistrict school choice and present two mechanisms that produce stable or efficient assignments. We consider three categories of policy goals on assignments and identify when the mechanisms can achieve them. By introducing a novel framework of interdistrict school choice, we provide a new avenue of research in market design. Penn State University Sequential Mechanisms With ex post Participation Guarantees [pdf] (joint work with Itai Ashlagi and Constantinos Daskalakis) Abstract We study optimal screening mechanisms for selling multiple products to a buyer who learns her value for a different product at each period.
A mechanism may screen types over time or be static (screen types only in the last period), but must assign the buyer a non-negative utility ex post. We observe that there exists an optimal mechanism that determines the allocation of a product as soon as the buyer learns her value for that product. This observation allows us to solve for optimal mechanisms recursively, and to provide several structural properties of optimal mechanisms. We show that static mechanisms are sub-optimal if the buyer first learns her values for products that are ex ante less valuable. Under this condition, the ability to bundle products is less profitable than the ability to screen types dynamically. Penn State University Consumer-Optimal Market Segmentation [pdf] (joint work with Nima Haghpanah and Ron Siegel) Abstract Consumer surplus in a market is affected by how the market is segmented. We study the maximum consumer surplus across all possible segmentations of a given market served by a multi-product monopolist. We characterize markets for which the maximum consumer surplus equals a first-best benchmark (i.e., maximum total surplus minus minimum profit). The first-best benchmark is achievable whenever the seller does not find it profitable to screen types by offering multiple bundles, highlighting a novel impact of screening. We also characterize markets for which consumer surplus can be increased compared to the unsegmented market, and show that these markets are generic. We construct a simple segmentation that improves consumer surplus in these markets. Ben-Gurion University of the Negev Generalized Coleman-Shapley Indices and Total-Power Monotonicity [pdf] Abstract I introduce a new axiom for power indices, which requires the total (additively aggregated) power of the voters to be nondecreasing in response to an expansion of the set of winning coalitions; the total power thereby reflects the increase in collective power that such an expansion creates. It is shown that total-power monotonic indices that satisfy the standard semivalue axioms are probabilistic mixtures of generalized Coleman-Shapley indices, where the latter concept extends, and is inspired by, the notion introduced in Casajus and Huettner (2018). Generalized Coleman-Shapley indices are based on a version of the random-order pivotality that is behind the Shapley-Shubik index, combined with an assumption of random participation by players. Wuhan University Truthful Intermediation with Monetary Punishment [pdf] (joint work with Ruben Juarez) Abstract A mechanism chooses an allocation of a resource to intermediaries based on their reported ability to transmit it. We discover and describe the set of incentive compatible mechanisms when a monetary punishment of intermediaries who misreport their ability is possible. This class depends on the punishment function and the probability of punishment. It expands previous characterizations of incentive compatible mechanisms, in which punishment was not available. Furthermore, when the planner has the ability to select the punishment, we provide the minimal punishment necessary to achieve incentive compatibility and the corresponding class of first-best mechanisms. For any punishment, we discover the optimal mechanism for the planner.
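As background for the power-index abstract above: the random-order pivotality behind the Shapley-Shubik index can be computed by brute force for small weighted voting games. A short Python sketch on a hypothetical three-voter game (the paper's generalized Coleman-Shapley indices add random participation by players, which this sketch omits):

    from itertools import permutations

    def shapley_shubik(weights, quota):
        """Shapley-Shubik index by enumeration: a voter is pivotal in an
        ordering if her weight pushes the running total past the quota."""
        n = len(weights)
        pivots = [0] * n
        for order in permutations(range(n)):
            total = 0
            for voter in order:
                total += weights[voter]
                if total >= quota:
                    pivots[voter] += 1
                    break
        total_pivots = sum(pivots)
        return [p / total_pivots for p in pivots]

    # Hypothetical toy game: weights 50, 30, 20 with a 51-vote quota.
    print(shapley_shubik([50, 30, 20], 51))  # -> approximately [0.667, 0.167, 0.167]

The 50-weight voter is pivotal in four of the six orderings, so her index is 2/3 even though she holds only half the weight, a small instance of the non-proportionality that power indices are designed to capture.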
The Ohio State University Epistemic Experiments: Utilities, Beliefs, and Irrational Play [pdf] Abstract Inspired by the epistemic game theory framework, I elicit subjects' preferences over outcomes, beliefs about strategies, and beliefs about beliefs in a variety of simple games. I find that the prisoners' dilemma and the traditional centipede game are both Bayesian games, with many non-selfish types. Many players choose strategies that are clearly inconsistent with their elicited beliefs and preferences. But these instances of "irrationality" disappear when the game is made sequential and the player moves second, suggesting that irrationality is driven by the presence of strategic uncertainty. University of Texas at Dallas Signaling through Bayesian persuasion [pdf] Abstract This paper considers a Bayesian persuasion model in which the sender has private information about the payoff-relevant state prior to choosing an experiment. The set of payoff-relevant states is finite, and the sender's payoff is continuous and strictly increasing in the receiver's expectation of the state. It is shown that if full disclosure of the payoff-relevant state is weakly detrimental for the sender under any common prior between the sender and the receiver, then a single-crossing property of the sender's expected payoff across sender types and experiments arises. This single-crossing condition leads to the selection of separating equilibria by forward induction refinements, i.e., the sender's choice of experiment signals his type. The sender's payoff function being concave (and there being no value of persuasion) is stronger than the condition required for this outcome to occur. University of Bonn Large Elections with Endogenous Information [pdf] Abstract This paper studies majority elections with large electorates when each voter can acquire information about the election alternatives at a cost. I allow voters to have conflicting interests. I describe all equilibria for all cost regimes: in the polar case when costs are 'high', the equilibrium outcome is Downsian, meaning that the outcome that is preferred by the majority under the prior belief is elected. In the other extreme, when information is costless, the equilibrium outcome is full-information-equivalent, meaning that the outcome that is preferred by the majority under full information is elected. As a main result, I show that when costs are 'intermediate' there are equilibria where the majority group fails to coordinate and the minority-preferred outcome is elected with probability 1. More generally, I show that the equilibrium outcome in the non-Downsian equilibria maximizes a weighted utilitarian welfare function that satisfies the Pigou-Dalton principle of fairness. Connected with this fairness principle, I observe that the minority-preferred outcome is elected only in instances where this maximizes utilitarian welfare. This contrasts with the literature on special interest groups and lobbies (Olson, 1965; Tullock, 1983), which suggests unjust manipulability of political outcomes by small groups with large stakes. Bar Ilan University Valuing Information by Repeated Signals [pdf] (joint work with Ehud Lehrer) Abstract A decision maker who needs to choose an action for a state-dependent payoff but does not know the true state is offered information structures with noisy signals. Which should be preferred? The classical answer is to use the Blackwell informativeness ordering, which is an extremely partial ordering.
We permit the decision maker to conduct multiple sequential queries of an information structure, with each query reducing the expected error in distinguishing between the states, towards identifying the true state. Comparing information structures by the reduction in state-distinction error per query, utilising concepts from information theory such as Chernoff information and large deviations theory, we obtain a total ordering that monotonically extends the Blackwell ordering. Moreover, our ordering is 'objective' in the sense of being calculable from the information structures themselves, independently of priors or of specific decision problems, and it yields a simple operational interpretation. Using the same underlying construction of repeated signals and large deviations theory, the analysis is extended to states changing under i.i.d. and Markov processes. Total orderings of information structures with decision-theoretic justification are again obtained, but these do depend on the initial hypotheses adopted by a decision maker. Bielefeld University The Transmission of Continuous Cultural Traits in Endogenous Social Networks (joint work with Fen Li) Abstract We study an OLG model of the transmission of continuous cultural traits across generations in an endogenous social network. Children learn their cultural trait from their parents and their social environment. Parents want their children to adopt a cultural trait that is similar to their own and engage in the socialization process of their children by forming new links or deleting connections. Changing links from the inherited network is costly, but having many links is beneficial. Studying the dynamics of cultural traits and networks, we find that polarization may obtain when extremist subgroups disconnect from the rest of society. This is observed if the costs of network changes and the benefits from integration are low. For intermediate costs, convergence of all traits to one of the extremists' traits may occur. Large costs and/or large benefits from interactions always imply convergence to a moderate consensus. University of Auckland Strategic Games from an Observer's Perspective [pdf] (joint work with Elon Kohlberg and John W. Pratt) Abstract We investigate the implications of considering an outside observer of a game who believes that, no matter how many times she sees the outcome of similar games, she will not be able to give beneficial advice to any player. We argue for a particular specification of what the formal definition of this intuitive statement should be. We then show that such an observer should believe that the players are playing a correlated equilibrium, though she may be uncertain exactly which correlated equilibrium they are playing. Since the set of correlated equilibria is convex, her beliefs themselves actually constitute a correlated equilibrium. We further show that if the observer believes that there is nothing "connecting" the players in the game beyond what is explicitly described in the rules of the game, the observer must believe that the players are playing a Nash equilibrium, though, again, she may be uncertain which Nash equilibrium they are playing. Collegio Carlo Alberto Price Setting on a Network [pdf] Abstract Most products are produced and sold by supply chain networks, where an interconnected network of producers and intermediaries set prices to maximize their profits. I show that there exists a unique equilibrium in a price-setting game on a network.
The key distortion reducing both total profits and social welfare is multiple marginalization, which is magnified by strategic interactions. Individual profits are proportional to influentiality, a new measure of network centrality defined by the equilibrium characterization. The results emphasize the importance of the network structure when considering policy questions such as mergers or trade tariffs. Boston University Very Biased Political Experts: Cheap Talk, Persuasion and the Political Extremes [pdf] Abstract Many lobbying organisations use paid experts to try to sway policymakers. These experts typically share the organisation's strong ideological preferences, partly due to self-selection on ideology, and partly due to incentives provided by their employers. If we consider the experts' recommendations to take the form of cheap talk, then it is natural to think that their biases might undermine any informational content in their messages. Two questions then come to mind: first, can strongly biased experts credibly convey any information, and second, are they able to distort policymakers' decisions in favour of their bias? Existing cheap talk models predict that communication can occur if and only if the sender's bias is small. I present a cheap talk model where the two receivers engage in Downsian political competition. I show that, even if the sender (expert) is extremely biased, there can be partial revelation about the location of the median voter. Public messages convey no information, but the expert can privately recommend a policy platform to one politician, essentially acting as a political advisor. Partial revelation is made possible by the constraining force of political competition: policies that are too extreme simply cannot win. Furthermore, the expert is able to distort policy in their favoured direction by recommending the most distorted policy platform that still guarantees a win in the election. I also compare this cheap talk model to a Bayesian persuasion game, where the expert designs a public experiment. I show that this is equivalent to a relaxed gerrymandering problem, and derive the lowest upper bound on the expert's utility under persuasion. I discuss implications for campaigning strategies by ideologically biased organisations: in particular, fat-tailed (polarised or extreme) voter preferences lead to greater gains from Bayesian persuasion than thin-tailed preferences. IMF College Ranking by Revealed Preference From Big Data: An Authority-Distribution Analysis [pdf] Abstract We apply authority distribution (Hu and Shapley, 2003) to sort out a linear ordering for hundreds of alternatives from the preferences revealed by millions of consumers. The background context is the ranking of US colleges. The revealed preferences reflect much broader criteria than those set by popular college rankings, and our approach recognizes the heterogeneity in both the characteristics of colleges and the personal considerations of consumers. We also aggregate the spillover effects in the network of college interactions, and this leads to a robust steady-state solution to the counterbalance equilibrium of direct bilateral influence. The solution is likely the most comprehensive and the most objective college ranking, compared with the dozens of others in the market. The approach can be applied in many other areas, such as ranking sports teams and academic journals and calculating real effective exchange rates.
Keywords: college ranking; revealed preference; authority distribution; endogenous weighting; big data; matching JEL Codes: C68, C71, C78, D57, D58, D74 RUTGERS UNIV Incentive Compatible Self-fulfilling Mechanisms and Rational Expectations [pdf] Abstract This paper extends the exact equivalence result between the allocations realized by self-fulfilling mechanisms and rational expectations equilibrium allocations in Forges and Minelli (1997) to a large finite-agent replica economy where different replicates of the same agent are allowed to receive different private information. The first result states that the allocation realized by any incentive compatible self-fulfilling mechanism is an approximate rational expectations equilibrium allocation. Conversely, the second result states that we can associate with any given rational expectations equilibrium an incentive compatible self-fulfilling mechanism whose equilibrium allocation approximately coincides with the rational expectations equilibrium allocation. ESMT Berlin Marginality, dividends, and the value in games with externalities [pdf] (joint work with André Casajus) Abstract We introduce a notion of marginality for games with externalities. It rests on the idea that a player's contribution in an embedded coalition is measured by the change that results when the player is removed from the game. To evaluate the latter, we use the concept of restriction operators introduced by Dutta et al. (J Econ Theor, 145, 2010, 2380-2411). We provide a characterization result using efficiency, anonymity, and restriction marginality, which generalizes Young's characterization of the Shapley value. An application of our result yields a new characterization of the solution put forth by Macho-Stadler et al. (J Econ Theor, 135, 2007, 339-356) without linearity. Bank of Canada Non-Competing Data Intermediaries [pdf] Abstract I study competition among data intermediaries—data brokers and technology companies that collect consumer data and sell them to downstream firms. Under the assumption that firms use consumer data to extract rents, intermediaries have to compensate consumers for their personal data. I show that competition among intermediaries fails: if they offer high compensation to obtain more consumer data, consumers share their data with multiple intermediaries. This lowers the price of the data in the downstream market and hurts intermediaries. I show that this leads to multiple equilibria with different allocations of data among intermediaries. For example, there is a monopoly equilibrium where a single intermediary extracts the maximum possible surplus, even though the model excludes network externalities and returns to scale. There is also a continuum of equilibria with different degrees of data concentration. I show that data concentration benefits intermediaries and hurts consumers. This has potential implications for the regulation of dominant online platforms. UNIVERSIDAD DE CHILE Opinion Polarization under Search for Peers [pdf] (joint work with Axel Böhm, Aris Daniillidis) Abstract We propose a model of discrete-time opinion dynamics where every agent searches randomly for a peer to update his opinion. Agents' opinions are assumed to be positions on the [-1,1] interval, and their initial distribution is represented by a symmetric probability density. Agents must update their respective opinions once at each period with another agent, whom we call a peer. The update is unilateral and takes the average of the two opinions.
We assume that updating opinions is costly for the agents, and the cost is an increasing function of the distance between the agent and his peer. An agent can refuse to update with the first peer he encounters, due to the cost of updating, and instead search for another peer. This search has a cost c>0. An agent can search as many times as he wishes, each time paying the cost c. A new search is independent of any previous searches, and the agent can update only with the last peer he finds in that period. Once all agents find their peers at period t and revise their opinions, we pass to period t+1, where agents are distributed according to a new distribution. We are interested in how this distribution evolves and converges. If the search cost is sufficiently high, agents update with the first peer they find. We show that in this case the density converges to the Dirac delta. If the search cost is sufficiently low (so that agents do not accept some peers), the distribution converges to an atomic distribution, where the number of atoms and the variance of the limit distribution increase as the cost of search decreases. Bowdoin College The Power of Context in Game-Theoretic Models of Networks: Ideal Point Models with Social Interactions [pdf] (joint work with Mohammad T. Irfan, Tucker Gordon) Abstract Game theory has been widely used for modeling strategic behaviors in networked multiagent systems. However, the context within which these strategic behaviors take place has received limited attention. We present a model of strategic behavior in networks that incorporates the behavioral context, focusing on the contextual aspects of congressional voting. One salient predictive model in political science is the ideal point model, which assigns each senator and each bill a number on the real line representing the political spectrum. We extend the classical ideal point model with network-structured interactions among senators. In contrast to the ideal point model's prediction of individual voting behavior, we predict joint voting behaviors in a game-theoretic fashion. The consideration of context allows our model to outperform previous models that solely focus on the networked interactions with no contextual parameters. We focus on two fundamental problems: learning the model using real-world data and computing stable outcomes of the model with a view to predicting joint voting behaviors and identifying the most influential senators. We demonstrate the effectiveness of our model through experiments using data from the 114th U.S. Congress. King's College London One for all, all for one—von Neumann, Wald, Rawls, and Pareto [pdf] (joint work with Mehmet S. Ismail) Abstract Applications of the maximin criterion extend beyond economics to statistics, politics, philosophy, operations research, and engineering. However, the maximin criterion—be it von Neumann's, Wald's, or Rawls'—draws fierce criticism, in part because of its extremely pessimistic stance. I address the criticisms of the maximin criterion and propose a novel approach, dubbed the optimin criterion, which suggests that we should (Pareto) optimize—rather than maximize—the minimum under a reasonable social contract: do not harm yourself for the sake of harming others.
The optimin criterion (i) addresses criticisms of the maximin criterion, including Harsanyi's and Arrow's; (ii) helps explain experimental deviations from utilitarian concepts such as the Nash equilibrium; and (iii) provides insights into sustaining cooperation in noncooperative games. The optimin criterion not only coincides with (1) Wald's statistical decision theory when Nature is the antagonist, but also generalizes (2) stable matchings in matching models such as college admission problems and the housing market, (3) Nash equilibrium in n-person constant-sum games, and (4) the competitive equilibrium in the Arrow-Debreu economy. Moreover, every Nash equilibrium satisfies the optimin criterion in a suitably defined game. University of Wisconsin-Madison Rational Bubbles and Middlemen [pdf] (joint work with Yu Awaya, Makoto Watanabe) Abstract This paper develops a finite-period model of rational bubbles where trade of an asset takes place through a chain of middlemen. We show that there exists a unique equilibrium, and a bubble can occur due to higher-order uncertainty. Under reasonable assumptions, the equilibrium price is increasing and accelerating during bubbles although the fundamental value is constant over time. Bubbles may be detrimental to the economy; however, bubble-bursting policies affect agents' beliefs, and it turns out that they have no effect on welfare. We also demonstrate that the possibility that middlemen obtain more information leads to larger bubbles. Institute of Economics, Academia Sinica Virtual implementation by bounded mechanisms: Complete information [pdf] (joint work with Michele Lombardi) Abstract When there are at least three agents, any social choice rule F is virtually implementable both in Nash and in rationalizable strategies by a bounded mechanism. No "tail-chasing" constructions, common in the constructive proofs of the literature, are used to ensure that undesired strategy combinations do not form a Nash equilibrium. University of Texas at Austin Competing to persuade a rationally inattentive agent [pdf] (joint work with Mark Whitmeyer) Abstract The standard Bayesian persuasion literature allows senders to design arbitrarily informative signal structures, and assumes that receivers costlessly process all information made available to them. This is an unrealistic assumption in many natural contexts, where agents may rationally choose to stay partly ignorant. We study a model of competitive information disclosure by two senders, with the twist that the receiver is allowed to garble each sender's experiment. The more she garbles, the lower her learning costs are. Interestingly, we find that as long as learning costs are not too low, there is an interval of prior means over which it is an equilibrium for both senders to offer full information. Furthermore, the interval expands as learning costs grow. This result stands in sharp contrast to Wei (2018), who shows that in this framework, providing full information is never optimal when there is a single sender. Intuitively, when there are two senders, information on one of them substitutes for information on the other, and further, learning costs lead the receiver to ignore some information available on each sender. Then, starting from a situation of full disclosure, if a sender deviates to restrict the receiver's learning, the receiver can compensate for it by using some of the surplus information on the other sender.
The receiver thereby maintains the probability of making a correct decision and leaves the deviating sender's payoff unaffected. We thus provide a novel insight into why competition might encourage information disclosure, and apply our results to the disclosure of clinical research outcomes by pharmaceutical companies to prescribing doctors. University of Central Florida Risk Dominance, Beliefs, and Equilibrium [pdf] Abstract The term "risk-dominance" has been precisely defined in a very narrow context, but has been used much more broadly. I provide a brief survey of the literature on risk-dominance, and note that risk-dominance is related to the difficulty of coordination. This suggests that it is not just an equilibrium selection concept, but signifies something outside of equilibrium. I present the "maximum entropy" approach of Jaynes (1957) to forming beliefs given linear constraints, and apply it to first- and second-order beliefs in two-player games in which coordination and payoff-maximization constraints have been loosened; I find that this favors risk-dominant equilibria where those are well-defined, but favors risk-dominant nonequilibria where those are intuitive. By way of explication, the model is compared to other models of agents who do not perfectly maximize their payoffs conditional on other agents' actions. A conclusion includes some thoughts on modeling and the enterprise of equilibrium selection. The University of Texas at Austin Disclosure of Sequential Evidence [pdf] Abstract I study the disclosure of history, which is modeled as a sequence of hard evidence. A sender sees a history about an unknown state of the world and tries to influence an uninformed receiver's belief. The receiver is uncertain about the length of the history, and the sender can conceal dated signals and disclose only the more recent ones. In any equilibrium, a set of the most recent signals that yields the maximal difference between the numbers of favorable and unfavorable signals is always disclosed. In addition, the sender sometimes discloses earlier and seemingly less favorable signals, but the receiver's belief is not influenced by this excess evidence. University of Hawaii Incentive-Compatible Simple Mechanisms [pdf] (joint work with Jung S You) Abstract We consider mechanisms for allocating a fixed amount of divisible resources among multiple agents when they have quasilinear preferences and can only report messages in a finite-dimensional space. We show that, in contrast with infinite-dimensional message spaces, efficiency is not compatible with implementation in dominant strategies. However, for weaker notions of implementation, such as in Nash equilibrium, we find that a class of 'VCG-like' mechanisms is the only efficient selection in one-dimensional message spaces. The trifecta in mechanism design, namely efficiency, fairness and simplicity of implementation, is achieved via a mechanism that we introduce and characterize in this paper. Norges Bank Dividend Payouts and Rollover Crises [pdf] (joint work with Plamen T Nenov) Abstract We study dividend payouts when banks face coordination-based rollover crises. Banks in the model can use dividends both to risk shift and to signal their available liquidity to short-term lenders, thus influencing the lenders' actions. In the unique equilibrium, both channels induce banks to pay higher dividends than in the absence of a rollover crisis. In our model, banks exert an informational externality on other banks via the inferences and actions of lenders.
Optimal dividend regulation that corrects this externality and promotes financial stability includes a binding cap on dividends. We also discuss testable implications of our theory.

Rutgers University
Approximating Nash Equilibrium Via Multilinear Minimax [pdf]
Abstract On the one hand, we state Nash equilibrium (NE) as a formal theorem on multilinear forms and give a pedagogically simple proof, free of game theory terminology. On the other hand, inspired by this formalism, we prove a multilinear minimax theorem, a generalization of von Neumann's bilinear minimax theorem. Next, we relate the two theorems by proving that the solution of a multilinear minimax problem, computable via linear programming, serves as an approximation to a Nash equilibrium point, where its multilinear value provides an upper bound on a convex combination of expected payoffs. Furthermore, each positive probability vector, once assigned to the set of players, induces a diagonally-scaled multilinear minimax optimization with a corresponding approximation to NE. In summary, in this note we exhibit an infinity of multilinear minimax optimization problems, each of which provides a polynomial-time computable approximation to a Nash equilibrium point, known to be difficult to compute. The theoretical and practical qualities of these approximations are the subject of further investigations.

University of Western Ontario
Fairness versus Favoritism in Conflict Mediation [pdf] (joint work with Charles Zheng)
Abstract A mediator proposes to two adversaries a peaceful split of their contested good to avoid a conflict, modeled as an all-pay auction. The proposed split can manipulate the outcome of the conflict by influencing the adversaries' posterior beliefs when they reject it. Despite the adversaries being ex ante identical and having equal welfare weights, the socially optimal proposal is either a biased split such that the favored adversary always accepts it, or the equal split. The former outperforms the latter if and only if an adversary's prior probability of being weak in conflict is below an exogenous threshold.

Economics Department, UWO
Knitting and Ironing: Reducing Inequalities via Auctions [pdf] (joint work with Charles Z. Zheng)
Abstract This paper characterizes all the mechanisms that achieve ex ante Pareto optimality via voluntary wealth transfers induced by auctions. Two items, one good, the other bad, are to be assigned to bidders who value money differently, and the taker of the bad is compensated with proceeds from the good. Pareto-improving transfers occur indirectly when bidders who value money less buy the good, and those who value money more are paid to take the bad. We introduce a new concept, the two-part operator, to integrate a bidder's countervailing information rents, one in buying the good, the other in taking the bad. We bisect the optimal mechanism problem, the objective of which is nonlinear, into two linear programs, solve each via ironing, and knit the two into the solution for the original problem. We find that any Pareto optimum corresponds to the concatenation of two auctions, each determined by a two-part operator derived from such procedures. The optimal mechanism breaks the linkage between the hierarchy of types and the hierarchy of surpluses when the budget balance condition is binding.

Stony Brook University
Dynamic Tournament Model of Private Tutoring Expenditure
Abstract How does the hierarchy of colleges affect households' pre-tertiary private tutoring expenditure?
While empirical evidence suggests that the main purpose of private tutoring expenditure is to win the college admission competition, it is observed that parents of students with higher school performance spend more on tutoring. At the same time, even parents of students with poor school performance spend, on average, 5% of their income on private tutoring. To answer the given question and to capture the distribution of private tutoring expenditure, I specify an estimable dynamic tournament model that incorporates the college admission competition between households. The model allows for endogenous cutoffs, which are determined by the private tutoring decisions of N households. Using the Korean Education Longitudinal Study 2005, which has detailed information on households' education expenditure, I estimate the dynamic tournament model by simulated maximum likelihood. Based on the structural estimates, I conduct counterfactual experiments related to the college hierarchy to examine the resulting changes in the distribution of private tutoring expenditure.

Massachusetts Institute of Technology
Can Rescues by Banks Replace Costly Bail-Outs in Financial Networks? [pdf]
Abstract I model rescue formation in financial networks, where interbank obligations create interdependencies in shareholders' equity. I show that welfare-maximizing networks are symmetrically connected through intermediate levels of interbank liabilities. In a coalition formation framework, welfare-maximizing networks eliminate the well-known trade-off between risk sharing and systemic fragility in financial networks. Endogenously arising rescues show that potential contagiousness does not necessarily imply financial instability. Instead, financial stability is indicated by (i) potential bankruptcy costs internalized by banks, and (ii) the loss absorption capacity of the network (i.e., banks' aggregate capital). The results provide general insights into coalition formation in networks facing systemic threats.

Maastricht University
The Midpoint Constrained Egalitarian Bargaining Solution [pdf] (joint work with Shiran Rachmilevitch)
Abstract A payoff allocation in a bargaining problem is midpoint dominant if each player obtains at least one n-th of her ideal payoff. The egalitarian solution of a bargaining problem may select a payoff configuration that is not midpoint dominant. We propose and characterize the solution that selects, for each bargaining problem, the feasible allocation that is closest to the egalitarian allocation, subject to being midpoint dominant. Our main axiom, midpoint monotonicity, is new to the literature; it imposes the standard monotonicity requirement whenever doing so does not result in selecting an allocation that is not midpoint dominant. In order to prove our main result, we develop a general extension theorem for bargaining solutions that are order-preserving with respect to any order on the set of bargaining problems.

The University of Massachusetts
Games Where Players Offer Games to Play: A Foundation of Market Design [pdf]
Abstract US federal agency rulemakings, such as the FCC Broadcast Incentive Auctions, require public commenting as stipulated by the Administrative Procedure Act and often involve coalitional bargaining among stakeholders.
The current noncooperative game theory doctrine, which assumes that participants take the extensive form of the game as given and considers only individual behavior, does not provide an accurate description of real-world rulemaking processes. This paper develops a model where participants propose mechanisms before they agree to commit, and in equilibrium choose to play the core-selecting mechanism. This result provides a theoretical foundation for the finding of Roth (1991) that successful mechanisms in the real world are the ones that produce stable matchings, and also for the finding that Vickrey auctions are only rarely used.

The University of Massachusetts
On the Virtue of Being Regular and Predictable: A Structural Analysis of the Primary Dealer System in the United States Treasury Auctions [pdf]
Abstract We analyze the policy question of whether the US Treasury should maintain the current security distribution mechanism of the primary dealer system in the Treasury market to achieve the debt management objective of the lowest funding cost over time. We study data from 3,790 auctions of Treasury securities issued between May 2003 and February 2018 (gross total issuance: $100.5 trillion). We identify potential increases in auction high-rate volatilities due to a decline in primary dealer activities as a potential policy concern. We then compare the effectiveness of the primary dealer system, the direct bidding system, and the syndicate bidding system in addressing this concern, using a novel asymptotic approximation method that does not depend on equilibrium selection or normality of the bidder value distribution. We find that the primary dealer system achieves significantly lower funding cost volatilities while maintaining an equal level of costs, and thus contributes to the debt management objective.

Maastricht University
Persuading Voters With Private Communication Strategies [pdf] (joint work with P. Jean-Jacques Herings, Dominik Karos)
Abstract We consider a multiple-receiver Bayesian persuasion model, where a Sender wants to implement a new proposal and Receivers with homogeneous preferences vote either for or against the proposal. Prior to the vote, the Sender chooses a communication strategy that sends private correlated signals to the Receivers. First, we show that if Receivers vote sincerely, the Sender can improve upon a public communication strategy in terms of expected utility by employing private signals. However, under the optimal communication strategy, sincere voting is not an equilibrium. In order to overcome this issue, we characterize the set of communication strategies under which sincere voting constitutes a Bayes-Nash equilibrium and determine the optimal communication strategy.

The University of Nebraska-Lincoln
Efficient and Neutral Mechanisms in Almost Ex Ante Bargaining Problems [pdf]
Abstract I consider two-person bargaining problems in which the mechanism is selected at the almost ex ante stage--when there is some positive probability that players may have learned their private types--and the chosen mechanism is implemented at the interim stage. For these problems, I define almost ex ante incentive efficient mechanisms and apply the concept of neutral optima. I show that those mechanisms may not be ex ante incentive efficient. This paper suggests that ex ante incentive efficient mechanisms are not robust to a perturbation of the ex ante informational structure at the time of mechanism selection.
The University of Nebraska-Lincoln
A Noncooperative Foundation of the Neutral Bargaining Solution [pdf]
Abstract This paper studies Myerson's neutral bargaining solution for a class of Bayesian bargaining problems in which the solution is unique. For this class of examples, I consider a noncooperative mechanism-selection game. I find that all of the interim incentive efficient mechanisms can be supported as sequential equilibria. Further, standard refinement concepts and selection criteria do not restrict the large set of interim Pareto-undominated sequential equilibria. I provide a noncooperative foundation for the neutral bargaining solution by characterizing the solution as a unique coherent equilibrium allocation.

Virginia Tech
Equilibrium configurations in the heterogeneous model of signed network formation [pdf]
Abstract In a model of signed network formation as proposed by Hiller (2017), this paper studies the possible Nash equilibrium configurations. I characterize the conditions under which complete networks or segregation into two uneven groups can be sustained in equilibrium in the case of homogeneous agents. I also specify the Nash equilibria in the case of heterogeneous agents. In the model with four agents and two types, I find four categories of possible network configurations. A strong (weak) player refers to a player who has a greater (lower) exogenous intrinsic strength. The first Nash equilibrium configuration obtains when everyone is friends with everyone else. The second Nash equilibrium configuration is such that players of the same type coalesce. In the third configuration, one of the players is bullied by the others. In the fourth configuration, there exist three groups consisting, respectively, of two strong players, one weak player, and one strong player. I further generalize the first and second Nash equilibrium configurations to the n-player case, and I derive the specific conditions under which they arise in a Nash equilibrium.

University of Pittsburgh
Characterization, Existence, and Pareto Optimality in Insurance Markets with Asymmetric Information with Endogenous and Asymmetric Disclosures: Revisiting Rothschild-Stiglitz [docx] (joint work with Joseph Stiglitz, Jungyoll Yun)
Abstract We study the Rothschild-Stiglitz model of competitive insurance markets with endogenous information disclosure by both firms and consumers. We show that an equilibrium always exists (even without the single-crossing property), and characterize the unique equilibrium allocation. With two types of consumers, the outcome is particularly simple, consisting of a pooling allocation that maximizes the well-being of the low-risk individual (along the zero-profit pooling line) plus a supplemental (undisclosed and nonexclusive) contract that brings the high-risk individual to full insurance (at his own odds). We show that this outcome is extremely robust and Pareto efficient.

University of Pittsburgh
Mediated Persuasion [pdf]
Abstract We study a game of strategic information design between a sender, who chooses state-dependent information structures; a mediator, who can then garble the signals generated from these structures; and a receiver, who takes an action after observing the signal generated by the first two players. We characterize sufficient conditions for information revelation, compare outcomes with and without a mediator, and provide comparative statics with regard to the preferences of the sender and the mediator.
We also provide novel conceptual and computational insights about the set of feasible posterior beliefs that the sender can induce, and use these results to obtain insights about equilibrium outcomes. The sender never benefits from mediation, while the receiver might. The receiver benefits when the mediator's preferences are not perfectly aligned with hers; rather, the mediator should prefer more information revelation than the sender, but less than perfect revelation.

Princeton University
Information Structures and Information Aggregation in Threshold Equilibria in Elections [pdf]
Abstract I study a model of information aggregation in elections with multiple states of the world and multiple signals. I focus on a particularly simple class of equilibria - threshold equilibria - and completely characterize information aggregation within this class. In particular, I identify conditions on the distributions of the signals that are necessary and sufficient for information aggregation in every sequence of threshold equilibria, as well as simple conditions that are sufficient but not necessary for information aggregation in threshold equilibria. I also identify (generic) conditions that are necessary and sufficient for information not to be aggregated in any sequence of threshold equilibria. As a consequence, my analysis provides sufficient conditions for the existence of a sequence of equilibria that does not aggregate information.

Harvard University
A Perfectly Robust Approach to Multiperiod Matching Problems [pdf]
Abstract Many two-sided matching situations involve multiperiod interaction. Traditional cooperative solutions, such as stability and the core, often identify unintuitive outcomes (or are empty) when applied to such markets. As an alternative, this study proposes the criterion of perfect alpha-stability. An outcome is perfect alpha-stable if no coalition prefers an alternative assignment in any period that is superior for all plausible market continuations. Behaviorally, the solution combines foresight about the future with a robust evaluation of contemporaneous outcomes. A perfect alpha-stable matching exists, even when preferences exhibit intertemporal complementarities. A stronger solution, the perfect alpha-core, is also investigated. Extensions to markets with arrivals and departures, transferable utility, and many-to-one assignments are proposed.

Pennsylvania State University
On Dynamic Pricing [pdf] (joint work with Ilia Krasikov, Rohit Lamba)
Abstract This paper studies a canonical model of dynamic price discrimination, in which firms can endogenously discriminate among consumers based on the timing of information arrival and/or the timing of purchase. A seller and a buyer trade repeatedly. The buyer's valuation for the trade is private information, and it evolves over time according to a renewal Markov process. The seller offers a dynamic pricing contract which options a sequence of forwards. As a first step, we characterize what this relatively simple dynamic pricing contract achieves. We then show that this contract is (a) the optimum when a single object is sold at a fixed time and (b) the optimum under strong monotonicity in the repeated sales model. The full optimum, however, may use buybacks, which our dynamic pricing instruments do not allow. Moreover, we show that the optimum is backloaded, and we provide a theoretical bound on the fraction of optimal revenue that the seller can extract using our mechanism.
The construction of the mechanism and bounds is then extended to multiple players to study repeated auctions. At every step of the analysis, a mapping is established between the pricing model (indirect mechanisms) and general direct mechanisms. In this process, novel tools are developed to study dynamic mechanism design when global incentive constraints bind.

Tsinghua University
Hierarchical Bayesian Persuasion [pdf] (joint work with Zhonghong Kuang, Jaimie W. Lien, Jie Zheng)
Abstract We study a hierarchical Bayesian persuasion game with a sender, a receiver, and several potential intermediaries, generalizing the framework of Kamenica and Gentzkow (2011, AER). The sender must be persuasive through the hierarchy of intermediaries in order to reach the final receiver, whose action affects all players' payoffs. The intermediaries care not only about the true state of the world and the receiver's action, but also about their reputations, measured by whether the receiver's action is consistent with their recommendation. We characterize the perfect Bayesian equilibrium for the optimal persuasion strategy, and show that the persuasion game has multiple equilibria but a unique payoff outcome. Among the equilibria, two natural persuasion strategies on the hierarchy arise: persuading the intermediary who is immediately above one's own position, and persuading the least persuadable individual in the hierarchy. As major extensions of the main model, we analyze scenarios in which intermediaries have private information, the endogenous reputation of intermediaries, and the case in which intermediaries have an outside option. We also discuss, as minor extensions, the endogenous choice of persuasion path, parallel persuasion, and costly persuasion. The results provide insights for settings where persuasion is prominent in a hierarchical structure, such as corporate management, higher education admissions, job promotion, and legal proceedings.

University of Economics Prague
Observing Actions in Bayesian Games [pdf] (joint work with Dominik Grafenhofer)
Abstract We study Bayesian coordination games where agents receive noisy private information over the game's payoff structure, and over each other's actions. If private information over actions is precise, we find that agents can coordinate on multiple equilibria. If private information over actions is of low quality, equilibrium uniqueness obtains, as in a standard global games setting. The current model, with its flexible information structure, can thus be used to study phenomena such as bank runs, currency crises, recessions, riots, and revolutions, where agents rely on information over each other's actions.

Singapore Management University
Maskin Meets Abreu and Matsushima [pdf] (joint work with Yi-Chun Chen, Yifei Sun, and Siyang Xiong)
Abstract We study the classical Nash implementation problem due to Maskin (1999), but allow for the use of lotteries and monetary transfers as in Abreu and Matsushima (1992, 1994). We therefore unify two well-established but somewhat orthogonal approaches in implementation theory. We show that Maskin monotonicity is a necessary and sufficient condition for mixed-strategy Nash implementation by a finite (albeit indirect) mechanism.
In contrast to previous papers, our approach possesses the following appealing features simultaneously: finite mechanisms (with no integer or modulo game) are used; mixed strategies are handled explicitly; neither transfers nor bad outcomes are used in equilibrium; our mechanism is robust to information perturbations; and the size of off-equilibrium transfers can be made arbitrarily small. Finally, our result can be extended to infinite/continuous settings and ordinal settings.

Lehigh University
Mediated Talk: An Experiment (joint work with Andreas Blume and Wooyoung Lim)
Abstract Theory suggests that mediation has the potential to improve information sharing. This paper experimentally investigates whether and how this potential can be realized. It is the first such study in a cheap-talk environment. We find that mediation encourages players to use separating strategies. Behavior gravitates toward pooling with direct talk and toward separation with mediated talk. This difference in behavior translates into a moderate payoff advantage of mediated over direct talk. There are systematic departures from the equilibrium prediction, characterized by over-communication by senders in the initial rounds of direct talk, stable under-communication by senders under mediated talk, and over-interpretation (attributing too much information to messages) by receivers under both direct and mediated talk.

Rutgers University
The Multilinear Minimax Relaxation of Bimatrix Games and Comparison with Nash Equilibria via Lemke-Howson [pdf] (joint work with Bahman Kalantari)
Abstract It is known that Nash equilibrium computation is PPAD-complete, shown first by Daskalakis, Goldberg, and Papadimitriou for 4 or more players, then by the same authors for 3 players, and even for the bimatrix case by Chen and Deng. On the other hand, Dubey showed that Nash equilibria of games with smooth payoff functions are generally Pareto-inefficient. In particular, this means that a strategy, possibly mixed, that is not a Nash equilibrium may admit a higher payoff for both players than a Nash equilibrium. Kalantari has described a multilinear minimax relaxation (MMR) that provides an approximation to a convex combination of expected payoffs in any Nash equilibrium via linear programming. In this paper, we study this relaxation for the bimatrix game, with payoff matrices normalized to values between 0 and 1, solving its corresponding LP formulation and comparing its performance to the Lemke-Howson algorithm. We also give a game-theoretic interpretation of the MMR formulation for the bimatrix game, which involves a meta-player. Our relaxation has the following theoretical advantages: (1) it can be computed in polynomial time; (2) for at least one player, the computed MMR payoff is at least as good as any Nash equilibrium payoff; and (3) there exists a computable convex scaling of the payoff matrices so that the corresponding expected payoffs are equal. Computationally, we have compared our approach with the state-of-the-art implementation of the Lemke-Howson algorithm. In problems with up to 150 actions, apparently the guaranteed computational limit of Lemke-Howson, we observe the following advantages: (i) MMR outperformed Lemke-Howson in time complexity; (ii) in about 80% of the cases, the MMR payoffs for both players are better than any Nash equilibrium payoffs; (iii) in the remaining 20%, while one player's payoff is better than any Nash equilibrium payoff, the other player's payoff is only within a relative error of 17%.
In summary, MMR is a strong relaxation for Nash equilibrium.

Economics Institute of the Czech Academy of Sciences
Preferences, Beliefs, and Strategic Plays in Games [pdf] (joint work with Rudolf Kerschbamer and Jianying Qiu)
Abstract We examine strategic plays in games while controlling for distributional preferences and beliefs. We elicit players' distributional preferences before they play a series of two-person strategic games. We also elicit players' beliefs about their opponent's strategies. Our control of distributional preferences does not rely on any particular parametric form; it is instead based on revealed preferences. The payoff vectors in the strategic games are the same as the payoff vectors in the distributional preferences task. This allows us to examine whether preferences elicited in a static scenario - dictator-game-like situations - predict choices in strategic games. The first-order beliefs, combined with the payoff features in some of the normal-form games, allow us to examine how beliefs might enter preferences directly, as suggested by psychological game theories. Finally, since players in the strategic games know their opponent's choices in the distributional preferences tests, our design allows us to examine whether this information is used in making one's own choice. In particular, we explore indirect reciprocity, i.e., do players behave nicely toward people who are nice to others? Experimental results show that the rational equilibrium prediction performs no better than randomness, whereas there is a strong consistency between choices in the distributional preferences task and choices in the strategic games, both at the population level and at the individual level. We also find supporting evidence that beliefs could enter preferences directly. Finally, there is some evidence that people are nice to people who are nice to others.

Bielefeld University; University of Paris 1
Anti-conformism in the threshold model of collective behavior [pdf] (joint work with Michel Grabisch, Fen Li)
Abstract We provide a first study of the threshold model where both conformist and anti-conformist agents coexist. The paper is in the line of previous work by the first author (Grabisch et al., 2018), whose results are used at some points in the present paper. Our study essentially bears on the following question: given a society of agents, a certain topology of the network linking these agents, and a mechanism of influence for each agent, how will the behavior/opinion of the agents evolve over time, and in particular, can it be expected to converge to some stable situation, and in that case, which one? We are also interested in the existence of cascade effects, as these may constitute an undesirable phenomenon in collective behavior. We divide our study into two parts. In the first, we study the threshold model on a fixed complete network, where everyone is connected to everyone, as in the work of Granovetter (1978). We study the case of a uniform distribution of the threshold and of a Gaussian distribution, and finally give a result for arbitrary distributions, supposing there is one type of anti-conformist. In the second part, the graph is no longer complete, and we suppose that the neighborhood of an agent is random, drawn at each time step from a distribution. We distinguish the case where the degree (number of links) of an agent is fixed from the case of an arbitrary degree distribution.
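To make the threshold dynamics above concrete, the following is a minimal, illustrative Python simulation of a Granovetter-style update on a fixed complete network in which a share of the agents are anti-conformists. It is a sketch under stated assumptions: the synchronous update rule, the parameter names (n, p_anti, T), and the uniform-threshold specification are simplifications chosen for illustration, not the paper's exact model.

import random

def step(state, thresholds, is_anti):
    # One synchronous update on a complete network: a conformist becomes
    # active when the current fraction of active agents is at least her
    # threshold; an anti-conformist does the opposite.
    frac = sum(state) / len(state)
    new_state = []
    for theta, anti in zip(thresholds, is_anti):
        conform = 1 if frac >= theta else 0
        new_state.append(1 - conform if anti else conform)
    return new_state

def simulate(n=100, p_anti=0.1, T=50, seed=0):
    rng = random.Random(seed)
    thresholds = [rng.random() for _ in range(n)]        # thresholds drawn uniformly on [0, 1]
    is_anti = [rng.random() < p_anti for _ in range(n)]  # flag a share p_anti of anti-conformists
    state = [rng.randint(0, 1) for _ in range(n)]        # random initial opinions
    history = [sum(state) / n]                           # track the active fraction over time
    for _ in range(T):
        state = step(state, thresholds, is_anti)
        history.append(sum(state) / n)
    return history

print(simulate()[:10])

With p_anti = 0 this reduces to the classical complete-network threshold dynamics of Granovetter (1978); a positive share of anti-conformists can instead produce persistent oscillations rather than convergence, which is one way to see why convergence and cascade effects become the central questions.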
Wuhan University
The folk theorem for repeated games with time-dependent discounting [pdf]
Abstract This paper defines a general framework to study infinitely repeated games with time-dependent discounting, in which we distinguish and discuss both time-consistent and time-inconsistent preferences. To study the long-term properties of repeated games, we introduce an asymptotic condition to characterize the fact that players become more and more patient; that is, the discount factors at all stages uniformly converge to 1. Two types of folk theorems are proven under perfect observation of past actions and without the public randomization assumption: the asymptotic one, i.e., the equilibrium payoff set converges to the individually rational set as players become patient, and the uniform one, i.e., any payoff in the individually rational set is sustained by a single strategy profile that is an approximate subgame perfect Nash equilibrium in all games with sufficiently patient discount factors. As corollaries, our results on time-inconsistency imply the corresponding folk theorems under quasi-hyperbolic discounting.

Yale University
Logical Differencing in Network Formation Models under Non-Transferable Utilities [pdf] (joint work with Wayne Yuan Gao, Sheng Xu)
Abstract This paper considers a semiparametric model of dyadic network formation under nontransferable utilities. Such dyadic links arise frequently in real-world social interactions that require bilateral consent but, by their nature, induce additive non-separability. In our model, we show how two-way fixed effects (corresponding to unobserved individual heterogeneity in sociability) can be canceled out without requiring additivity. The approach uses a new method we call logical differencing. The key idea is to construct an observable event involving the intersection of two mutually exclusive restrictions on the fixed effects, where these restrictions are obtained by taking the logical contraposition of multivariate monotonicity. Based on this identification strategy, we provide consistent estimates of the network formation model. Finite-sample performance is analyzed in a simulation study. An empirical illustration using the risk-sharing data of Nyakatoke is presented. Motivated by the empirical findings, we discuss how to differentiate homophily from assortativity.

Cowles Foundation
Dynamic Obstruction [pdf] (joint work with German Gieczewski, Christopher Li)
Abstract We study a model of policy experimentation by an incumbent politician who seeks to be reelected but faces the prospect of obstruction by the opposing party. In the main variant of the model, the incumbent initiates a policy reform as early as possible if initial support is moderate, delays its implementation if support is high, and does not attempt it at all if support is low. The prospect of obstruction may dissuade the incumbent from initiating a policy reform, but it does not change the timing conditional on a reform being initiated, and the opposition party ramps up its obstruction as the next election approaches.

McGill University
Comparative Statics of Product Disclosure Statements [pdf] (joint work with Anastasia Burkovskaya, Jian Li)
Abstract Different ways of framing the same information have an impact on final consumer decisions, implying that firms should pay close attention to how product information is presented to the consumer.
This paper investigates the implications of a State Aggregation Subjective Expected Utility (SASEU) agent's behavior for the Product Disclosure Statement (PDS) of an insurance company. An agent is SASEU if she is not neutral to information frames. We analyze the changes in the insurer's profit from aggregating different sections of the current PDS together, and we provide (1) quantitative results in the case when consumer preferences are known, and (2) monotone comparative statics characterized by simple properties of the agent's event aggregation functional.

The Chinese University of Hong Kong
Strategic Post-exam Preference Submission in the School Choice Game [pdf] (joint work with Vladimir Mazalov, Artem Sedakov, Jaimie W. Lien, and Jie Zheng)
Abstract We consider a college admissions problem in which students with heterogeneous abilities and homogeneous preferences take an exam before submitting their college applications. Under an exam-based admissions procedure, students play an application game with each other, knowing their own score while being uncertain about other students' scores. We provide a framework and solve for the equilibria of this game, which are in the class of threshold strategies with respect to one's own score. In some situations, a result can occur in equilibrium such that students with mediocre performance apply to and are accepted at better colleges than their higher-performing peers. This can be understood as a type of bluffing strategy over one's exam score, under which other students avoid the better college out of fear of an undesirable admissions outcome. Such strategies may result in socially inefficient matches between students and colleges.

Colgate University
Optimal Information Design for Reputation Building [pdf]
Abstract Conventional wisdom holds that ratings and review platforms serve consumers best when they reveal the maximum amount of information to consumers at all times. This paper shows, within a stylized model, how this may not be true. The channel is that partial information may incentivize reputation-minded firms to invest more in quality. Committing to publish all reviews can lead to a "cold start problem," where there is a failure to attract early adopters, thereby shutting down the source of information. To find a solution to this problem, I use a dynamic Bayesian persuasion model in which a long-run firm with a persistent type interacts with a sequence of short-run consumers. When the platform designs the public information policy to maximize total consumer welfare, there is a policy with three phases that converges to the optimum as reviews become frequent. In the first phase, the platform reveals reviews with an interior probability, and consumers learn about the firm. In the second phase, consumers observe all reviews, and the firm always produces high quality. Finally, in the third phase, new reviews are hidden entirely, and the firm produces low quality without damage to its reputation. When the designer has weaker commitment power and may revise its policy at a small cost, a repeated three-phase policy is robust to revisions and remains optimal.

Michigan State University
Convention and Coalitions in Repeated Games [pdf] (joint work with Nageeb Ali)
Abstract We develop a theory of repeated interaction for coalitional behavior. We consider stage games where both individuals and coalitions may jointly deviate.
However, coalition members cannot commit to long-run behavior (on and off the path), and they are farsighted in that they recognize that today's actions influence tomorrow's behavior. We evaluate the degree to which history dependence of this form can ward off coalitional deviations. If monitoring is perfect, every feasible and strictly individually rational payoff can be supported by history-dependent conventions. By contrast, if players can make secret side-payments to each other, every coalition achieves a coalitional minimax value.

South University of Science & Technology
Multipartite Games And Evolutionary Stable Matching [pdf]
Abstract In the matching theory of Gale and Shapley, every bipartite matching game has a stable matching, but a game beyond the bipartite case may not. In this paper, we reshape the universality of stable matching for multipartite games by generalizing the matching game so that its players can be either partite agents or their coalitions. A dynamics of matching can be developed by introducing a series of matchings in different generations. The dynamics introduces a notion of evolutionarily stable matching, with matchings refined by stabilization in each generation of the dynamics. In this dynamic theory of matching, every matching game can have an evolutionarily stable matching.

University of Michigan
Robust Predictions in Bargaining with Incomplete Information [pdf]
Abstract This paper studies robust predictions when players may have additional private information that is unknown to an outside analyst, in an otherwise standard Coasian bargaining model between a seller and a buyer with private values. The robust predictions in the frequent-offer limit depend crucially on the type of information that players may have: (i) when the seller has additional information about the buyer's value and this fact is common knowledge between the players, the limiting equilibrium outcomes are always efficient and any surplus division between the seller and the buyer is possible; (ii) when the buyer is uncertain about the seller's information, any feasible and individually rational payoff vector can arise as the limiting equilibrium payoffs. The results have direct policy implications regarding markets for information and privacy.

Toulouse School of Economics
Learning while Trading: Experimentation and Coasean Dynamics [pdf]
Abstract I study dynamic bilateral bargaining with one-sided incomplete information when superior outside opportunities may arrive during negotiations. Gains from trade are ex ante uncertain: in a good-match market environment, outside opportunities are not available; in a bad-match market environment, superior outside opportunities stochastically arrive for either or both parties. The two parties begin their negotiations with the same belief about the type of the market environment. Arrivals are public, and learning about the market environment is common. One party, the seller, makes price offers at every instant to the other party, the buyer. The seller has no commitment power, and the buyer is privately informed about his own valuation. This gives rise to rich bargaining dynamics. In equilibrium, there is either an initial period with no trade, or trade starts with a burst. Afterward, the seller screens out buyer types one by one as uncertainty about the market environment unravels. Delay is always present, but it is inefficient only if valuations are interdependent.
Whether prices increase or decrease over time depends on which party has the higher option value of waiting to learn. When the seller can clear the market in finite time at a positive price, prices are higher than the competitive price. This, however, need not be at odds with efficiency. Applications include durable-good monopoly without commitment, wage bargaining in markets for skilled workers, and takeover negotiations.

National University of Singapore
Optimal Selling Mechanisms with Buyer Price Search [pdf] (joint work with Jingfeng Lu, Zijia Wang)
Abstract We study optimal dynamic selling mechanisms when buyers are initially and privately endowed with their values for the object on sale, and they can conduct costless search for their second-stage outside prices. Buyers' outside prices are independent of their values. With private outside prices, second-stage incentive compatibility requires only semi-monotonicity of the allocation rule, but this is violated by the optimal design under public outside prices; moreover, the off-equilibrium-path second-stage best strategy cannot be pinned down. Thus, the privateness of the second-stage information matters for optimal designs, and deriving the optimal mechanism with private buyer options requires an innovative method. The revenue-maximizing mechanism with private options is established by conducting a modified Myerson convexification procedure to regularize the buyers' virtual values in the dimension of the outside prices. The optimal mechanism requires a non-refundable deposit at the first stage and allocates the object to the buyer with the highest nonnegative regularized virtual value. Other buyers take their outside options if and only if the outside prices are lower than their values. When there is only one buyer, the seller merely offers a first-stage fixed price if the outside price is the buyer's private information; however, if the outside price is public, the seller offers a first-stage fixed price combined with a second-stage price that is matched to the buyer's outside option.

École Polytechnique
Reputation and Social Learning [pdf] (joint work with Ekaterina Logina and Konstantin Shamruk)
Abstract This paper focuses on the interplay between social learning and reputation dynamics. Taking the herding model of Bikhchandani et al. (1992), we endogenize the state of nature as a choice of quality made by the long-run player. We construct a Markov perfect equilibrium in which both cascade regions still exist, and once beliefs are stuck there, incentives to build reputation vanish. Investment in quality follows an inverse U-shaped pattern: depending on whether the public belief is tilted in favor of or against the long-run player, current trust may either destroy or boost subsequent reputation building. Randomization on behalf of the long-run player tends to slow down social learning, but the public belief eventually reaches one of the two cascade regions. Unlike in the canonical information cascades setup, greater private signal precision may reduce the players' responsiveness to their information.

University of Warwick
Banking Competition and Stability: The Role of Leverage [pdf] (joint work with Xavier Freixas)
Abstract This paper re-examines the classical issue of the possible trade-offs between banking competition and financial stability, by highlighting the role of bank leverage.
We show that when loan market competition reduces entrepreneurs' moral hazard and loan portfolio risks, a bank's insolvency risk increases only if its leverage is sufficiently high. When bank leverage is endogenous, the relationship between competition and stability crucially depends on the financial safety net subsidies that reduce the cost of banks' debt and increase their leverage. Our analysis helps to reconcile seemingly contradictory empirical results on the issue and generates new testable hypotheses.

Interdisciplinary Center (IDC), Herzliya
Attacking a nuclear facility with a noisy intelligence and Bayesian agents [pdf] (joint work with Yair Tauman)
Abstract We study the role of noisy intelligence in a game between two Bayesian rival countries. Player 1 (he) wishes to develop a nuclear bomb. Player 2 (she) aims to prevent him from building it by attacking his facilities. Player 1 is asked to open his facility for inspection. If he does not possess the bomb, he can avoid Player 2's potential attack by opening his facility to reveal this. Player 1 incurs a cost for allowing inspections. Player 1's strategies are: B (to build the bomb), NBO (not to build the bomb and open for inspection), or NB (not to build and not open). If Player 1 refuses to open, Player 2 can either attack or not attack. If Player 1 opens, Player 2 will not attack. Player 2 operates an intelligence system (IS) to spy on Player 1. The IS sends either signal b or nb, meaning that Player 1 has the bomb or not, respectively. The precision of the IS is alpha, with 1/2 < alpha < 1. It is shown that there exists a unique perfect Bayesian equilibrium with the following characteristics: (i) there exists a threshold c0 such that a Player 1 with inspection cost below c0 chooses in equilibrium to open his facility for inspection; if the cost exceeds c0, he mixes the strategies B and NB. (ii) There exists a threshold alpha0 such that Player 1 assigns higher probability to B if his estimate of the IS's precision is below alpha0; in this case, Player 2 ignores the signal and attacks Player 1 when the IS is not too accurate, and follows the signal only when the IS is relatively accurate. (iii) If Player 1's estimate of the IS's precision is above alpha0, Player 1 acts conservatively (assigns lower probability to B); Player 2 in this case ignores the signal and does not attack Player 1 if the IS is not too precise, and follows the signal only if it is relatively accurate.

HEC Paris
Learning in Repeated Routing Games with Symmetric Incomplete Information [pdf] (joint work with Marco Scarsini and Tristan Tomala)
Abstract We consider a model of repeated routing games under symmetric incomplete information with dynamic populations. It consists of a routing game in which costs are determined by an unknown state of the world. At each stage, a demand of random size routes over the network, and equilibrium costs are observed. Our objective is to study how information aggregates according to the equilibrium dynamics and to what extent agents can learn about the state of the world. We define several forms of learning (whether agents eventually learn the state of the world or act as under full information) and present a simple example which shows that in such a framework, with a non-atomic set of players, learning may fail and routing may be inefficient even in the looser sense. This contrasts with the atomic case, in which a folk theorem ensures that players can learn the game parameters. In a non-atomic setup, learning cannot be ensured unless there is an additional source of randomness to incentivize exploration of the network.
We show that this role can be fulfilled by a variable and unbounded demand size. We prove that, under a condition on the network topology and unboundedness of the costs, a variable and unbounded demand is sufficient to ensure learning. This result holds whether or not the state space is finite. We additionally provide examples to show that these conditions are tight. We finally connect our work with the social learning literature and show that if, instead of having a random demand size, costs are observed with unbounded noise, then learning does not occur in the general case unless some limited-recall assumption is made.

PUC Chile
Debt and information aggregation in financial markets [pdf] (joint work with Ana Elisa Pereira)
Abstract We analyze how the capital structure of a firm affects the information revealed by secondary financial markets. Firms use information contained in market activity to guide real investment decisions. We show that, if markets are sufficiently liquid, excessively high or low levels of debt hinder the informativeness of financial markets and reduce firm value through the feedback from prices to real decisions. In this case, an intermediate level of debt maximizes the value of the firm. When markets are illiquid, intermediate levels of debt dilute incentives for trading on information, and extreme levels of debt are often optimal.

Maastricht University
Strategy-proofness and perfect mechanisms [pdf] (joint work with Yu Zhou)
Abstract We introduce the notion of a perfect mechanism---a structured pair consisting of (i) a dynamic perfect-information game form, and (ii) a convention specifying an honest strategy for each player given his type---to establish the existence of socially optimal ex-post perfect equilibria in a large class of dynamic games where the agents may have extremely limited information about the economy (Theorem 1). Applications include marriage markets with dating apps, labor markets with telephones, and online auctions.

Stanford University
Learning Through the Grapevine: The Impact of Message Mutation, Transmission Failure, and Deliberate Bias [pdf] (joint work with Matthew Jackson, Suraj Malladi, David McAdams)
Abstract We examine how well someone learns when information from an original source only reaches them after repeated person-to-person noisy relay (oral or written). We consider three distortions in communication: random mutation of message content, random failure of message transmission, and deliberate biasing of message content. We characterize how many independent chains a learner needs to access in order to learn accurately. With only mutations and transmission failures, there is a sharp threshold such that a receiver fully learns if they have access to more chains than the threshold number, and learns nothing if they have fewer. A receiver learns not only from the content, but also from the number of received messages---which is informative if agents' propensity to relay a message depends on its content. We bound the relative learning that is possible from these two different forms of information. Finally, we show that learning can be completely precluded by the presence of biased agents who deliberately relay their preferred message regardless of what they have heard. Thus, the type of communication distortion determines whether learning is simply difficult or impossible: random mutations and transmission failures can be overcome with sufficiently many sources and chains, while biased agents (unless they can be identified and ignored) cannot.
We show that partial learning can be recovered by limiting the number of contacts to whom an agent can pass along a given message, a policy that some platforms are starting to use.

Stony Brook University
Resource Destruction in Optimal Mechanisms for Bilateral Trade (joint work with Eric Maskin)

University of Wisconsin-Madison
Bayesian Persuasion with Hidden Motives
Abstract Digital media has given people access to vast amounts of information. Much of it is produced by sources whose motives are not clear to the consumer. This lack of transparency affects the way in which people draw inferences from the messages they receive, as well as the value of providing information. I model this as a game between a finite number of senders with a hidden move by nature. Every sender simultaneously chooses a signal and commits to disclosing its message. Then, nature privately chooses which of the signal realizations the receiver gets to observe. Whenever senders partially pool their signals, the receiver is uncertain about the informativeness of the message received. This uncertainty may incentivize a sender to provide more information or to "slack off," depending on the receiver's belief about their type. I characterize a sufficient condition for equilibrium to be (essentially) babbling despite the assumed commitment power, even if there are senders whose payoff function is not concave at the prior. Further, increasing the variety of senders with state-independent preferences always reduces the informativeness of the signals they choose. However, this uncertainty unravels whenever senders can verify their true motives. This has important implications for decentralized information platforms like social media: they can improve the quality of information by verifying the self-reported existence of biases, such as financial sponsorships or political endorsements, without taking a stance on the quality of information directly.

Daito Bunka University
The Shapley Value of the Lower Game for Partially Defined Cooperative Games [pdf] (joint work with Jose M. Zarzuelo)
Abstract The classical approach to cooperative games assumes that the worth of every coalition is known. However, in real-world problems there may be situations in which the amount of information is limited and, consequently, the worths of some coalitions are unknown. The games corresponding to those problems are called partially defined cooperative games and, surprisingly, have not yet received enough attention. Partially defined cooperative games were first studied by Willson (1993). However, this author restricted attention to partially defined games in which, if the worth of a particular coalition is known, then the worths of all coalitions of the same cardinality are also known. Moreover, Willson (1993) proposed and characterized an extension of the Shapley value for partially defined cooperative games. This extended Shapley value coincides with the ordinary Shapley value of a fully defined game in which the coalitions whose worths were known in the original game maintain the same worth, but all other coalitions are assigned a worth of zero, which seems not well justified. Masuya and Inuiguchi (2016) considered partially defined cooperative games that are assumed to be superadditive. Further, it is assumed that at least the worths of the grand coalition and the singleton coalitions are known. They then defined two fully defined games, called the lower game and the upper game, respectively.
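For reference, the Shapley value of a fully defined game v on a player set N with |N| = n is given by the standard textbook formula below (general background, not a result of this paper); the proposal that follows applies it to the lower game just defined:

\[
\varphi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n-|S|-1)!}{n!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr).
\]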
In this work, we propose the Shapley value of the lower game for superadditive partially defined cooperative games. Moreover, we characterize the proposed value using five axioms. Three of them are the well-known axioms of efficiency, symmetry, and covariance. The fourth, called the axiom of fairness, was proposed by Myerson (1980). The fifth axiom is a version of the axiom of coalitional strategic equivalence, which was first considered by Chun (1989) for fully defined games.

University of South Carolina
Fair and Square Contests [pdf]
Abstract What is a fair competition? As a rule, this means that all players operate on a level playing field. In this paper, we address this question for contests. What does it mean for a contest to be fair? Participants exert irreversible efforts to win a prize (sometimes prizes) in a contest. Is the contest fair if participants have the same equilibrium winning probabilities? This may be fair, but what if participants have different prize values? In that case, the same winning probabilities can mean that participants have different expected payoffs in the contest. Is that fair? It seems fair to focus on the expected equilibrium payoffs. We consider Tullock contests with reimbursements and find a special class of contests in which participants get the same expected equilibrium payoffs even if their prize values are different. It turns out that the Sad-Loser contest is the only Tullock contest with reimbursements in which participants receive the same expected equilibrium payoffs.

University of Bonn
Bayesian Persuasion With Costly Information Acquisition [pdf]
Abstract A sender choosing a signal to be disclosed to a receiver can often influence the receiver's actions. Is persuasion harder when the receiver has additional information sources? Does the receiver benefit from having them? We extend Bayesian persuasion to allow the receiver to acquire costly information. The game can be solved as standard Bayesian persuasion under an additional constraint: the receiver never engages in costly learning. The "threat" of learning hurts the sender. However, the outcome can also be worse for the receiver. We further propose a new solution method that does not rely directly on concavification and is also applicable to standard Bayesian persuasion.

Université Paris Diderot, IRIF
Incentives in Popularity-based Random Matching Markets [pdf] (joint work with Hugo Gimbert and Claire Mathieu and Simon Mauras)
Abstract Stable matching in a community consisting of N men and N women is a classical combinatorial problem that has been the subject of intense theoretical and empirical study since its introduction in 1962 in a seminal paper by Gale and Shapley [GS62]. In this paper, we use a probabilistic model, based on the popularity model of Immorlica and Mahdian [IM15], to generate the input preference lists. When popularities are uniform on one side and geometric on the other side (the i-th person has popularity λ^i), we prove that the expected fraction of participants who have more than one stable partner tends to 0. By [IM15], this implies that, in any stable matching mechanism, the best response of a participant is almost surely the truthful strategy; moreover, the induced game has a Nash equilibrium in which, in expectation, almost all strategies are truthful.
When preference lists are arbitrary on the men's side and are generated from geometric popularities on the women's side, we prove that a woman using a non-truthful strategy can improve the rank of her partner in her preference list by at most a constant, in expectation over the preference lists of the women. The proof relies on a decomposition of the matching market into blocks of expected constant size, in which a block contains men of similar popularities. When preference lists are uniform, the expected number of stable pairs is asymptotically equivalent to N ln N [Pit92]; when they are arbitrary on one side and uniform on the other side, the expected number of stable pairs is asymptotically at most N ln N [KMP90]. When preference lists are arbitrary on the men's side and are generated from (general) popularities on the women's side, we prove that the expected number of stable pairs is asymptotically at most N ln N. Thus, adding correlations between preferences via popularities can only decrease the number of stable pairs, asymptotically.

Higher School of Economics
When Should We Care About Privacy? Information Collection in Games (joint work with Arina Nikandrova)
Abstract The amount of information produced every day is staggering, and the Internet makes a lot of this information available almost for free. We argue that free access to information does not guarantee that it is going to be used for making decisions. More precisely, sufficient conditions for cheap payoff-relevant information not to be collected in a symmetric equilibrium are: (1) sufficiently many people have access to this information, and (2) the usefulness of the information to a person depends strongly on other people's actions. Primary examples are elections (where free-riding discourages information collection) and financial markets (where competition is too vigorous). Our conclusion alleviates concerns over making private information available in the public domain: publicity might render information useless, thus effectively protecting sensitive information from prying eyes.

University of Chicago Booth School of Business
Dynamic Project Standards with Adverse Selection
Abstract We study a principal-agent relationship in which the agent has private information about the future profitability of the relationship or a currently operated project, but is biased in favor of continuing the project. The principal retains liquidation rights over the relationship or project and must introduce distortions in the liquidation policy itself in order to elicit the agent's private information. The optimal policy consists of a threshold such that liquidation is triggered if profitability falls below it. When the agent reports a higher growth rate of the project's profitability, the optimal threshold will either decrease over time and approach the principal's first-best level (i.e., the distortions from eliciting the agent's information are temporary) or increase and diverge over time (i.e., liquidation at later times takes place at unboundedly inefficient levels). A simple condition on the relative profitability of the project across agent types tells us whether the distortions are temporary or permanent. These results are robust to the use of transfers (e.g., wage payments), provided that a limited liability condition is respected for the agent. They are also robust to the use of direct auditing methods to assess profitability.
The model provides a tractable way to analyze contractual distortions in the presence of private information and, in particular, shows that contracts can be simultaneously front- and back-loaded across a menu of options in the same principal-agent relationship. Brigham Young University Polarization and Pandering in Common Interest Elections    [pdf] Abstract This paper analyzes candidate positioning in common interest elections, meaning that voter differences reflect private estimates of what is best for society, not idiosyncratic tastes. Centrist candidates have a competitive advantage, but may be bad for welfare. An extreme candidate can still win if truth is on her side, though, so for a variety of model specifications, candidates polarize in equilibrium, even when each wants very badly to win. Indian Institute of Technology Kanpur Toward Controlling Discrimination in Online Ad Auctions    [pdf] (joint work with L. Elisa Celis, Nisheeth K. Vishnoi) Abstract Online advertising platforms are thriving due to the customizable audiences they offer advertisers. However, recent studies show that the audience an ad gets shown to can be discriminatory with respect to sensitive attributes such as gender or ethnicity, inadvertently crossing ethical and/or legal boundaries. To prevent this, we propose a constrained ad auction framework that allows the platform to control the fraction of each sensitive type an advertiser's ad gets shown to while maximizing its revenues. Building upon Myerson's classic work, we first present an optimal auction mechanism for a large class of fairness constraints. Finding the parameters of this optimal auction, however, turns out to be a non-convex problem. We show how this non-convex problem can be reformulated as a more structured non-convex problem with no saddle points or local maxima, allowing us to develop a gradient-descent-based algorithm to solve it. Our empirical results on the A1 Yahoo! dataset demonstrate that our algorithm can obtain uniform coverage across different user attributes for each advertiser at a minor loss to the revenue of the platform and a small change in the total number of advertisements each advertiser shows on the platform. Australian National University On the existence of equilibrium in Bayesian games without complementarities    [pdf] (joint work with Rabee Tourky) Abstract In a recent paper, Reny (2011) generalized the results of Athey (2001) and McAdams (2003) on the existence of monotone strategy equilibrium in Bayesian games. Though the generalization is subtle, Reny introduces far-reaching new techniques applying the fixed point theorem of Eilenberg and Montgomery (1946, Theorem 5). This is done by showing that with atomless type spaces the set of monotone functions is an absolute retract, and when the values of the best response correspondence are non-empty sub-semilattices of monotone functions, they too are absolute retracts. In this paper we provide an extensive generalization of Reny (2011), McAdams (2003), and Athey (2001). We study the problem of existence of Bayesian equilibrium in pure strategies for a given partially ordered compact subset of strategies. The ordering need not be a semilattice and these strategies need not be monotone. The main innovation is the interplay between the homotopy structures of the order complexes that are the subject of the celebrated work of Quillen (1978) and the hulling of partially ordered sets, an innovation that extends the properties of Reny's semilattices to the non-lattice setting.
We also describe some auctions that illustrate how this framework can be applied to generalize the existing results and extend the class of models for which we can establish existence of equilibrium. As with Reny (2011), our proof utilizes the fixed point theorem of Eilenberg and Montgomery (1946). Hebrew University of Jerusalem Screening Inattentive Agents    [pdf] Abstract An important aspect of mechanism design problems is the information to which the agents involved have access. A potential complication is that this information may endogenously depend on which options they are offered. I model this by considering an optimal mechanism design problem in which a principal screens an agent with an uncertain value. The agent is inattentive regarding their true value and decides how to optimally acquire information about it in response to the offered mechanism. I show that the optimal mechanism is characterized by a non-participation belief, which in turn determines the contour of possible beliefs and transfers for every possible probability of allocation (including those not used in the mechanism). For every possible non-participation belief, the mechanism design problem then reduces to one of Bayesian persuasion. The optimal mechanism is then implicitly determined by choosing the optimal non-participation belief. Copenhagen Business School Robust Information Aggregation Through Voting    [pdf] (joint work with Tomás Rodríguez Barraquer and Justin Valasek) Abstract Numerous theoretical studies have shown that information aggregation through voting is often fragile: since the probability that any agent's vote influences the committee's decision becomes arbitrarily small in a large committee, voting behavior is very sensitive to the payoff structure. For example, when agents face payoffs that condition on their individual vote, these vote-contingent payoffs, no matter how small, can drive voting behavior in large committees. We consider a general model of voting in large committees with vote-contingent payoffs and characterize the set of payoff vectors k that support equilibria that aggregate information in a robust way, robust in the sense that all payoff vectors sufficiently close to k must also support equilibria that aggregate information. Furthermore, we characterize the payoff vectors under which robust information aggregation is the unique equilibrium outcome. We find that robust information aggregation depends only on the ratio of relative payoffs agents receive for voting for the ex-post correct option given that the committee also selects the correct option. However, the uniqueness of the equilibrium that aggregates information depends on payoffs when the committee selects the incorrect option; agents must be punished for voting with the majority side when the committee chooses the incorrect option. MIT Media Capture: A Bayesian Persuasion Perspective    [pdf] (joint work with Arda Gitmez and Pooya Molavi) Sao Paulo School of Economics - FGV Information Design with Recommender Systems    [pdf] (joint work with Caio Lorecchio) Abstract An uninformed long-run sender restricted to public communication rules, such as recommender systems and rating systems, faces a sequence of short-lived receivers. Each receiver must decide whether or not to invest in a project of fixed, but unknown, quality. The sender seeks to maximize investment but is uninformed about the project's quality, and learning must be elicited through experimentation by the receivers.
We show that the optimal rule is a simple recommendation (two ratings). In contrast, if learning is independent of the agents' actions, the designer's payoff increases with the number of ratings. We provide conditions under which simple rules approximate the Bayesian persuasion payoff. Universidad Pablo de Olavide Optimal Management of Evolving Hierarchies    [pdf] (joint work with Jens Leth Hougaard; Juan D. Moreno-Ternero; Lars Peter Østerdal) Abstract We study the optimal management of evolving hierarchies, which abound in real-life phenomena. An initiator invests into finding a subordinate, who will bring revenues to the joint venture and who will invest herself into finding another subordinate, and so on. The higher the individual investment (which is private information), the higher the probability of finding a subordinate. A transfer scheme specifies how revenues are reallocated, via upward transfers, as the hierarchy evolves. Each transfer scheme induces a game in which agents decide their investment choices. We consider two optimality notions for schemes: initiator-optimal and socially-optimal schemes. We show that the former are schemes imposing on each member a full transfer to two recipients (the predecessor and the initiator) with a constant ratio among the transfers. We show that the latter are schemes imposing full transfers to the immediate predecessors. US Army Noisy and Silent Games of Timing with Detection Uncertainty and Numerical Estimates    [pdf] (joint work with David B. Bednarz, Nicholas A. Krupansky, Bernhard von Stengel) Abstract In previous work, Bednarz (2016), we described the interactions between a mobility player, who is trying to maximize the chances that he makes it from point A to point B with one chance to refuel, and a terrain player who is trying to minimize that probability by placing an obstacle somewhere along the path from A to B. This relates to the literature on games of timing. In this paper, we generalize the game of timing studied previously to include the possibility that the players' actions are known to their adversary. In other words, we examine both noisy and silent versions of the game. In addition, one player may have an imperfect ability to detect their adversary. This situation is known as detection uncertainty and was first studied in Sweat (1971). Here, we extend those results to compare noisy and silent versions of this game of timing with detection uncertainty and obtain numerical estimates of the optimal strategies using the sequence form (von Stengel, 1996). Northwestern University Social Value of Information in Networked Economies    [pdf] Abstract This paper studies the social value of information in economies with heterogeneous interactions. Agents play a coordination game facing a trade-off in optimizing their actions with respect to an unknown state and to other agents' actions. The benefits from coordination can vary across agents and are described by an interaction matrix whose (i, j)-th entry measures i's coordination motive with j. Agents receive a private and a public signal about the state. We characterize a unique equilibrium of this game via the Katz-Bonacich centrality defined on the interaction matrix, and show that the relative weight an agent places on the public signal is strictly increasing in his centrality. Using this characterization, we provide two different insights on the value of information. First, we generalize the Morris and Shin (2002) anti-transparency result: in the beauty contest model, more public information can be detrimental to welfare if and only if the Katz-Bonacich centrality vector is sufficiently large. Second, we study the heterogeneity in the value of information, and show that more private (public) information can hurt agents who have small (large) Katz-Bonacich centralities but benefit others. Finally, we also extend our model to incorporate a semipublic signal and anti-coordination motives.
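As a computational aside to the Northwestern abstract above: the Katz-Bonacich centrality vector solves b = (I - phi*G)^{-1} 1, which is well defined whenever the decay factor phi is below the reciprocal of G's spectral radius. A minimal sketch, with an invented 3-agent interaction matrix and decay value (neither is taken from the paper):

```python
import numpy as np

# Interaction matrix: entry (i, j) measures agent i's motive to
# coordinate with agent j. The values here are purely illustrative.
G = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])

phi = 0.3  # decay factor; must satisfy phi < 1 / spectral_radius(G)
assert phi * max(abs(np.linalg.eigvals(G))) < 1, "walk series does not converge"

# Katz-Bonacich centrality: b = (I - phi*G)^{-1} @ 1,
# a discounted count of all walks emanating from each agent.
n = G.shape[0]
b = np.linalg.solve(np.eye(n) - phi * G, np.ones(n))
print(b)  # agent 0 is most central; per the abstract, such an agent
          # places the largest relative weight on the public signal
```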
Brown University Determinants of the College Early Admissions Market Configuration    [pdf] Abstract Most of the top private selective colleges in the US offer early admission programs. Two formats are predominant: Restrictive Early Action (REA) and Early Decision (ED). Both programs allow students to apply to only one college and receive an official admission decision before the regular admissions process. REA and ED differ in that the former does not convey a binding enrollment commitment from the student upon admission, allowing her to apply in the regular process to other colleges, while the latter forces the student to enroll if admitted early. We construct a college admissions model that allows for the endogenous decision of which type of early program to offer. The model can explain the relationship between some stylized facts about the market, taking some as assumptions and some as consequences: early applicants are wealthier, on average, and are more likely to be admitted than regular applicants. Also, the colleges that offer REA are disproportionately less budget-constrained and more capacity-constrained, have traits that are more attractive for students, are more popular in the application process, and are more selective. In the model, if early applicants are wealthier, the benefit of comparing financial offers is high, college A is likely to overbid college B in aid, and college A is capacity-constrained while college B is budget-constrained, then there exists an equilibrium where college A offers REA and college B offers ED. Under such an early market configuration, a profitable wealth-screening device arises for both colleges: students with a high financial need benefit the most from comparing financial offers and thus, due to its commitment nature, they are unlikely to apply early to an ED program if there is a REA program available. In this situation, college B benefits from attracting and capturing a relatively wealthier population, while college A benefits from attracting high-quality, high-financial-need students who avoid applying early to B. ETH Zurich Feedback effects in the experimental double auction with private information    [pdf] (joint work with Nunez Duran, Pradelski) Abstract Controlled laboratory and online experiments in economics have confirmed that the continuous double auction for nondurables rapidly approximates competitive equilibrium under private information. Interestingly, this convergence regularly occurs asymmetrically, through rising prices. Here, we stress-test this finding by varying fundamental constituents of the market institution (the price rule, market asymmetry, and equilibrium structure), with particular focus on the role of order-book feedback, that is, which parts of the order book (i.e., bids, asks, realized prices) are available to market participants.
We provide an empirical foundation for convergence with asymmetries, even in markets that are markedly set up against it in terms of equilibrium structure and lack of feedback. Stanford University A Systematic Test of the Independence Axiom    [pdf] (joint work with Ritesh Jain) Abstract We investigate the Independence Axiom, a central tenet of expected utility theory. We design a lab experiment to test this axiom on the entire probability simplex. This method allows us to study both the certainty effect and the reverse certainty effect. Our results suggest that the Independence Axiom is violated systematically across the entire simplex, but violations are much more common in the direction opposite the conventional "certainty effect." The nature of the violations is more consistent with the reverse certainty effect than with the accepted experimental evidence on the certainty effect. Our experiment contributes to the existing literature by studying the Independence Axiom on the entire simplex and is one of the first to document the prevalence of the reverse certainty effect. Our results also inform game-theoretic models of how individuals maximize utility under risk. University of South Carolina Asymmetric Contests and the Effects of a Cap on Bids (joint work with Alexander Matros) Abstract We study asymmetric all-pay auctions where the prize has the same value for all players, but players might have different cost functions. We provide sufficient conditions for existence and uniqueness of the conventional mixed-strategy equilibrium when the cost functions are right-continuous. Applications to all-pay auctions with various caps placed on the bids are discussed. We also discuss how different types of caps placed on bids affect the revenue for the seller. Nazarbayev University Harmful Screening in Competitive Markets    [pdf] (joint work with Irina Kirysheva) Abstract We consider a model where competitive firms commit to prices and screen consumers. Surprisingly, we find that while screening allows firms to avoid inefficient trades, it results in excessive rejections and can reduce welfare. We characterize market equilibria and show that inefficiencies arise when there are few firms and the social value of screening is low. Yale Aiming for the goal: contribution dynamics of crowdfunding    [pdf] (joint work with Joyee Deb, Kevin R. Williams) University of the Basque Country Characterization of efficient networks in a connections model with decreasing returns technology    [pdf] (joint work with Federico Valenciano) Abstract We consider a network-formation model where the strength or quality of a link depends on the amount invested in it and is determined by a link-formation technology, i.e., an increasing, differentiable, and strictly concave function which is the only exogenous ingredient in the model. The revenue from investments in links is the information that the nodes receive through the network. The structures of the efficient networks are characterized. University Paris Dauphine A solution for stochastic games    [pdf] (joint work with Luc Attia and Miquel Oliu-Barton) Abstract "Stochastic games have a value" was the five-word abstract chosen by Mertens and Neyman (1981) to announce the existence of the uniform value for stochastic games, a model introduced by Shapley (1953) as an extension of both matrix games and Markov decision processes. Their result was a major accomplishment, as it provided a very robust notion of solution for stochastic games. Since then, the problem of characterizing the value has remained unsolved. The main contribution of this paper is to settle this question, based on a reduction of stochastic games to matrix games depending on a real parameter. Northwestern University A result on convergence of sequences of iteration, with applications to best-response dynamics    [pdf] (joint work with Wojciech Olszewski) Abstract The result that the sequence of iterations x_{k+1} = f(x_k) converges if f : [0, 1] → [0, 1] is an increasing function has numerous applications in elementary economic analysis. I generalize this simple result to some mappings f : S ⊆ [0, 1]^n → S. The applications of this result include, but are not limited to, the convergence of best-response dynamics in the general version of the Crawford and Sobel (1982) model.
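The one-dimensional result cited in the abstract above can be seen at work in a few lines of code. A minimal sketch, with an arbitrary increasing map f chosen purely for illustration: because f is increasing, the iterates form a monotone bounded sequence, so they converge from any starting point.

```python
def iterate(f, x0, tol=1e-12, max_iter=10_000):
    """Iterate x_{k+1} = f(x_k); for an increasing f on [0, 1] the
    sequence is monotone and bounded, hence it converges."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# An arbitrary increasing map on [0, 1], e.g. a smoothed best response.
f = lambda x: 0.25 + 0.5 * x ** 0.5

print(iterate(f, 0.0))  # increases monotonically to the fixed point ~0.6545
print(iterate(f, 1.0))  # decreases monotonically to the same limit
```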
Northwestern University Equilibrium Existence in Games with Ties    [pdf] (joint work with Wojciech Olszewski and Ron Siegel) Abstract We prove the existence of equilibria for a class of games with discontinuous payoffs. Our class of games includes: (a) a general version of all-pay contests, (b) first-price auctions with interdependent values, and (c) Hotelling models with incomplete information. University of California, Santa Barbara Computing Optimal Taxes in Atomic Congestion Games    [pdf] (joint work with Rahul Chandan, Dario Paccagnan, Bryce L. Ferguson, Jason R. Marden) Abstract While selfish behaviour often results in sub-optimal system operation, taxation mechanisms have been proposed to improve the overall efficiency of the system. In this work we focus on the class of atomic congestion games and show how to compute taxation mechanisms that optimize the resulting worst-case efficiency while being robust against network modifications (network-agnostic). Specifically, we first show how to determine the price of anarchy of a given network-agnostic taxation mechanism through the solution of a tractable linear program. Second, we prove that optimal network-agnostic taxation mechanisms are linear maps from the set of latency functions to the set of tolls. Finally, we leverage these results to compute optimal network-agnostic taxation mechanisms and accompany them with a corresponding price-of-anarchy certificate. Our solution differs from those existing in the literature in that the optimal taxes are determined without any information about the specific game instance at hand, significantly reducing the computational burden. At the same time, their performance is almost identical to that of taxes derived with full information. City University of New York To What Extent is a Group an Individual?    [pdf] Abstract We consider the issue of regarding a group as an agent. In his book The Intentional Stance, Daniel Dennett considers the issue of entities which can be regarded as agents. These are entities whose behavior we are able to predict (somewhat) by asking: What does it want? What does it know? What is it able to do? Such questions are already difficult when we are dealing with other human beings. They become trickier when we are dealing with a group as an agent and face difficult questions of defining its wishes and its possible actions. We start by pointing out that a set like Democrats or Muslims does not satisfy the requisite conditions, at least not fully, and we point to some insights.
MIT Graphon games: A statistical framework for network games and interventions    [pdf] (joint work with Francesca Parise, Asuman Ozdaglar) Abstract In this paper, we introduce the new class of graphon games to describe strategic behavior in heterogeneous populations of infinite size. As a first contribution, we investigate properties of the Nash equilibrium of this newly defined class of games, including existence, uniqueness, and comparative statics. As a second contribution, we illustrate how graphon games can be used to approximate strategic behavior in sampled network games, which are games where players interact according to a network that is randomly sampled from the graphon, and we derive precise bounds for the distance between graphon and sampled network game equilibria in terms of the population size. As a third contribution, we show that it is possible to design almost optimal interventions for sampled network games by relying on the graphon model. This procedure results in simple intervention policies that are robust to stochastic variations and can be applied to multiple network realizations. EIEF Robust Predictions in Dynamic Policy Games    [pdf] (joint work with Juan Pablo Xandri) Abstract Dynamic policy games feature a wide range of equilibria. This paper provides a methodology for obtaining robust predictions. We begin by focusing on a model of sovereign debt, although our methodology applies to other settings, such as models of monetary policy or capital taxation. The main result of the paper is a characterization of outcomes that are consistent with a subgame perfect equilibrium conditional on the observed history. Our methodology provides observable implications common across all equilibria, which we illustrate by characterizing, conditional on an observed history, the set of all possible continuation prices of debt and comparative statics for this set; by computing bounds on the maximum probability of a crisis; and by obtaining bounds on means and variances. In addition, we propose a general dynamic policy game and show how our main result can be extended to this general environment. Tel Aviv University Bilateral Trade With a Benevolent Intermediary    [pdf] (joint work with Ran Eilat) Abstract We study intermediaries who seek to maximize gains from trade in bilateral negotiations. Intermediaries are players: they cannot commit to act against their objective function and deny some trades they believe to be beneficial, a commitment that mechanisms use to achieve ex-ante optimality. The intermediation game is equivalent to a mechanism design problem with an additional "credibility" constraint, requiring that every outcome be interim-optimal, conditional on available information. Consequently, an interesting information trade-off arises, whereby acquiring fine information makes the trading decision more responsive to the parties' valuations, while coarse information allows more flexibility to credibly deny beneficial trades. We investigate how such intermediaries communicate with the parties and make decisions, and derive some properties of optimal intermediaries. Northwestern University Trust and Betrayals: Reputational Payoffs and Behaviors without Commitment    [pdf] Abstract I introduce a reputation model in which all types of the reputation-building player are rational and face lack-of-commitment problems.
I study a repeated trust game in which a patient player (e.g., a seller) wishes to win the trust of some myopic opponents (e.g., buyers) but can strictly benefit from betraying them. Her benefit from betrayal is her persistent private information. I provide a tractable formula for the highest equilibrium payoff for every type of the patient player. Interestingly, incomplete information affects this payoff only through the lowest benefit in the support of the prior belief. In every equilibrium that attains this highest payoff, the patient player's behavior depends nontrivially on past play. I establish bounds on her long-run action frequencies that apply to all of her equilibrium best replies. These features of her behavior are essential for her to extract information rent while preserving her informational advantage. I construct a class of such high-payoff equilibria in which the patient player's reputation depends only on the number of times she has betrayed and the number of times she has been trustworthy in the past. This captures some realistic features of online rating systems. Stanford University Revenue maximization with heterogeneous discounting: Auctions and pricing (joint work with Jose Correa, Juan Escobar) Abstract We characterize the revenue-maximizing mechanism in an environment with private valuations and asymmetric discount factors. The optimal mechanism combines auctions, to encourage competition, and dynamic pricing, to screen buyers' valuations. When buyers are ex-ante symmetric and the seller is more patient than the buyers, the optimal mechanism takes a remarkably simple form. The seller runs a modified second-price auction and allocates the item to the highest-bidding buyer if and only if the second-highest bid exceeds the reserve price. The winning buyer pays the second-highest bid. If the item is not sold in the auction, the seller posts a price path that depends on the second-highest bid. The item is then allocated to the highest-bidding buyer at a strictly positive time. Our results imply that, for a patient seller, auctions and pricing schemes are complements, and they caution against the presumption that it is ex-ante optimal to commit not to trade when an auction fails. BME Objective ambiguity    [pdf] Abstract The possibility of gaining advantage in strategic situations by using ambiguity is well documented in the literature. However, so far no method or procedure is known for generating objective ambiguity; that is, no "coin toss" is known that produces ambiguous outcomes. In this paper we introduce a procedure which, like coin tossing in the case of probability distributions, can generate objective ambiguity. The procedure is based on the random set approach to ambiguity. Universidad de Chile Bounding the Value of Observability in a Dynamic Pricing Problem    [pdf] (joint work with Jose Correa, Gustavo Vulcano) Abstract Research on dynamic pricing has been growing during the last four decades, largely due to its use in practice by a variety of companies as well as the several model variants that can be considered. In this work, we consider the particular pricing problem where a firm wants to sell one item to a single buyer in order to maximize her expected revenue. The firm pre-commits to the price function over an infinite horizon. The buyer has a private value for the item and purchases at the time when his utility is maximized.
In our model, the buyer is more impatient than the seller, and we study how important it is to observe the buyer's arrival time in terms of the seller's expected revenue. We prove that, in a very general setting, the expected revenue when the seller observes the buyer's arrival is at most roughly 3.6 times the expected revenue when the seller does not know the time when the buyer arrives. Argyros School of Business and Economics, Chapman University Innovation, Diffusion and Shelving    [pdf] (joint work with Swapnendu Banerjee and Monalisa Ghosh) Abstract In an oligopoly model with an outside innovator and two asymmetric licensees, we consider a story of technology transfer of a cost-reducing innovation. While the innovation reduces the cost of the inefficient firm only, we explore the strategic incentives of the efficient firm to acquire the technology. We find situations where the efficient firm acquires the technology but shelves it, and situations where it does not shelve it and further licenses it to the inefficient firm. We examine the impact of technological diffusion (or no diffusion) from innovation on consumer welfare and industry profits. We also find the optimal mode of technology transfer for the innovator in this environment. Carnegie Mellon University Accommodating Cardinal, Ordinal and Mixed Preferences: An Extended Preference Domain for the Assignment Problem    [pdf] Abstract We extend the preference domain of the assignment problem to accommodate ordinal, cardinal, and mixed preferences together and thereby allow the mechanism designer to elicit different levels of information about individuals' preferences. Our domain contains preferences over lotteries which are monotonic, continuous, and satisfy the independence axiom. The stochastic dominance order is the coarsest element of this domain, while a vNM preference order is a finest element, according to a natural coarseness relation on preferences over lotteries. We characterize this domain in terms of consistent vNM preferences and propose a preference reporting language that enables agents to report preferences from this domain. We then extend the pseudo-market mechanism of Hylland and Zeckhauser (1979) to this domain and show that the family of pseudo-market mechanisms is efficient and weakly envy-free while failing strategy-proofness. We show that the impossibility results of Zhou (1990) and Bogomolnaia and Moulin (2001), concerning the incompatibility of incentive compatibility and efficiency for cardinal and ordinal preferences respectively, apply to our domain as well. brpowers@asu.edu An Analysis of Dual Issue Final-Offer Arbitration    [pdf] Abstract We discuss final-offer arbitration where two quantitative issues are in dispute and model it as a zero-sum game. Under reasonable assumptions we derive a pure strategy pair and show that it is both a local equilibrium and the unique global equilibrium. brpowers@asu.edu N-Player Final-Offer Arbitration: Harmonic Numbers in Equilibrium    [pdf] Abstract We consider how a mechanism of final-offer arbitration may be applied to a negotiation between N players attempting to split a unit of wealth. The game is modeled with a judge who chooses a fair split from a Dirichlet distribution. For the case of a uniform probability distribution, the equilibrium strategy is found as a function of the harmonic numbers.
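The two primitives named in the arbitration abstract above are simple to make concrete. A minimal sketch, with an illustrative player count and a symmetric Dirichlet parameter of my own choosing: it draws a judge's fair split and computes the harmonic numbers H_n = 1 + 1/2 + ... + 1/n in which the equilibrium is expressed (the equilibrium formula itself is in the paper and not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(0)

N = 4                     # number of players (illustrative)
alpha = np.ones(N)        # symmetric Dirichlet: a "uniform" notion of fairness
fair_split = rng.dirichlet(alpha)    # judge's fair division of the unit of wealth
print(fair_split, fair_split.sum())  # nonnegative shares summing to 1

def harmonic(n: int) -> float:
    """H_n = sum_{k=1}^{n} 1/k, the quantity in which the N-player
    equilibrium offers are expressed."""
    return sum(1.0 / k for k in range(1, n + 1))

print([harmonic(n) for n in range(1, N + 1)])  # 1.0, 1.5, ~1.833, ~2.083
```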
Yale University Persuasion through a strategic moderator    [pdf] Abstract We study how intermediation affects information disclosure in a strategic information design problem. A sender publicly commits to an editorial policy to persuade a receiver to take a particular action. The communication channel between the sender and the receiver, however, is controlled by a moderator who verifies the realized message and chooses whether to faithfully deliver it. We solve the sender's optimal persuasion problem and examine how it varies with the characteristics of the moderator: (1) the receiver strictly prefers to have a moderator who is biased against the action favored by the sender, (2) but not necessarily a more informed moderator. University of Oxford Contradiction-Proof Information Design    [pdf] (joint work with Ansgar Walther) Abstract We study the role of information design in settings where privately informed parties can additionally make strategic disclosures. A committed persuader and an uncommitted, privately informed sender can disclose hard evidence to a decision-maker. Treating the problem as one of design, we fully characterize the range of informational outcomes that can be obtained in equilibrium by means of a general opacity principle. Using the opacity principle, we establish a solution method for a class of optimal design problems with endogenous disclosures, and compare our solutions to the Bayesian persuasion benchmark without disclosures. For a range of disclosure costs, the presence of voluntary disclosures forces the persuader to provide no less information than the benchmark if the benchmark setting gives high types the greatest incentive to disclose. When intermediate types most want to disclose, optimal persuasion can become less informative than the commitment benchmark. Finally, we apply our results to study optimal financial stress tests, performance reviews, and investment advice. University of Illinois at Chicago Measuring the power of the dominant partner among married couples (joint work with John Hardwick) Abstract Suppose N men and N women, based on personal preferences, select subsets of acceptable partners. We can associate a zero-one matrix where a one in row i, column j means the i-th woman and the j-th man are mutually acceptable to each other. Suppose they are paired and we have a complete matching. After the matching, we want to quantify the dominant person's relative power. We suggest the associated assignment game as a natural model, develop an algorithm to compute the nucleolus, and propose the nucleolus as the measure of relative power. The approach can also be extended to maximal matchings that allow same-sex partners. The algorithm to evaluate the nucleolus exploits combinatorial theorems of Edmonds, Berge, and others. University of Minnesota Contractual Pricing with Incentive Constraints    [pdf] Indian Institute of Management Bodh Gaya, India A New Algorithm for Student-Optimal Matching    [pdf] (joint work with Prabhat Ranjan and Sanjeet Singh) Abstract The college admission problem is a matching problem for colleges offering seats for admission to students. Each college has its own preference (merit) list of students, and each student also has his or her list of preferred colleges. Many stable matchings can be achieved. One of these matchings is college-optimal, and one is student-optimal. These two matchings can be obtained using two variants of the deferred-acceptance (DA) algorithm: college-proposing and student-proposing, respectively. Since the student-proposing DA algorithm obtains the student-optimal matching, it is used by most matching clearinghouses. However, the student-proposing DA algorithm cannot be applied in a scenario where colleges announce their merit lists on different timelines and students can opt out of the market. In the proposed application, post-graduate admission at management institutions in India, institutions announce merit lists at different points in time, and students may exit the market until the start of the course. This paper proposes an algorithm that converts any stable non-student-optimal matching into the student-optimal matching. The combination of the college-proposing DA algorithm and the proposed algorithm can be applied in a continuous manner, allowing institutions to announce results at different points in time and students to opt out of the market. A framework can be developed that allows the clearinghouse to obtain the student-optimal matching after any institution announces its merit list or any student opts out of the market. The proposed framework has dual benefits: it can be applied continuously, which is an advantage of the college-proposing DA algorithm combined with the proposed algorithm, and it obtains the student-optimal solution that is favored by the market.
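For reference, a compact sketch of the classical student-proposing deferred-acceptance procedure that the abstract above takes as its starting point. This is the textbook Gale-Shapley algorithm, not the paper's proposed conversion algorithm, and the toy preference lists are invented for illustration.

```python
def student_proposing_da(student_prefs, college_prefs, capacity):
    """Student-proposing deferred acceptance: returns the student-optimal
    stable matching as {college: set of tentatively admitted students}."""
    rank = {c: {s: i for i, s in enumerate(p)} for c, p in college_prefs.items()}
    next_choice = {s: 0 for s in student_prefs}   # next college on each list
    held = {c: set() for c in college_prefs}      # tentative acceptances
    free = list(student_prefs)
    while free:
        s = free.pop()
        if next_choice[s] >= len(student_prefs[s]):
            continue                               # s has exhausted her list
        c = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].add(s)
        if len(held[c]) > capacity[c]:             # reject the worst-ranked holder
            worst = max(held[c], key=lambda t: rank[c][t])
            held[c].remove(worst)
            free.append(worst)
    return held

# Toy instance: 3 students, 2 colleges with one seat each.
students = {"s1": ["c1", "c2"], "s2": ["c1", "c2"], "s3": ["c2", "c1"]}
colleges = {"c1": ["s2", "s1", "s3"], "c2": ["s1", "s3", "s2"]}
print(student_proposing_da(students, colleges, {"c1": 1, "c2": 1}))
# -> {'c1': {'s2'}, 'c2': {'s1'}}; s3 remains unmatched, and no blocking pair exists
```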
Princeton University Sequencing Naive Social Learning    [pdf] Abstract I extend the DeGroot model to allow for sequential information arrival and show that the sequencing of information affects the final consensus. I identify the optimal and pessimal information release sequences that ensure the highest and lowest attainable consensus, respectively, and in doing so, I reveal that there is room for manipulation of the final consensus. I show that a type of endogenous social forgetting of earlier information arises, wherein the relative weights in the final consensus of signals released earlier are lower than those of more recent signals. I further show that in a large society, where the number of agents goes to infinity, the optimal information release sequence remains to a large extent unchanged, with the lowest signal released in round K higher than the highest signal released in round K-1. Finally, in a large society where the influence of the most influential agent goes to zero, I analyze the robustness of the wisdom of crowds with respect to sequential information release and find that, to a large extent, wisdom fails when information is released sequentially. Rochester Institute of Technology Selling Reputational Information    [pdf] Abstract This paper studies information provision by a third party in a dynamic model of reputation. An intermediary has monopoly access to information about the past behavior of a long-lived firm and commits to a disclosure policy mapping the firm's histories to distributions over a set of signals. The intermediary then sells signals conveying information about the firm to a sequence of one-period-lived agents. The paper characterizes the optimal disclosure policy from the point of view of the intermediary and shows that the intermediary can always attain the payoff from this disclosure policy provided the costs of gathering information are sufficiently small.
The policy chosen in equilibrium is always inefficient; that is, there are alternative policies that generate higher social welfare. School of Business, Stevens Institute of Technology Learning from Failures: Optimal Contract for Experimentation and Production    [pdf] (joint work with Fahad Khalil, Jacques Lawarree) Abstract Before embarking on a project, a principal must often rely on an agent to learn about its profitability. We model this learning as a two-armed bandit problem and highlight the interaction between learning (experimentation) and production. We derive the optimal contract for both experimentation and production when the agent has private information about his efficiency in experimentation. This private information in the experimentation stage generates asymmetric information in the production stage, even though there was no disagreement about the profitability of the project at the outset. The degree of asymmetric information is endogenously determined by the length of the experimentation stage. An optimal contract uses the length of experimentation, the production scale, and the timing of payments to screen the agents. Due to the presence of an optimal production decision after experimentation, we find over-experimentation to be optimal. The asymmetric information generated during experimentation makes over-production optimal. An efficient type is rewarded early, since he is more likely to succeed in experimenting, while an inefficient type is rewarded at the very end of the experimentation stage. This result is robust to the introduction of ex post moral hazard. National Taiwan University Measuring Freedom in Games    [pdf] Abstract The paper axiomatizes a measure of freedom for game-theoretic settings. The central idea of the measure is that freedom is increasing in the degree to which an agent's outcomes are determined by the agent's preferences. The measure is characterized by rational, non-consequentialist preferences of an impartial observer over games endowed with the observer's beliefs over actions. The measure generalizes several measures from the opportunity-set-based freedom literature to situations where agents interact. This allows freedom to be measured in general economic models, so that policy recommendations can be derived based on the freedom instead of the welfare of agents. To illustrate this, optimal libertarian income tax progression policies are derived in a production economy with heterogeneous agents. National Taiwan University Procedural Mixture Spaces    [pdf] Abstract This paper introduces procedural mixture spaces as mixture spaces in which it is not necessarily true that a mixture of two identical elements yields the same element. The following representation theorem is proven: a rational, independent, and continuous preference relation over procedural mixture spaces can be represented either by expected utility plus the Shannon entropy or by expected utility under probability distortions plus the Rényi entropy. The entropy components can be interpreted as the utility or disutility from resolving the mixture and therefore as a procedural rather than consequentialist value. University of Arizona A Revealed Preference Approach to Multidimensional Screening Abstract This paper develops a data-driven approach to multidimensional screening. The principal observes a population of decision makers, each choosing from a finite number of exogenously specified sets of allocations, and her beliefs about the agent's preferences are informed by these data.
In my model, there is a multiplicity of preference distributions that are consistent with the principal's observations. Rather than privilege any one distribution, she evaluates mechanisms by computing their worst-case payoff against the set of distributions that are compatible with the choice data. I show that there are circumstances in which the principal can do better than using a mechanism that recreates one of the choice environments in her data set, even when she knows nothing about the agent's preferences beyond what is implied by the data. More broadly, I allow for arbitrary domains of preferences and identify conditions under which mechanisms that use only allocations that are vertically differentiated from the allocations in the data are optimal. National Defense University Future Combat Air System Pricing    [pdf] Abstract The proposed Future Combat Air System (FCAS) will be an integrated network with individual parts that enable the system to conduct operations and accomplish the mission. The value provided will no longer be so easily attributed to each piece operating independently in support of the larger mission. All the pieces will operate together, share information, and enhance each other's capabilities. Therefore, pricing is no longer straightforward. To find the optimal price-quantity pair, we employ a combination of the economic theories of bilateral monopoly, network pricing, and two-part tariffs, with modifications as necessary. University of Kansas Monotone Global Games    [pdf] (joint work with Eric Hoffmann and Tarun Sabarwal) Abstract We extend the global games method to finite-player, finite-action, monotone games. These games include games with strategic complements, games with strategic substitutes, and arbitrary combinations of the two. Our result is based on common order properties present in both strategic complements and substitutes, the notion of p-dominance, and the use of dominance solvability as the solution concept. In addition to being closer to the original arguments in Carlsson and van Damme (1993), our approach requires fewer additional assumptions. In particular, we require only one dominance region, and no assumptions on state monotonicity, aggregative structure, or overlapping dominance regions. As expected, the p-dominance condition becomes more restrictive as the number of players increases. In cases where the probabilistic burden in belief formation may be reduced, the p-dominance condition may be relaxed as well. We present some examples that are not covered by existing results. University of Guelph Selten's Horse: an Experiment on Sequential Rationality    [pdf] (joint work with Nikolaos Tsakas) Abstract In a seminal paper, Selten (1975) developed the game Selten's Horse to illustrate some aspects of rationality. In our study, we test the equilibrium predictions of Selten's Horse through a laboratory experiment, and we find that most of the behaviour tends toward an outcome that is in stark contrast to the predictions of all existing refinements that adhere to sequential rationality. Some, though not all, behaviour is better explained by the notion of Ideal Reactive Equilibrium (2018), according to which the players behave as if they could remove existing information sets and observe their opponents' actions.
In the presence of multiple equilibria, sequentiality of moves is often considered to provide an advantage to the first mover, in moving towards her most preferred equilibrium. However, we also find strong evidence that players who move last can anticipate such behavior and exploit it by systematically reaching off-equilibrium outcomes that are more favorable to them. Western University Persuading part of an audience    [pdf] Abstract I propose a cheap-talk model in which the sender can use private messages and only cares about persuading a subset of her audience. For example, a candidate only needs to persuade a majority of the electorate in order to win an election. I find that senders can gain credibility by speaking truthfully to some receivers while lying to others. In general settings, the model admits information transmission in equilibrium for some prior beliefs. The sender can approximate her preferred outcome when the fraction of the audience she needs to persuade is sufficiently small. I characterize the sender-optimal equilibrium and the benefit of not having to persuade the whole audience in separable environments. I also analyze different applications and verify that the results are robust to some perturbations of the model, including non-transparent motives as in Crawford and Sobel (1982) and full commitment as in Kamenica and Gentzkow (2011). Urmia University Locating the Sale Agents in Spoke Model through Uniform Distribution of Consumers    [pdf] (joint work with Salah Salimian, Kiumars Shahbazi, Naeimeh Hozouri) Abstract Most manufacturers sell their products through sales agents and do not engage directly with consumers. Therefore, determining the optimal location and optimal number of sales agents is highly significant in their planning. The main objective of this paper is to model sales agents theoretically and to extend location models so that the assumptions are closer to reality and provide the required conditions for selecting the optimal location and optimal number of sales agents. To this end, the spoke model of Chen & Riordan (2007) and Lijesen & Reggiani (2013) has been used. On each street, n consumers are uniformly located. The results show under what conditions the city center is the optimal location of sales agents and when the city margin is the optimal location, and they indicate that the cost of launching sales agents is the main factor in this decision. Moreover, the results show that the optimal number of sales agents is a function of the number of streets, the customers' valuation of each unit of product, the sales agents' price, the number of consumers on each street, the profit earned by the sales agents, and the cost of launching sales agents. Urmia University Location Choice of Firms in an Unequal Length Streets Model: Game Theory Approach (Extension of the Spoke Model)    [pdf] (joint work with Kiumars Shahbazi, Salah Salimian, Naeimeh Hozouri) Abstract Location is one of the key elements in the success and survival of industrial centers and has great impact on reducing the cost of establishing and launching various economic activities. In this study, the unequal-length streets model has been used, a classic extension of the spoke model with an unlimited number of streets of uneven lengths. The results show that the spoke model is a special case of the unequal-length streets model.
According to the results of this study, if the strategy of firms is to select both price and location, there is no equilibrium in the game. Furthermore, increased street length leads to increased firm profits, and as the number of streets increases, firms choose locations that are far from the center (maximum differentiation) and firms' output decreases. Moreover, firm output tends toward zero as the number of streets goes to infinity, and the perfectly competitive outcome is achieved. Urmia University The Expansion of Hotelling Location Model using Triangular Distribution Approach and Types of Consumer (Experienced and Inexperienced)    [pdf] (joint work with Salah Salimian, Kiumars Shahbazi, Jalil Badpeyma, Naeimeh Hozouri) Abstract In this study, optimal location has been analyzed assuming two types of consumers, experienced and inexperienced, distributed with a triangular distribution density function. The results indicate that the demand functions of the two firms depend on the acquired desirability of a certain type of food and the number of experienced consumers, and that unit Nash equilibrium costs are increasing in transportation costs. In addition, with an increase in transportation costs, firm 1 approaches the center and firm 2 moves away from it. Furthermore, if the two firms are located at the same point, they do not demand uniform equilibrium prices, and the price of each firm is more sensitive to the location of the other firm than to its own location. University of Valencia Security in digital markets    [pdf] (joint work with Amparo Urbano) Abstract This paper contributes to the literature on security in digital markets. We analyze a two-period monopoly market in which consumers have privacy concerns. We make three assumptions about privacy: first, that it evolves over time; second, that it has a value that is unknown by all market participants in the first period; and third, that it may affect market participants' willingness to pay for products. The monopolist receives a noisy signal about consumers' average privacy. This signal allows the monopolist to adjust the price in the second period and engage in price discrimination. The monopolist's price in period two acts as a signal to consumers about privacy. This signal, together with consumers' purchase experiences from the first period, determines demand. We address two scenarios: direct investment in security to improve consumers' experiences, and investment in market signal precision. IGIDR Corruption in Multidimensional Procurement Auctions under Asymmetry    [pdf] (joint work with Shivangi Chandel and Shubhro Sarkar) Abstract We examine corruption in first- and second-score procurement auctions in an asymmetric bidder setting. We assume that the auction is delegated to an agent who possesses more information about quality than the procurer and is known to be corrupt with some probability. Using this information asymmetry, the corrupt agent asks for a bribe from one of two bidders and promises to manipulate bids in return. We show that the agent approaches the weaker firm for higher levels of bidder asymmetry in both auction formats. Using a symmetric quasi-linear scoring rule, we show that neither the first- nor the second-score auction implements the optimal mechanism, with or without corruption. Our numerical simulations suggest that the buyer prefers the first-score auction when the stronger firm is approached by the agent in the second-score auction.
If, on the other hand, the weaker firm is favored, the buyer switches to the second-score auction when the probability of corruption is high. Finally, our paper highlights the limited manipulation power of the agent in the second-score auction. LUISS The buck-passing game    [pdf] (joint work with Roberto Cominetti, Matteo Quattropani) Abstract We consider a model where agents want to transfer the responsibility of doing a job to one of their neighbors in a social network; this can be considered a network variation of the public good model. The goal of the agents is to see the buck come back to them as rarely as possible. We frame this situation as a game, called the buck-passing game, where players are the vertices of a directed graph and the strategy space of each player is the set of her out-neighbors. The cost that a player incurs is the expected long-term frequency of times she holds the buck. We consider two versions of the game. In the deterministic one, each player chooses one of her out-neighbors. In the stochastic version, each player chooses a probability vector that determines which of her out-neighbors is chosen. We use the finite improvement property to show that the deterministic buck-passing game admits a pure equilibrium. Under some conditions on the strategy space this is true also for the stochastic version; this is proved by showing the existence of an ordinal potential function. These equilibria are prior-free, that is, they do not depend on the initial distribution according to which the first player holding the buck is chosen.
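In the stochastic version described above, a strategy profile is a row-stochastic matrix P on the graph, and a player's cost is her long-term frequency of holding the buck, i.e., her coordinate in a stationary distribution of P. A minimal sketch, with an invented 3-vertex graph and mixing weights (not taken from the paper):

```python
import numpy as np

# Strategy profile on a directed graph with vertices {0, 1, 2}:
# row i is player i's probability vector over her out-neighbors.
P = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [0.6, 0.4, 0.0]])

# Long-term frequency of holding the buck = stationary distribution pi,
# solving pi P = pi with sum(pi) = 1 (unique here: the chain is irreducible).
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # ~[0.455, 0.318, 0.227]: each player's cost; lower is better
```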
Bar Ilan University, Israel Voluntary Disclosure of Bad News in a Dynamic Model    [pdf] (joint work with Ilan Kremer, Andrzej Skrzypacz and Amnon Schreiber) Abstract We examine a dynamic model of voluntary disclosure of private information. In our model, a manager of a firm who may learn the value of the firm interacts with a competitive capital market and maximizes a weighted sum of the prices in all periods. The value of the firm changes over time. In such a model, the expectation of prices does not depend on the disclosure policy of the firm. Our main result shows that there is a unique equilibrium disclosure policy with the following property: in each period before the last there is a range of values that the manager discloses, and the disclosure of a value in this range results in a price that is lower than the non-disclosure price. Humboldt University Berlin Uncertain Commitment Power in a Durable Good Monopoly    [pdf] Abstract This paper considers dynamic pricing strategies in a durable good monopoly model with uncertain commitment power to set price paths. The type of the monopolist is private information of the firm and not observable to consumers. If commitment to future prices is not possible, the initial price is high in equilibrium, but the firm later falls prey to the Coase conjecture to capture the residual demand. The relative price cut is increasing in the probability of commitment, as buyers anticipate that a steady price is likely and purchase early. Pooling in prices may occur in perpetuity if commitment is sufficiently weak. Pooling in perpetuity is also preserved if committing to a high price is endogenously chosen by the firm. Columbia University A Dynamic Model of Reputation-Driven Media Bias    [pdf] Abstract I study how media bias, specifically that which is driven by reputational concerns, changes over time. To this end, I present a dynamic model of reputation-driven media bias. A firm privately learns about an issue in increments and reports to a consumer with each new piece of information. With each new report, the consumer updates her beliefs about the firm's information quality, i.e., the firm's reputation. Firms are forward-looking and thus take into account both their immediate and future reputations when reporting. Nonetheless, I establish that equilibrium reporting behavior is identical for myopic and forward-looking firms. In equilibrium, firms bias their reports, and this bias is shown to be driven by two separate factors. First, firms can appear more reputable by appealing to a consumer's prior bias (the prior effect). Separately, firms with reports that are more consistent across time are viewed more favorably (the consistency effect). Though static models highlight the prior effect, they do not account for the consistency effect, which changes with time. Furthermore, the relative importance of the consistency effect grows over time as the firm accumulates a richer history of reports. ITAM Sequential Expert Advice: Superiority of Closed Door Meetings    [pdf] (joint work with Parimal Bag) Abstract Two career-concerned experts sequentially give advice to a Bayesian decision maker (D). We find that secrecy dominates transparency, yielding superior decisions for D. Secrecy empowers the expert moving late to be pivotal more often. Further, (i) only secrecy enables the second expert to partially communicate her information and its high precision to D and swing the decision away from the first expert's recommendation; (ii) if experts have high average precision, then the second expert is effective only under secrecy. These results are obtained when experts only recommend decisions. If they also report the quality of advice, a fully revealing equilibrium may exist. Lehigh University Shuffling as a Sales Tactic: An Experimental Study of Selling Expert Advice    [pdf] (joint work with James Dearden, Ernest K. Lai) Abstract This study explores the interaction between a product expert, who offers to sell a product ranking, and an incompletely informed consumer. The consumer considers acquiring the expert's product ranking not only because the expert has superior information about the quality of the products the consumer is considering and knows the consumer's utility function, but also because the expert can directly influence the consumer's utility from a product through the product's rank. There are multiple equilibria in this setting with strategic information transmission: ones in which the expert ranks products in a manner that is consistent with the consumer's pre-ranking utilities, which depend exclusively on the products themselves, and ones in which the expert does not. We design a laboratory experiment to investigate which equilibrium an expert and consumer play. Across the three treatments we examine, which vary by the consumer's possible pre-ranking utilities, we find evidence that product experts are likely to select a ranking methodology that involves considerable uncertainty about the final product ranking, even though doing so involves ranking products in a manner that is inconsistent with consumer pre-ranking utilities. Peking University Screening with Network Externalities    [pdf] (joint work with Yiqing Xing) Abstract Increasingly many products feature "network externalities": the utility of one's consumption increases in one's neighbors' consumption.
Although information about the network structure is important to the seller, it is often privately known to the buyers. We model a monopolist's (constrained) optimal pricing strategy to "screen" buyers' network information: their susceptibility (out-degree) and influence (in-degree). We characterize the optimal allocation both for the case of directed networks, where each buyer's influence and susceptibility are independent, and for the case of undirected networks, where the two are identical. For directed networks, we show the optimal allocation can only depend on a buyer's susceptibility and is linear in the virtual type (of susceptibility) with quadratic intrinsic value. For undirected networks, we disentangle the different effects of influence and susceptibility on the optimal allocation and show that with quadratic intrinsic value, the allocation is a linear combination of a buyer's type and virtual type. We contrast the analysis with two benchmarks, complete information pricing and uniform pricing, to shed light on the value of network information. We also extend the model to accommodate weak affiliation between a buyer's influence and susceptibility, and the situation where influence and susceptibility are endogenous to the optimal allocation. Princeton University Persuasion via Weak Institutions    [pdf] (joint work with Elliot Lipnowski, Doron Ravid) Abstract A sender (S) publicly commissions a study by an institution to persuade a receiver (R). The study consists of a research plan and an official reporting rule. S privately learns the research's outcome, and also whether she can influence the report. Under influenced reporting, S can privately change the report to a message of her choice. Otherwise, the official reporting rule applies. We geometrically characterize S's highest equilibrium value and examine how optimal persuasion varies with the probability that reporting is uninfluenced, S's "credibility." We identify two phenomena: (1) R can strictly benefit from a reduction in S's credibility; and (2) small decreases in credibility often lead to large payoff losses for S, but this typically will not happen when S is almost fully credible. Federal University of Juiz de Fora, Department of Economics Investment Decision Under Inflation Targeting in Emerging Market Economies    [pdf] (joint work with Silvinha Vasconcelos, Claudio R. F. Vasconcelos, Ricardo B. L. M. Oscar) Abstract This article aims to understand under which conditions emerging market economies (EMEs) can reach a high level of investment under inflation targeting regimes. We extend the game proposed by Asako (2017) and introduce a stochastic learning rule through an Agent-based Computational Economics (ACE) model. Entrepreneurs and workers iteratively play an evolutionary game to make investment decisions. Investments are assumed to be complementary. The conditions for successfully guiding the EMEs toward the long-run equilibrium in which all players invest at the target inflation rate are then: (i) investment must be demand-creating innovation; (ii) the Central Bank must be credible regarding the announced inflation target. Our contributions are twofold: (a) a refinement of dynamic equilibrium to determine the level of investment in the economy for a given inflation target; (b) greater accuracy on the proportion of agents willing to invest in both physical and human capital, optimizing the implementation of economic policies.
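As a rough illustration of the kind of agent-based evolutionary dynamic the preceding abstract relies on, the sketch below runs a generic noisy imitate-the-better rule on a binary invest/abstain game with complementary payoffs. The payoff function, population size, and revision rule are placeholders rather than the authors' specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def payoff(actions, share):
    """Stylized complementarity: investing (1) pays off only when a
    large enough share of the population invests; abstaining (0) pays 0."""
    return actions * (2.0 * share - 1.0)

def revise(actions, noise=0.05):
    """One round of a generic 'imitate the better' rule with mutation."""
    pay = payoff(actions, actions.mean())
    new = actions.copy()
    for i in range(len(actions)):
        j = rng.integers(len(actions))   # draw a random role model
        if pay[j] > pay[i]:
            new[i] = actions[j]          # copy the better performer
        if rng.random() < noise:
            new[i] = 1 - new[i]          # occasional experimentation
    return new

actions = (rng.random(500) < 0.6).astype(int)  # 60% initial investors
for _ in range(200):
    actions = revise(actions)
print("long-run share of investors:", actions.mean())
```

Starting above the complementarity threshold, imitation pushes the population toward the all-invest equilibrium, the long-run outcome the abstract's guidance conditions are about.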
University of Zurich, Department of Economics Experts, Quacks and Fortune-Tellers: Dynamic Cheap Talk with Career Concerns    [pdf] (joint work with Egor Starkov) Abstract The paper studies a dynamic communication game in the presence of adverse selection and career concerns. An expert of privately known competence, who cares about his reputation, chooses the timing of the forecast regarding the outcome of some future event. We find that in all equilibria in a sufficiently general class, earlier reports are more credible. Further, any report hurts the expert's reputation in the short run, with later reports incurring larger penalties. The reputation of a silent expert, on the other hand, gradually improves over time. Humboldt University Berlin Efficient Design With Small Informational Size and Maxmin Agents    [pdf] Abstract We study efficient implementation in general mechanism design settings where the incremental impact of any single agent's information given the information of others is small. If agents are Bayesian, McLean and Postlewaite (2015) show that a generalized Vickrey-Clarke-Groves (VCG) mechanism is approximately incentive compatible. We show that if each agent perceives a nontrivial amount of ambiguity, there exist modifications to the generalized VCG transfers that restore incentive compatibility whenever agents are sufficiently informationally small. More generally, we show that if there exists a mechanism that is either (i) approximately efficient and fully incentive compatible or (ii) fully efficient and approximately incentive compatible in a Bayesian environment, then we can construct a mechanism that is both efficient and incentive compatible in an environment with a small amount of ambiguity. Finally, we apply the results to the study of large double auctions. Penn State University Ethics and Talent in Banking    [pdf] (joint work with Anjan Thakor) Abstract This paper develops a theory of optimal ethical standards, capital requirements and talent allocation in banking wherein two types of banks, one being protected by regulatory safety nets ("depositories") and the other not so protected ("shadow banks"), innovate financial products and compete for managerial talent. Ethical violations are "mis-selling" products to customers who would not benefit from them, and they entail financial losses and regulatory penalties for the miscreant bank. Bank capital is shown to be more efficient than a penalty for implementing ethical standards. For any capital level, banks choose higher ethical standards and experience fewer ethical violations when bank managers are more talented. However, banks adopting higher ethical standards experience managerial talent migration to banks with lower standards. In equilibrium, endogenously determined regulatory capital and ethical standards are higher in depositories than in shadow banks, and this difference is bigger with talent competition than without. Consequently, depositories hire less talented managers and innovate less, implying that prudential bank regulation has unavoidable labor market consequences in financial services. These effects arise despite customers being sophisticated enough to recognize that mis-selling may occur and, hence, not overpaying on average. If customers are naive and do not recognize potential mis-selling, and the regulator perceives a cost associated with customer overpayment, socially optimal capital requirements and ethical standards are higher.
Glasgow University Adverse implementation    [pdf] (joint work with Alexei Savvateev) Abstract We consider a situation in which a social planner tries to implement a project that the agents do not like. Immediate examples of such situations involve market collusion, tax evasion, pollution control, certification exam cheating, etc. We take a double implementation approach to take into account that the agents have every incentive for collective action in this setup. Our sufficient conditions involve a single crossing property of the agents' preferences as well as order semi-invariance and payoff complementarity of the proposed incentive schemes. In this way a strong Nash equilibrium exists and, moreover, any other possible Nash equilibrium is even better for the social planner. Other applications of our model include serial cost sharing of non-convex public goods, as well as scenarios of collective moral hazard under perfect monitoring and costly enforcement. EPGE-FGV & USP-SP, Brazil Conflict-free and Pareto-optimal allocations in matching markets: A solution concept weaker than the core    [pdf] (joint work with David Castrillo and Marilda Sotomayor) Abstract In the one-sided assignment game any two agents can form a partnership and decide how to share the surplus created. Thus, an outcome involves a matching and a vector of payoffs. In this market, stable outcomes often fail to exist. We introduce the idea of conflict-free outcomes: they are individually rational outcomes where no matched agent can form a blocking pair with any other agent, neither matched nor unmatched. We propose the set of Pareto-optimal (PO) conflict-free outcomes, which is the set of the maximal elements of the set of conflict-free outcomes, as a natural solution concept for this game. We prove several properties of conflict-free outcomes and PO conflict-free outcomes. In particular, we show that each element in the set of PO conflict-free payoffs provides the maximum surplus out of the set of conflict-free payoffs, that the set is always non-empty, and that it coincides with the core when the core is non-empty. We further support the set of PO conflict-free outcomes as a natural solution concept by suggesting an idealized partnership formation process that leads to these outcomes. In this process, partnerships are formed sequentially under the premise of optimal behavior, and two agents only reach an agreement if both believe that more favorable terms will not be obtained in any future negotiations. Munich Center for Mathematical Philosophy Lying and Lie-Detection in Bayesian Persuasion Games with Costs and Punishments    [pdf] (joint work with Mantas Radzvilas, Todd Stambaugh) Abstract If the aim of pharmaceutical regulators is to prevent dangerous and ineffective drugs from entering the market, the procedures they implement for approval of drugs ought to incentivize the acquisition and accurate reporting of research on the questions of safety and effectiveness. These interactions take the form of Sender-Receiver games, in which pharmaceutical companies seeking approval of a drug conduct research themselves and report the results to the regulators. Of course, the companies may be inclined to falsify these reports, even in light of the costs and possible penalties for doing so. The main aim of this work is to give a formal model for this kind of interaction and to identify the mechanism that is optimal for the regulatory body, and by proxy the public, when the costs of information, lying, and the detection of lies are nontrivial.
In this model, the Sender incurs costs via noisy acquisition of information by sequential testing, falsification of reports of individual tests, and punitive measures upon detection by the Receiver of falsified reports. Further, the model has an epistemic dimension, in which the Sender believes that the likelihood of being caught lying is increasing in the number of falsified reports. The Receiver is cautious in the sense that she does not rule out the possibility that falsification is a viable strategy for the Sender's type, and she makes probabilistic inferences about the Sender's type and strategy from the messages she receives. The ability of the Receiver to detect lies is limited by the costs of her verification procedure. We identify sequential equilibria of the game under multiple constraints on the payoffs, costs, and type structures of the players. Additionally, we identify the report verification strategy that is optimal: if known to the Sender, it minimizes the incentives to falsify reports. Stanford University Likely Existence of Pairwise Stable Networks in Large Network Formation Games    [pdf] Abstract Since its introduction in Jackson and Wolinsky (1996), pairwise stability has been the preponderant equilibrium notion for network formation games in both the theoretical and applied networks literatures. Yet pairwise stable networks do not exist in some network formation settings, and relatively little work has been done to explore how general the problem of nonexistence may be. This paper demonstrates that the known sufficient conditions for existence are both restrictive, in that they rule out features of preferences present in most real-world settings, and fragile, in that even minor violations of the conditions can lead to nonexistence. We then show that, nonetheless, pairwise stable networks exist with high probability in large network formation games in which agents' preferences are sufficiently uncorrelated and noisy. Finally, we show how this result can be used to derive likely existence even in an information-sharing network formation game for which explicit representations of agents' preferences are computationally intractable.
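To make the stability notion in the preceding abstract concrete, here is a brute-force pairwise stability check over a random-utility specification; the utility function and its parameters are illustrative assumptions only, not the paper's model.

```python
import itertools
import numpy as np

def is_pairwise_stable(G, u):
    """Jackson-Wolinsky pairwise stability of network G under utility u.

    G is a symmetric 0/1 adjacency matrix and u(i, G) returns agent i's
    utility. Stability requires that no agent gains by deleting one of
    her links, and that no unlinked pair would both weakly (one
    strictly) gain by adding their link.
    """
    n = G.shape[0]
    for i, j in itertools.combinations(range(n), 2):
        H = G.copy()
        H[i, j] = H[j, i] = 1 - G[i, j]          # toggle the link ij
        if G[i, j]:                              # deletion check
            if u(i, H) > u(i, G) or u(j, H) > u(j, G):
                return False
        else:                                    # addition check
            gi, gj = u(i, H) - u(i, G), u(j, H) - u(j, G)
            if gi >= 0 and gj >= 0 and (gi > 0 or gj > 0):
                return False
    return True

# Idiosyncratic (noisy, weakly correlated) link values net of a convex
# cost of degree, in the spirit of the abstract's random preferences.
rng = np.random.default_rng(1)
n = 6
V = rng.normal(size=(n, n)); V = (V + V.T) / 2
u = lambda i, G: (G[i] * V[i]).sum() - 0.5 * G[i].sum() ** 1.5
print(is_pairwise_stable(np.zeros((n, n), dtype=int), u))
```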
Massachusetts Institute of Technology Reputation Concerns Under At-Will Employment    [pdf] (joint work with Dong Wei) Abstract We study a continuous-time model of a long-run employment relationship with a fixed wage and at-will firing; that is, termination of the relationship is non-contractible. Depending on his type, the worker either always works hard or can freely choose his effort level. The firm does not know the worker's type, and monitoring is imperfect. We show that, in the unique Markov equilibrium, as the worker's reputation worsens, his job becomes more insecure and the strategic worker works harder. We further demonstrate that the relationship between average productivity and job insecurity is U-shaped, which is consistent with typical findings in the organizational psychology literature. Wuhan University Perfect and proper equilibria in large games    [pdf] (joint work with Yishu Zeng) Abstract This paper studies pure-strategy perfect and proper equilibria for games with non-atomic measure spaces of players and infinitely many actions. A richness condition (nowhere equivalence) on the measure space of players is shown to be both necessary and sufficient for the existence of such equilibria. The limit admissibility of perfect and proper equilibria is also proved. University of Minnesota Reputation for Persuasion    [pdf] Abstract I study the optimal disclosure of information of uncertain quality. Each period, a firm wishing to issue debt hires a credit rating agency to investigate its ability to repay and reveal the results to investors. The credit rating agency obtains a signal about the firm and chooses to reveal some or all of that information to investors. The accuracy of the signal is unknown. A reputation for the quality of the credit rating agency arises as investors learn about the firm through a rating made noisy both by the credit rating agency's mistakes and by withheld information. A rating methodology that is more revealing about the firm is also more revealing about the quality of the credit rating agency after firm uncertainty is resolved. The credit rating agency must balance the incentives of the firm employing it with incentives coming from its own reputation. Contrary to commonly held intuition, I find that reputational concerns typically do not lead to the credit rating agency revealing more information. In some cases they can even lead to less information revelation in equilibrium. University of Minnesota Monitor Reputation and Transparency    [pdf] (joint work with Ivan Marinovic) Abstract We study the disclosure policy of a regulator overseeing a monitor with reputation concerns, such as a bank or an auditor. The monitor faces a manager, who chooses how much to manipulate given the monitor's reputation. Reputational incentives are strongest for intermediate reputations, and uncertainty about the monitor is valuable. Instead of providing transparency, the regulator's disclosure keeps the monitor's reputation intermediate, even at the cost of diminished incentives. Beneficial schemes feature random delay. Commonly used ones, which feature immediate disclosure or fixed time delay, destroy reputational incentives. Surprisingly, the regulator discloses more aggressively when she has better enforcement tools. Maastricht University Stronger bonds with less connected agents in stable resource sharing networks    [pdf] Abstract This is a model of network formation in which agents create links following a simple heuristic: they invest their limited resources proportionally more in neighbours who have fewer links. This decision rule captures the notion that, when considering social value, more connected agents are on average less beneficial as neighbours, and it is a useful proxy when the payoffs are difficult to compute. The decision rule also illustrates an externalities effect, whereby an agent's actions also influence his neighbours' neighbours. Besides complete networks and fragmented networks with complete components, the pairwise stable networks produced by this model include many non-standard ones with characteristics observed in real life. Multiple stable states can develop from the same initial structure: the stable networks can have cliques linked by intermediary agents, while sometimes they have a core-periphery structure. Standard networks that are usually seen in the literature, like the star, circle, line, wheel, biregular graphs and incomplete regular graphs, are not stable. Even though the complete networks are most efficient, the observed pairwise stable networks have close to optimal welfare.
This limited loss of welfare is due to the fact that when a link is established, it benefits the linking agents but makes them less attractive as neighbours for others, thereby partially internalising the externalities the new connection has generated. PSU Preferences for Power    [pdf] (joint work with Elena Pikulina) Abstract Power, the ability to determine the outcomes of others, usually comes with various benefits: higher compensation, public recognition, etc. We develop a new game, the Power Game, and use it to demonstrate that a substantial fraction of individuals enjoy the intrinsic value of power: they accept a lower payoff in exchange for power over others, without any additional benefits to themselves. We show that preferences for power exist independently of other components of decision rights. Further, these preferences cannot be explained by social preferences, are stable over time, and are not driven by mistakes, confusion or signaling intentions. Using a series of additional experiments, we show that power (i) is related to determining the outcomes of others directly, as opposed to simply influencing them; (ii) depends on how much freedom the decision-maker has over deciding those outcomes; (iii) is tied to relationships between individuals and not necessarily organizations; and (iv) likely depends on the domain: power is salient in workplace settings but not necessarily in others. We establish that ignoring preferences for power may have large welfare implications. Consequently, our findings provide strong reasons for incorporating power preferences in the study and design of political systems and labor contracts. PSU Correlation Neglect in Student-to-School Matching    [pdf] (joint work with Alex Rees-Jones, Ran Shorrer) Abstract A growing body of evidence suggests that decision-makers fail to account for correlation in signals that they receive. We study the consequences of this behavior for application strategies to schools. In a lab experiment presenting subjects with incentivized school-choice scenarios, we find that subjects generally follow optimal application strategies when schools' admissions decisions are determined independently. However, when schools rely on a common priority, inducing correlation in their decisions, decision making suffers, and students often fail to apply to attractive "safety" options. We document that this pattern holds even within-subject, with significant fractions of participants pursuing different strategies in mathematically equivalent situations that differ only by the presence of correlation. We provide a battery of tests supporting the possibility that this phenomenon is at least partially driven by correlation neglect, and we discuss implications that arise for the design and deployment of student-to-school matching mechanisms. University of Technology Sydney Sophistication and Cautiousness in College Applications    [pdf] (joint work with Yan Song, Xiaoyu Xia) Abstract As in many places in the world, Chinese provinces reformed their college admission mechanisms from the Immediate Acceptance mechanism to new ones that share the features of the Deferred Acceptance mechanism. In this article, we propose a novel approach to evaluate these reforms in terms of student welfare by estimating the fractions of three major behavioral types as well as the student preferences. We first show that the reforms would not affect the equilibrium outcome played by rational students, but our data do not support this hypothesis.
Motivated by this observation, we extend the model to include the following types of students, classified by their strategic sophistication and beliefs: the rational type, the naive type, and the cautious type. We identify and estimate the fractions of these types solely from the assignment data before and after the policy change, which allows us to analyze the welfare effect of policy changes separately by behavioral type. University of Texas Rio Grande Valley A Political Reciprocity Mechanism    [pdf] (joint work with Roland Pongou (University of Ottawa), Jean-Baptiste Tondji (University of Texas Rio Grande Valley)) Abstract This paper considers the problem faced by a political authority that has to design a legislative mechanism that guarantees the selection of policies that are stable, efficient, and inclusive in the sense of strategically protecting minority interests. Experimental studies suggest that some of these desirable properties can be achieved if decision-makers (e.g., legislators) are induced to display reciprocal and pro-social behavior. However, the question of how a voting mechanism can be designed to incentivize "selfish" individuals to display such behavior remains unresolved. We propose such a mechanism and find that it is a simplification of legislative procedures used in some democratic societies. Our mechanism satisfies all of the aforementioned properties under mild conditions, and it is easily implementable. In addition, it encourages positive reciprocity and generally protects minorities without having to make use of a supermajority rule, as many real-world political institutions do. Finally, a comparative analysis shows that this mechanism has other desirable features and properties that distinguish it from other well-known political procedures. Indian Institute of Management Bangalore Fair Pricing in a Two-sided Market Game    [pdf] Abstract Pricing is one of the important strategic decisions in two-sided markets. A key finding of prior research is that the pricing structure necessitates endogenizing network externalities and adopting a pricing strategy where one side of the market often subsidizes the other side. In effect, the prices charged on one side do not usually reflect the costs incurred to serve that side. This gives rise to a popular opinion that many two-sided market platforms adopt a pricing structure that is biased against one side and favors the other side. A fundamental question that arises in such a setting is: how much subsidization is fair in two-sided markets? We analyze the question of a fair pricing structure in two-sided markets from the point of view of coalitional game theory. Given a two-sided market, we define a related coalitional game which we call a two-sided market game. We analyze the two-sided market game using various fairness-based solution concepts in coalitional game theory. This study has implications for how competition policy can be applied in two-sided markets. University of Virginia Obvious Manipulations    [pdf] (joint work with Peter Troyan and Thayer Morrill) Abstract A mechanism is strategy-proof if agents can never profitably manipulate, in any state of the world; however, not all non-strategy-proof mechanisms are equally easy to manipulate: some are more "obviously" manipulable than others. We propose a formal definition of an obvious manipulation and argue that it may be advantageous for designers to tolerate some manipulations, so long as they are non-obvious.
By doing so, improvements can be achieved on other key dimensions, such as efficiency and fairness, without significantly compromising incentives. We classify common non-strategy-proof mechanisms as either obviously manipulable (OM) or not obviously manipulable (NOM), and show that this distinction is both tractable and in line with empirical realities regarding the success of manipulable mechanisms in practical market design settings. Hitotsubashi University LQG Information Design    [pdf] Abstract A linear-quadratic-Gaussian (LQG) game is an incomplete information game with quadratic payoff functions and Gaussian information structures. It has many applications, such as a Cournot game, a Bertrand game, a beauty contest game, and a network game, among others. LQG information design is the problem of finding the Gaussian information structure, from a given collection of feasible information structures, that maximizes the expected value of a quadratic function of actions and payoff states when players follow a Bayesian Nash equilibrium. Because the LQG model is tractable enough but not too specific, it can be a good starting point for exploring a general relationship between optimal information structures and economic environments. In this problem, the variable to be determined is the covariance matrix of actions and payoff states; the objective function is a Frobenius inner product of a constant symmetric matrix and the covariance matrix; the constraints are linear equalities in the covariance matrix, which must be positive semidefinite. This implies that we can formulate LQG information design as semidefinite programming. Thus, we can numerically obtain the optimal information structures by using semidefinite programming solvers, and in some cases we can analytically characterize them. As an immediate consequence of the formulation, we provide sufficient conditions for optimality and suboptimality of full and no information disclosure. Moreover, we identify the optimal information structures in a couple of special cases. In the case of symmetric LQG games, we characterize the optimal symmetric information structure as a closed-form expression. In the case of asymmetric LQG games, we characterize the optimal public information structure as a closed-form expression. In both cases, we discuss what properties of the constant matrix in the objective function determine the optimal information structures.
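The semidefinite-programming reduction described in the LQG abstract above can be handed to an off-the-shelf solver. The toy instance below (using cvxpy) maximizes a Frobenius inner product over a positive semidefinite covariance matrix subject to linear equality constraints; the objective matrix and the constraints are placeholders, since in an actual LQG game they would come from the equilibrium conditions and the feasible information structures.

```python
import cvxpy as cp
import numpy as np

# Variable: X = Cov(a_1, a_2, theta), the joint covariance of the two
# players' actions and the payoff state. The PSD constraint makes X a
# valid covariance matrix; the equalities stand in for equilibrium and
# feasibility restrictions of a concrete game.
d = 3
X = cp.Variable((d, d), PSD=True)

# Placeholder objective matrix M: rewards covariance between each
# action and the state, <M, X> = 2*Cov(a_1, theta) + 2*Cov(a_2, theta).
M = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])

constraints = [X[0, 0] == 1.0, X[1, 1] == 1.0, X[2, 2] == 1.0]  # unit variances
prob = cp.Problem(cp.Maximize(cp.trace(M @ X)), constraints)
prob.solve()
print("optimal value:", round(prob.value, 3))
print("optimal covariance matrix:\n", np.round(X.value, 3))
```

In this contrived instance the optimum is attained by a perfectly correlated covariance matrix, i.e., full disclosure, the kind of corner case the abstract's sufficient conditions are meant to detect.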
Virginia Polytechnic Institute To Join or not to Join: Coalition Formation in Public Good Games    [pdf] (Sakshi Upadhyay) Abstract Commitment devices such as coalitions can increase outcome efficiency in public goods provision. This research studies the role of social preferences in a two-stage public good game where, in the first stage, heterogeneous agents choose whether or not to join a coalition and then, in the next stage, the coalition votes on whether its members will contribute. We find that individuals with stronger social preferences are more likely to join the coalition and vote for the coalition to contribute to the public good. We further show that higher marginal benefits of contribution lead to more people joining the coalition and contributing to the public good. These results hold whether the coalition's decision is determined by a majority or a unanimity voting rule. The results are also robust to different model specifications. University of Valencia Demand for Privacy, selling consumer information, and consumer hiding vs. opt-out    [pdf] (joint work with S. Anderson, N. Larson and M. Sanchez) Abstract We consider consumers choosing whether to buy a good, when they know that information about them can be sold to another firm selling another good they might also buy. This causes some consumers to hide their types by not buying the first good, which delivers an endogenous demand for privacy and renders the demand for the second good more inelastic. But it also can give the firm in the first market a greater incentive to harvest consumers to sell to the second firm; therefore, the upstream price can go down while the downstream price goes up. We determine whether information selling improves upstream profits, consumer surplus, and total welfare, and we examine the consequences of allowing consumers to opt out of having their information sold by the upstream firm. United States Naval Academy What's Love Got To Do With It? Random Search, Optimal Stopping, and Stable Marriage    [pdf] Abstract I study a decentralized large marriage market with incomplete information and heterogeneous preferences. Agents play a matching game in which each agent learns about his/her preferences over possible marriage partners via a sequence of random matches. Provisionally matched agents who find each other mutually acceptable marry and drop out of the search process. I introduce a stability notion which requires that members of a pair become mutually aware of each other at some point in the search process prior to their respective marriages in order to be considered a blocking pair. I obtain results equivalent to those of Lauermann and Nöldeke, and of Burdett and Coles: there is a perfect Bayesian equilibrium in which an agent accepts a match if the private surplus from doing so equals or exceeds an equilibrium threshold; otherwise s/he rejects the match and continues to search for an acceptable spouse. The perfect Bayesian equilibrium has a set of limiting assignments, all of which satisfy the awareness-constrained pairwise stability condition. University of Pittsburgh Delegation in Veto Bargaining (joint work with Navin Kartik and Andreas Kleiner) Abstract We study a canonical Romer-Rosenthal bargaining game in which the veto player's preferences (specifically, his ideal point) are private information. Our innovation is to not restrict the proposer to a single policy: instead, the proposer can offer a set of policies, a delegation set, from which the veto player can select any one (or choose the status quo). This can also be viewed as screening/mechanism design with a status-quo constraint. We identify conditions under which the optimal delegation set takes certain forms, in particular when it is an interval. We show that it is generically not a single policy, but it can be the entire policy set between the status quo and the proposer's ideal point, meaning that the proposer can do no better than giving the veto player complete autonomy to choose his ideal policy. We show that as the proposer and veto player become more aligned (in a stochastic sense), the veto player is given less discretion (a smaller delegation set), but nevertheless the probability of vetoes goes down. Santiago de Cali University Robust Equilibria in Tournaments with Externalities (joint work with Ruben Juarez) Abstract Agents form coalitions among themselves, and every agent has preferences over the feasible coalitions to which he belongs.
Each formed coalition has power, and the one with the largest power wins the tournament. The partition of these agents is a no-threat equilibrium (NTE) if, whenever a group of agents gains by forming their own coalition, there exists another group of agents that gains by forming their own coalition and harms at least one agent who initially deviated from the partition. We characterize the class of feasible coalitions that guarantees the existence of an NTE partition for all preferences and all powers. Indeed, these sets of feasible coalitions come from sets of connected coalitions in networks without cycles. Moreover, we prove that there always exists an NTE in the matching problem when couples (or singles) have power in the process. Finally, we show that the characterized class expands more limited versions in which traditional equilibria such as the core do not exist. University of Helsinki Mechanism without Commitment - General Solution and Application to Bargaining Abstract This paper identifies mechanisms that are implementable even when the planner cannot commit to the rules of the mechanism. The standard approach is to require the mechanism to be robust against redesign, which often leads to existence problems. The novelty of this paper is to require robustness against redesigns that are themselves robust against redesigns that are themselves robust against redesigns, and so on. That is, we allow the planner to costlessly redesign the mechanism any number of times, and we identify redesign strategies that are both optimal and dynamically consistent. A mechanism design strategy that credibly implements a direct mechanism after all histories is shown to exist. The framework is applied to bilateral bargaining situations. We demonstrate that a welfare-maximizing second-best mechanism can be implemented even without commitment. University of Texas, Austin Signaling in mean-field games    [pdf] Abstract In this paper, we consider an infinite-horizon discounted dynamic mean-field game in which there is a large population of homogeneous players sequentially making strategic decisions, and each player is affected by other players through an aggregated population state. Each player has a private type that only she observes. Such games have been studied in the literature under the simplifying assumption that population state dynamics are stationary. In this paper, we consider non-stationary population state dynamics and present a novel backward recursive algorithm to compute Markov perfect equilibria (MPE) that depend on both a player's private type and the current (dynamic) population state. Ecole Polytechnique Complexity of Strategic Thinking and Robustness of Interim Rationalizability    [pdf] (joint work with Olivier Gossner) Abstract In games of incomplete information, interim rationalizability is the equilibrium concept that stems from the iterative deletion of dominated strategies. It is known that for a fixed game this solution concept can be sensitive to misspecifications of the beliefs and higher-order beliefs of a player's type. For a fixed game, we identify all types whose finite-order interim rationalizable strategies coincide as one equivalence class. Then we define the complexity of a type as the cardinality of the smallest type space which contains a type in its equivalence class.
We interpret this measure as the type's complexity of strategic thinking in a game and show that interim rationalizable strategies are robust to perturbations of players' higher-order beliefs as long as the perturbations preserve the order of complexity of strategic thinking. Stanford University Unraveling in the Presence of a Secondary Market    [pdf] Abstract Matching markets often unravel, with matches between agents occurring inefficiently early and based on little information. Extensive unraveling has occurred in a variety of industries, ranging from the hiring of investment bankers to the funding of startups by venture capital firms to the drafting of athletes in professional sports. However, initial matchings are not permanent: employees can move to different firms after being hired, startups can partner with different VC firms, and athletes can change teams. This feature has not been incorporated in prior research. I develop a tractable model of screening and matching with a secondary market that allows some firms to hire laterally instead of at the entry level. A novel phenomenon emerges where early matching arises as a strategic decision by firms to prevent poaching, independent of the talent levels in the labor pool. Traditionally thought to increase efficiency due to rematching, the presence of a transparent secondary market has the opposite effect: it leads to a decrease in welfare as a consequence of the adverse signaling incentive it creates. Nanjing Audit University Strong Stochastic Dominance    [pdf] (joint work with Ehud Lehrer) Abstract We generalize the monotone likelihood ratio property of univariate random variables. We say that one distribution strongly stochastically dominates another if the former is a convex transformation of the latter. The main contribution of this paper is to phrase the equivalence condition in utility-theoretic terms. Several economic applications of this equivalence condition are given. These applications include Bayesian learning and dynamic decision-making under uncertainty, auctions with independent private values, pricing of risky assets, and implications for a portfolio's value-at-risk and for production expansions. Duke University, University of Sydney Self-Similar Beliefs in Games with Strategic Substitutes    [pdf] (joint work with Mengke Wang) Abstract This paper studies strategic situations where a population of heterogeneous players are randomly matched with each other to play games with strategic substitutes and players have incomplete information about their opponents' private types. If players hold type-independent beliefs about their opponents' types, then in equilibrium players' actions are monotonic with respect to their types. Since players' private types are often not observable to the analyst, a representation result is established to show what kind of observable behavior this model can explain when the analyst observes how the population behaves at an aggregate level. Of course, a model with type-independent beliefs may not be justified, since types could be correlated in many applications. Moreover, in experiments where individuals are randomly matched to play games with strategic substitutes, they report systematically heterogeneous conjectures about their opponents' actions: players who act more aggressively also conjecture that their opponents will act more aggressively.
This not only contradicts the type-independent belief model but is also counterintuitive, because in games with strategic substitutes, opponents' aggressive behavior discourages players from playing aggressively. A model is then proposed where players have self-similar beliefs. It captures the intuition that higher types believe that their opponents are also of higher types, and it fits the experimental observations. One important and surprising result is that models with type-independent beliefs and self-similar beliefs are observationally equivalent for many payoff parameters, that is, they have identical behavioral implications. University of California San Diego Cheaper Talk    [pdf] (joint work with Peicong Hu) Abstract As the new media gain popularity, the cost for an average person to voice his or her opinion drops substantially. Why would a decision-maker seek information from social media, knowing that the information there is largely inaccurate? Does the "age of information" necessarily imply better-informed decision making? Despite the lower trust enjoyed by new media compared to traditional news outlets, why are new media used more in disseminating news while traditional newsroom employment has been declining? In this paper, we introduce a fixed cost of "talking" into the canonical cheap talk model and allow the sender to be potentially imperfectly informed. The main results are as follows: (1) We show that while a better-informed sender can provide information of higher quality, a less-informed sender can have the advantage in quantity, in the sense that the latter can be more likely to supply information. (2) We show the effectiveness of communication is not monotonic in the talking cost. This is because a moderate cost can serve to align players' preferences with respect to ideal actions, but too high a cost disincentivizes talking. The receiver's payoff drops as the talking cost approaches zero. (3) We show that the sender's favorite cost can be lower than the receiver's, in which case the sender would opt for a less costly communication technology even if doing so sacrifices his credibility and hence the communication effectiveness. A somewhat severe bias may align the players' preferences with respect to the communication technology. Peking University Strategy Space Collapse: Experiment and Theory    [pdf] (joint work with Zhijian Wang) Abstract To detect strategy space collapse during the successive elimination of dominated strategies (SEDS), we conduct experiments on three matrix forms of the von Neumann 3-card poker game. In addition to the Nash distribution and social cycling in the long run, we observe pulse signals from dominated strategies before their extinction. The results show that all these observations, including the Nash distribution, the social cycling and the pulse signals, can be explained by evolutionary game dynamics simultaneously and quantitatively. SEDS, or the strategy space collapse process, is an area worthy of further exploration.
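Since the experiment above is organized around successive elimination of dominated strategies, a compact reference implementation may help. The sketch below deletes pure strategies that are strictly dominated by other pure strategies; it is a conservative version of full SEDS, since mixed dominators are omitted, and the toy game is arbitrary rather than one of the paper's poker matrices.

```python
import numpy as np

def iterated_elimination(A, B):
    """Iteratively delete pure strategies strictly dominated by another
    pure strategy. A[i, j] and B[i, j] are the row and column player's
    payoffs; the surviving strategy indices are returned.
    """
    rows, cols = list(range(A.shape[0])), list(range(A.shape[1]))
    changed = True
    while changed:
        changed = False
        for i in rows[:]:
            if any(all(A[k, j] > A[i, j] for j in cols) for k in rows if k != i):
                rows.remove(i); changed = True
        for j in cols[:]:
            if any(all(B[i, k] > B[i, j] for i in rows) for k in cols if k != j):
                cols.remove(j); changed = True
    return rows, cols

# Toy 3x3 game: row 2 is strictly dominated, after which column 2 is.
A = np.array([[3, 1, 0], [2, 1, 1], [0, 0, 0]])
B = np.array([[3, 2, 0], [1, 1, 0], [0, 1, 0]])
print(iterated_elimination(A, B))  # -> ([0, 1], [0, 1])
```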
Stony Brook University How does market structure affect the R&D decision when acquisition is possible? (joint work with Sandro Brusco) Abstract Gordon M. Phillips and Alexei Zhdanov (2012) initiated a new point of view: an active acquisition market encourages firms, particularly small firms in an industry, to conduct research and development (R&D). In their paper, they build a model and provide empirical tests showing that small firms may optimally decide to innovate more when they can be sold to larger firms, and that larger firms may find it disadvantageous to engage in an R&D race with small firms, as they can obtain access to innovation through acquisition. In this paper, we examine how firms' R&D behavior responds if the demand side of the acquisition market becomes more competitive. We find that as the number of big firms increases, big firms invest more in R&D, since they are less likely to outsource innovation given the increasing demand in the acquisition market. On the other hand, small firms are even more engaged in R&D investment, since they are more likely to be sold to larger firms and also because they are likely to gain more bargaining power once they innovate. University of California, San Diego Plain Consistency and Perfect Bayesian Equilibrium    [pdf] Wuhan University The reform of China's college admission mechanisms: An empirical (and experimental) study    [pdf] (joint work with Lijia Wei) Abstract China's college admission process is the largest matching activity run by a central matching system every year. The reform can be used to test matching theory about school admission because the State Council of China suddenly required all regions in China to improve their admission policies in 2014. Although some regions had already started their reforms before 2014, Beijing, Shanghai and some other regions refused to change their admission policies until 2014. This paper empirically analyzes China's college admission results from 2010 to 2015, which covers the second and third stages of the reform. Our main findings are: (1) The reform does not increase the total admission rate of candidates. Instead, it increases the admission rates of candidates who do well in exams, but not of candidates with poor performance, which makes the college admission results more stable. (2) The reform decreases the quit rate of college admission, which makes the college admission results more efficient. (3) The very top universities receive many more applications than before, which implies the reform induces more truthful reporting when candidates submit their preferences. (4) The standard deviation of exam scores of admitted students at the very top universities became much smaller than before, which implies the reform makes the college admission results less manipulable. University of Texas at Austin Bayesian Elicitation    [pdf] Abstract How can a receiver design an information structure in order to elicit information from a sender? Prior to participating in a standard sender-receiver game (in which messages are possibly costly a la Spence), the receiver may commit to any information structure, that is, any degree of transparency. Committing to a less informative signal about the sender's choice affects the endogenous information generation process such that the receiver may thereby secure himself more information. We establish broad conditions under which the problem of designing a receiver-optimal information structure can be reduced to a much simpler problem: committing optimally to a distribution of actions as a function of the sender's message. Moreover, we relate the choice of information structure to inattention and establish conditions under which the optimal degree of inattention is equivalent to the optimal degree of transparency.
We apply these results to various situations, including those in which the sender has an incentive to feint, as well as a political scenario. University of Vienna Echo Chambers: Social Learning under Unobserved Heterogeneity    [pdf] (joint work with Cole Randall Williams) Abstract In a society with homogeneous individuals, who differ only in private information, rational social learning requires individuals who are confronted with disagreement to learn to agree. In this article, I show that in a society with unobserved heterogeneity in preferences or priors, individuals instead respond to disagreement with a rational form of confirmation bias I call local learning: individuals place greater weight on opinions or behavior that is closer to their own. When individuals choose whom to learn from, local learning leads to the development of echo chambers. Columbia University Strategic Exploration    [pdf] (joint work with Qingmin Liu and Yu Fu Wong) Abstract This paper provides a tractable model of strategic exploration in which competing agents search for viable candidates from a large set of alternatives. The model features continuous time and continuous space. We show that the model has an essentially unique equilibrium, which has a simple and intuitive characterization. We define distributional strategies for continuous-time games with unobservable actions and prove a representation result for mixed strategies. The model is flexible, and we provide several variants that may prove useful in studying search, learning, and experimentation. Warsaw School of Economics, Poland Distributional equilibria in dynamic supermodular games with a measure space of players and no aggregate risk    [pdf] (joint work with Lukasz Balbus, Pawel Dziewulski, Kevin Reffett) Abstract We study a class of discounted infinite-horizon stochastic games with strategic complementarities and a continuum of players. We first define our concept of Markov stationary distributional equilibrium, which involves an equilibrium action-state distribution and a law of motion of aggregate distributions. We next prove existence of such an equilibrium, via constructive methods, under a different set of assumptions than Jovanovic and Rosenthal (1988) or Bergin and Bernhardt (1992). Importantly, the dynamic law of large numbers we develop to study the transition of private signals implies no aggregate uncertainty. Our construction, i.e., the distributional game specification, the equilibrium concept and the exact law of large numbers, is critical for avoiding the problems in characterizing dynamic complementarities in actions between periods and beliefs reported recently by Mensch (2018). As a result, we are able to dispense with some continuity assumptions necessary to obtain existence. In addition, we provide computable monotone comparative dynamics results for ordered perturbations of the space of stochastic games (see Acemoglu and Jensen (2015)). Finally, we discuss the relation of our result to the recent work on mean-field equilibria in oblivious strategies of Adlakha, Johari, and Weintraub (2015) and Weintraub, Benkard, and Van Roy (2008), and to some recent work on large but finite dynamic games (Kalai and Shmaya, 2018) and imagined-continuum equilibrium. We provide numerous examples, including social dissonance models, dynamic global games and keeping-up-with-the-Joneses economies.
Keywords: large games, distributional equilibria, supermodular games, games with strategic complementarities, computation of equilibria, non-aggregative games, law of large numbers. JEL classification: C62, C72, C73. Paris School of Economics Managing relational contracts    [pdf] (joint work with Marta Troya Martinez) Abstract Relational contracts are typically modeled as being between a principal and an agent, such as a firm owner and a supplier. Yet in a variety of organizations relationships are overseen by an intermediary such as a manager. Such arrangements open the door to collusion between the manager and the agent. This paper develops a theory of such managed relational contracts. We show that managed relational contracts differ from principal-agent ones in important ways. First, kickbacks from the agent can help solve the manager's commitment problem. When commitment is difficult, this can result in higher agent effort than the principal could incentivize directly. Second, making relationships more valuable enables more collusion and hence can reduce effort. We also analyze the principal's delegation problem and show that she may or may not benefit from entrusting the relationship to a manager. Chinese University of Hong Kong Getting Information from Your Enemies    [pdf] (joint work with Tangren Feng) Abstract A decision maker (DM) needs to choose between two options. DM does not know which option is better, whereas a group of experts does. However, the experts prefer that DM choose the wrong option. We find that it is possible for DM to extract information from the experts using a mechanism without transfers and to make an informed choice that benefits himself and hence harms them. We further analyze the possibility and effectiveness of such information extraction under a variety of incentive compatibility constraints, including dominant strategy IC, ex post IC, and Bayesian IC. We also discuss two extensions. In the first extension, we show that DM can extract information even if his commitment to a mechanism is limited. In the second extension, we show that if DM can Blackwell-garble the information source of the experts, then information extraction becomes more effective. University of Oregon Intergenerational Transmission of Preferences and the Marriage Market    [pdf] (joint work with Hanzhe Zhang) Abstract We examine the intergenerational transmission of preferences under different organizations of the marriage market. We demonstrate that the number and properties of equilibria depend on the underlying two-sided matching technology. Namely, the equilibria resemble those in a coordination game under random matching and those in an anti-coordination game under assortative matching. The matching technology influences not only who matches with whom but also, more importantly, the individual choices that shape future generations' preferences and choices. We discuss the model's implications for the evolution of female labor force participation and for the effectiveness of government campaigns to alter preferences. University of Arizona Persuasive Disclosure    [pdf] Abstract This paper studies the general information disclosure model (Grossman, 1981; Milgrom, 1981), relaxing the assumption of monotonicity in preferences. I apply the belief-based approach, which was developed in Bayesian persuasion (Kamenica and Gentzkow, 2011) and applied to cheap talk (Lipnowski and Ravid, 2018), to solve for Perfect Bayesian Equilibrium (PBE) outcomes.
I find that under full verifiability and rich language, the PBE outcomes take the form of a combination of those in separate auxiliary cheap talk games with lower bounds on the sender's payoff. Also, I provide a sufficient condition for the original unraveling result to hold in the general case. Finally, I compare information disclosure with cheap talk and Bayesian persuasion. School of Computer Science and Software, Zhaoqing University, China A Game Theory Approach for Evaluating and Assigning Suppliers in Supply Chain Management    [pdf] (joint work with Dachrahn Wu, Yi-Ming Chen, Yu-Min Chuang) Abstract In this study, we propose a framework for supplier evaluation that incorporates two game theory models designed to advise a manufacturer on choosing suppliers when the available budget is limited. In the first step, the interactive behaviors between the manufacturer and the supplier are modeled and analyzed as a two-player zero-sum game, after which the supplier power value is derived from the mixed-strategy Nash equilibrium. The second model uses the twelve supplier power values to compute the Shapley value for each supplier, in terms of the thresholds of the majority levels in the three manufacturing processes. The Shapley values are then applied to create an allocated set of limited supplier orders. Simulation results show that the manufacturer can use this framework to quantitatively evaluate the suppliers and easily allocate suppliers across the three manufacturing processes.
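The first step of the framework above derives a supplier's power value from the mixed-strategy equilibrium of a two-player zero-sum game, which is a standard linear program. A generic sketch, with an illustrative payoff matrix rather than the paper's:

```python
import numpy as np
from scipy.optimize import linprog

def zero_sum_value(A):
    """Value and maximin mixed strategy of the (maximizing) row player
    in the zero-sum game with payoff matrix A, via the standard LP:
    max v  s.t.  x^T A >= v column-wise, with x a probability vector.
    Decision variables are (x_1, ..., x_m, v); linprog minimizes -v.
    """
    m, n = A.shape
    c = np.zeros(m + 1); c[-1] = -1.0
    A_ub = np.hstack([-A.T, np.ones((n, 1))])    # v - x^T A[:, j] <= 0
    b_ub = np.zeros(n)
    A_eq = np.ones((1, m + 1)); A_eq[0, -1] = 0.0
    b_eq = np.array([1.0])                        # probabilities sum to 1
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:-1]

# Sanity check on matching pennies: value 0, uniform mixing.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
value, strategy = zero_sum_value(A)
print(value, strategy)
```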
Tsinghua University College Admission with Flexible Major Quotas    [pdf] (joint work with Dalin Sheng, Xiaohan Zhong) Abstract In this paper, we develop a college admission mechanism in which the number of seats allocated to each major in a college can adjust in response to students' demand and each major may have its own priority order over students. We show that the mechanism always results in the student-optimal matching among those that satisfy individual rationality, non-wastefulness, and no justified envy. Besides, the mechanism is group strategy-proof for students, respects unambiguous improvements in student standing in the priority orders, and is unanimously preferred by students to a standard deferred acceptance mechanism where each major has a fixed number of seats. Johns Hopkins University A Theory of Multiplexity: Sustaining Cooperation with Multiple Relationships (joint work with Chen Cheng, Wei Huang, Yiqing Xing) Abstract People are embedded in multiple social relations. These relationships are not isolated from each other: the network pattern of an existing relationship is likely to affect the formation of a new relationship. This paper provides a framework to analyze the multiplexity of networks. We present a model in which each pair of agents may form more than one relationship. Each relationship is captured by an infinitely repeated prisoner's dilemma with an endogenous stake of cooperation. We show that multiplexity, i.e., having more than one relationship on a link, boosts incentives, as different relationships serve as social collateral for each other. We then endogenize the network formation and ask: when an agent has a new link to add, will she multiplex with a current neighbor, or link with a stranger? We find the following: (1) There is a strong tendency to multiplex, and a "multiplexity trap" can occur. That is, agents may keep adding relationships with current neighbor(s), even if it would be more compatible to cooperate with a stranger. (2) Individuals tend to multiplex when the current network (a) has a low degree dispersion (i.e., all individuals have similar numbers of friends), or (b) is positively assortative. We provide empirical evidence that is consistent with our theoretical findings. Johns Hopkins University Communication with Informal Funding (joint work with Chen Cheng, Jin Li, Yiqing Xing) Abstract We present a model of communication with informal funding. Specifically, on top of the classical Crawford-Sobel (1982) cheap-talk model, in which only the principal (she) can take an action, we allow the agent (he) to take a costly action that is additive to the principal's, e.g., to finance a project using his informal funding. We show that if the principal can choose the cost of informal funding to the agent, there is an optimal cost level, neither too high nor too low, which can implement the principal-preferred state. Then we study the case in which the agent's cost is exogenous. When it is below the principal's optimal level, the best equilibrium involves no communication, and the project is mostly financed by informal funding. When the agent's cost is above the principal's optimal level, there is a dichotomy between communication and informal funding: up to a threshold of the underlying state, there is Crawford-Sobel style communication and no informal funding is used; beyond that threshold, informal funding is used but there is no further communication. When the principal also pays a cost for the informal funding, communication improves in the cost to the principal. When the cost to the principal is high enough, informal funding serves as a credible threat to the principal and leads to better communication than the best equilibrium in Crawford-Sobel. Singapore University of Technology and Design Convergence of the Best-response Dynamic in Potential Games    [pdf] Abstract We prove that the continuous-time best-response dynamic from a generic initial point converges to a pure-strategy Nash equilibrium in an ordinal potential game under a minor condition on the payoff matrix. We then study the best-response dynamic defined in a consideration-set game, where players face random strategy constraints with a small probability when playing the underlying game. In the case that the underlying game is a two-player common-payoff game with cheap talk, we show that if one player is under a strategy constraint slightly biased towards the efficient outcome, then the best-response dynamic from a generic initial point must approach the efficient outcome, regardless of the constraint for the other player.
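For intuition on the preceding convergence result, a discrete-time analogue is easy to state: in a finite potential game, alternating exact best responses raises the potential at every improving step, so the process must terminate at a pure Nash equilibrium. A minimal sketch follows; the game matrix is an arbitrary example, and discrete updating only approximates the paper's continuous-time dynamic.

```python
import numpy as np

# A two-player common-payoff game: the shared payoff matrix P is
# itself an (ordinal) potential for the game.
P = np.array([[4, 0, 1],
              [0, 3, 0],
              [1, 0, 2]])

def best_response_path(P, a, b):
    """Alternate exact best responses (assuming no payoff ties) until
    neither player can improve. Every improving step strictly raises
    the potential P[a, b], so the path ends at a pure Nash equilibrium."""
    path = [(a, b)]
    while True:
        a2 = int(np.argmax(P[:, b]))   # row player's best response to b
        b2 = int(np.argmax(P[a2, :]))  # column player's best response to a2
        if (a2, b2) == (a, b):
            return path
        a, b = a2, b2
        path.append((a, b))

# From (2, 1) the dynamic settles at (1, 1), a local (not global)
# maximizer of the potential: the kind of limit point the result describes.
print(best_response_path(P, 2, 1))
```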
However, information aggregation can be achieved if the agents have access to some exogenous observations of others' actions that are outside of the principal's control. In this case, the learning dynamics become more interesting: whether information aggregation can be achieved depends on whether and how the beliefs generated by the private signals are bounded, as well as the type of exogenous observations that agents have.

The Ohio State University
(Cost-of-) Information Design    [pdf] (joint work with Yaron Azrieli and Shuo Xu)
Abstract We introduce the cost-of-information design problem, where a decision maker acquires information subject to a cost selected by a designer. We show that when restricted to the family of posterior-separable cost functions, the designer achieves the same level of utility as in the case where she herself chooses the information for the decision maker. The designer fails to achieve the first best if the family of cost functions is further restricted to be invariant to the labeling of the states. We show in an example that the designer induces full information at zero cost when the cost functions are multiples of the average reduction of Shannon entropy. We also introduce competition to the cost-of-information design problem, where two designers simultaneously select the cost of information and the decision maker only acquires information from one of the designers. We show in an example that the designers exhibit Bertrand-like behaviors and the unique equilibrium is for the designers to induce full information at zero cost.

Ohio State University
Termination fee as a sequential screening device    [pdf]
Abstract We consider an intertemporal monopolistic selling environment where the buyers arrive sequentially and the value uncertainty is resolved over time. We show that a contract involving a floor price and a termination fee between the seller and the early-arriving buyer can serve as a sequential screening device: an optimistic buyer accepts an offer with a high price and a high termination fee to avoid fierce future competition; a pessimistic buyer, however, will promise a low price and accept a low break-up fee to avoid “over-purchase”, since they expect a high probability that the realized valuation is low. We confirm analytically and numerically that the seller can raise higher revenue from this selling mechanism than from an optimal static mechanism. This result provides a potential rationale for the use of go-shop negotiation in the M&A market, among other selling procedures with break-up terms. We further demonstrate that the go-shop negotiation, although not fully optimal among all dynamic mechanisms, can be revenue-close to the optimal dynamic mechanism in a large class of parametric distributions.

University of Chicago
Implications of Consumer Data Monopoly    [pdf]
Abstract This paper explores the implications of an informational monopoly. An informational monopolist is able to provide consumer data to the producers and facilitate price discrimination. By examining the revenue-maximizing mechanisms for the informational monopolist, this paper shows that the consumer surplus is always entirely extracted away. Furthermore, it characterizes the optimal mechanisms for the informational monopolist, which feature an upper isolation segmentation.
The characterization of optimal mechanisms further leads to an equivalence result: in terms of the extracted revenue, the producer profit, the consumer surplus and the volume of trade, an informational monopoly is equivalent to a conglomerate monopoly that controls both production technology and consumer data.

University of Georgia
Matching with Complementary Contracts    [pdf] (joint work with Marzena Rostek)
Abstract In this paper, we provide existence results for matching environments with complementarities, such as markets for patent licenses, differentiated products, or multi-sided platforms. Our results apply to both nontransferable and transferable utility settings, and allow for multilateral agreements and those with externalities. Additionally, we give comparative statics regarding the way primitive characteristics are combined to form the set of available contracts. These show the impact of various contract design decisions, such as the application of antitrust law to disallow patent pools, on stable outcomes.

Boston University
Preference, Rationalizability and Robustness in Games with Incomplete Information    [pdf]
Abstract This paper defines the notion of interim correlated rationalizability in the very general class of games with incomplete information. Working with the Epstein-Wang universal type space, our framework is not restricted to the conventional subjective expected utility model and is general enough to accommodate non-expected utility cases and ambiguity-averse players. Interim correlated rationalizability is a natural generalization of the rationalizability concept in the expected utility case, and properties of the concept are studied. In particular, our rationalizability concept characterizes rationality and common knowledge of rationality. Furthermore, we investigate robustness to higher order uncertainty. Interim correlated rationalizability is the strongest solution concept satisfying upper hemicontinuity in the universal type space. Moreover, any rationalizable action can be made uniquely rationalizable by perturbing higher order uncertainty. Finally, we characterize the structure of rationalizable sets. As is the case for the expected utility model, the rationalizable action profile is generically unique even though ambiguity aversion must weakly enlarge the rationalizable set.

Korea University
Membership Mechanism
Abstract This paper studies an environment in which a seller seeks to sell two different items to buyers. The seller designs a membership mechanism that assigns positive allocations to members only. Exploiting the restricted set, the seller finds a revenue-maximizing incentive compatible mechanism. We first establish the optimal allocation rule for this membership mechanism given a regularity condition for a modified valuation distribution reflecting the set, which provides the existence of a member set and the optimal payment rule. The optimal allocation enables us to compare the membership with separate selling of the two items, suggesting conditions under which the membership dominates separate selling: the interplay between the number of bidders and the degree of stochastic dominance of the valuation distributions.

Stony Brook University
Firm Entry Decline, Market Structure and Dominant Firm's Productivity    [pdf]
Abstract Firm entry decline in the US has created concerns regarding job creation, firm churning, resource reallocation and aggregate productivity.
Based on empirical facts regarding concentration and markup trends, this research tries to understand whether increasing market concentration (through the productivity increase of large, dominant firms) may cause the entry decline. To quantitatively evaluate the effect, I use a firm dynamics model which introduces a "dominant firm vs. competitive fringe" framework into the general equilibrium version of Hopenhayn (1992). I find that an increase in the dominant firm's productivity can explain the entry decline of fringe firms.

Center for the Study of Rationality, The Hebrew University of Jerusalem
Strategic use of seller information in private-value first-price auctions    [pdf] (joint work with Todd R. Kaplan)

The Basque Country University
Two solutions for bargaining problems with claims    [pdf] (joint work with M. J. Albizuri, B. Dietzenbacher and J. M. Zarzuelo)
Abstract A bankruptcy problem is an elementary allocation problem in which claimants have individual claims on a deficient estate. In a bankruptcy problem with transferable utility (O'Neill, 1982), the estate and claims are of a monetary nature. These problems are well-studied, both from an axiomatic perspective and a game theoretic perspective. On the other hand, Chun and Thomson (1992) considered bankruptcy problems with nontransferable utility, where the estate can take a more general shape and corresponds to a set of utility allocations. Thus NTU-bankruptcy problems form a natural generalization of the traditional bankruptcy problems. These authors proposed the "proportional solution" for NTU-bankruptcy problems using an axiomatic approach. In this paper, we propose and characterize two solutions for NTU-bankruptcy problems that are closely related to the Nash and Kalai-Smorodinsky bargaining solutions. These two characterizations consist of the traditional axioms used by Nash and by Kalai and Smorodinsky, together with a new axiom called Independence of Higher Claims (IHC). This axiom requires that if in a problem an agent received his/her claim, then he would not receive less if some other agents increased their claims but the estate did not change.

New York University
On Incentive Compatibility in Dynamic Mechanism Design with Exit Option in a Markovian Environment    [pdf] (joint work with Tao Zhang, Quanyan Zhu)
Abstract In this work, we consider a class of dynamic mechanism design frameworks in a Markovian environment described in Pavan et al. (2014) and analyze a direct mechanism model of a principal-agent problem in which the agent is allowed to exit at any period of time. The agent privately observes time-varying information, reports the information to the principal by using a reporting strategy, and chooses a stopping time to exit the mechanism. The principal, on the other hand, chooses decision rules consisting of an allocation rule and a payment rule. In order to influence the agent's stopping decision, the principal designs a termination transfer rule that is delivered only at the stopping time realized by the agent's stopping rule. We focus on the one-period look-ahead (O-LA) stopping rule and construct the payment rule and termination transfer rule by fixing an allocation rule that satisfies a first order condition of incentive compatibility. We obtain the necessary and sufficient conditions for the implementability of the allocation rule by characterizing the one-shot deviation principle and strong monotonicity conditions derived from cyclical monotonicity (Rochet, 1987).
Michigan State University
Bargaining and Reputation with Ultimatums    [pdf] (joint work with Mehmet Ekmekci)
Abstract Two players divide a unit pie. Each player is either justified to demand a fixed share and never accepts any offer below that, or unjustified to demand any share but nonetheless wants as big a share of the pie as possible. Each player can give in to the other player's demand at any time, or can, at a cost, challenge the other player with an ultimatum to let the court settle the conflict. We solve for the equilibrium strategies and reputation dynamics of the game when there is no ultimatum (Abreu and Gul, 2000), when the ultimatum is available to one player, and when the ultimatum is available to both players. Several interesting results follow from the analysis. First, equilibrium dynamics involve non-monotonic probabilities of sending ultimatums when the challenge opportunities do not arise frequently: at first, both players mix between challenging and not challenging when a challenge opportunity arrives; then one player challenges for sure and the other player does not challenge at all; and at last both players do not challenge and resort to a war of attrition. Second, when the challenge opportunities arise sufficiently frequently for both players and when the prior probabilities of being justified are sufficiently small, neither player can build up his or her reputation, and inefficient and infinite delay in bargaining occurs. Third, an unjustified player does not want to have the challenge opportunity, because it destroys his or her possibility of pretending to be justified and weakens his or her commitment power; on the other hand, a justified player strictly prefers to have the challenge opportunity. Finally, the implications overturn classic results on one-sided reputation in Myerson (1991).

Michigan State University
Pre-Matching Gambles    [pdf]

Michigan State University
Overcoming Borrowing Stigma: The Design of Lending-of-Last-Resort Policies    [pdf] (joint work with Yunzhi Hu)
Abstract How should the government effectively provide liquidity to banks during periods of financial distress? During the most recent financial crisis, banks avoided borrowing from the Fed's Discount Window (DW) but bid more in its Term Auction Facility (TAF), although both programs share similar requirements on participation. Moreover, some banks paid higher interest rates in the auction than the concurrent discount rate. Using a model with endogenous borrowing stigma, we explain how the combination of the DW and the TAF increased banks' borrowing and willingness to pay for loans from the Fed. Using micro-level data on DW borrowing and TAF bidding from 2007 to 2010, we confirm our theoretical predictions about the pre-borrowing and post-borrowing conditions of banks in different facilities. Finally, we discuss the design of lending-of-last-resort policies.

Tel Aviv University
Information as Regulation    [pdf] (joint work with Eilon Solan)
Abstract We study dynamic inspection problems where the regulator faces several agents. Each agent may benefit from violating certain legal rules, yet by doing so the agent faces a penalty if the violation is detected by the regulator. There is a constraint on the regulator's inspection resources and he cannot inspect all agents simultaneously. The regulator's goal is to minimize the (discounted) number of violations, and he has commitment power. We compare two monitoring structures.
Under public monitoring, the inspector publicly announces his observations after each period (i.e., the identity of the inspected agent, if any, and the inspection result), whereas under private monitoring, the inspector conceals his observations. We show that announcing his observations may, in fact, hurt the regulator, and we identify conditions under which this occurs.

The University of Chicago
Perception Bias in Tullock Contest (joint work with Jaimie Lien, Hangcheng Zhao, Jie Zheng)
Abstract Players in a contest setting sometimes hold misperceptions about their winning chances. To understand the effects of such psychological biases on competitive behavior and outcomes, we analyze a two-player Tullock contest with contestants who may have perception biases about the effectiveness of their efforts. In the benchmark model in which only one player has a perception bias, we characterize the unique equilibrium, in which the other player benefits at the biased player's expense, and both individual effort and total effort are decreasing in the severity of the perception bias, in the direction of either underconfidence or overconfidence. If both players have perception biases, multiple equilibria may exist for underconfident contestants, and the monotonic relationship between bias and effort no longer holds. We additionally depart from the benchmark case by allowing players' valuations of the prize to differ. Our results show a surprising non-monotonic relationship between the total effort and the valuations of the players. The results contribute to the behavioral contest literature by offering a better understanding of how individuals behave under a psychological bias.

Boston University
Optimal Contracts with Learning from Bad News    [pdf]
Abstract I study a continuous-time principal-agent model in which the agent's success is not directly observable and can only be learned from bad news. A public breakdown arrives at some Poisson rate when the agent has not achieved a success. Once a success has been achieved, no breakdowns will ever occur. In the optimal contract where the agent observes his own success, the agent exerts full effort until a success or a breakdown. The principal makes a bonus payment after the report of success with some delay, which can be implemented as a stock option. Before the report, the principal at first makes no payments and then offers a constant wage starting from some point. In the optimal contract where the agent does not observe his own success, the effort is frontloaded but inefficient. The reward scheme can take three different forms depending on parameter values.

Boston University
Dynamic Delegation with Adverse Selection    [pdf]
Abstract I study a dynamic model of delegated decision making with adverse selection and imperfect monitoring. Each period, a principal may delegate to an agent who has better information. The agent's information is also imperfect, and the accuracy of the information depends on the ability of the agent. In the optimal mechanism where the agent's ability is publicly observable, the principal delegates at the beginning and the agent behaves optimally for the principal. Eventually the principal either promises to delegate forever or stops delegating, depending on the history. When the agent's ability is private information, I characterize the optimal mechanism of a two-type model. The principal offers a pooling mechanism if both types are relatively high.
If both types are relatively low, the principal optimally separates the different types of the agent by offering different mechanisms.

George Mason University
Competition with Indivisibilities and Few Traders    [pdf] (joint work with Cesar Martinelli, Jianxin Wang)
Abstract We study minimal conditions for competitive behavior with few agents. We adapt the strategic market game by Dubey (1982), Simon (1984) and Benassy (1986) to an indivisible good environment. We show that the Dubey-Simon-Benassy equivalence of Nash equilibrium outcomes and competitive outcomes holds in this setting. Furthermore, we give necessary and sufficient conditions for all the Nash equilibrium outcomes with active trading to be competitive, which can be checked directly by observing the set of competitive equilibria. We test our strategic market game in laboratory experiments under minimal environments that do and do not guarantee competitive outcomes of Nash equilibria with active trading, and compare the performance of a static and a dynamic institution. We find that the dynamic institution achieves higher efficiency than the static one and leads to competitive results when all Nash equilibrium outcomes of the market game are competitive. The dynamic institution also allows a monopoly, if present, to extract more surplus than the static institution.

Tsinghua University
Information Design in Simultaneous All-pay Auction Contests    [pdf] (joint work with Zhonghong Kuang, Hangcheng Zhao, Jie Zheng)
Abstract We study the information design problem of the contest organizer in a simultaneous 2-player, 2-type all-pay auction contest environment, where players have limited information about their own or others' types or valuations of the prize. The contest organizer can send a public message to the contestants about the type distribution to persuade them to exert higher effort. We allow the players' ex-ante symmetric type distributions to be correlated, and the information disclosure policy to take the stochastic approach of Bayesian persuasion, which is a generalization of the traditional information disclosure policy. The optimal design, the structure of which depends on the degree of correlation of players' types, is completely characterized and shown to achieve higher effort than the type-dependent information disclosure policy. When players' types are private information, if there is a strong positive correlation, the optimal design consists of two posteriors, with one representing a perfect positive correlation and the other representing a positive correlation identified by a cutoff condition; if there is a weak positive correlation or a negative correlation between types, the optimal design consists of two posteriors, with one such that both players being high types is impossible and the other representing a positive correlation identified by the cutoff condition. We also consider the case in which only the designer knows players' types and the case in which the type information is asymmetric between the two players. Welfare comparisons are conducted across different informational setups. Our work is the first study on the full characterization of information design for games with two-sided asymmetric information and an infinite action space.

Stanford University
Time preference and information acquisition    [pdf]
Abstract In this paper, we study how temporal discounting determines sequential decision making.
We analyze the decision time distributions induced by all sequential information acquisition strategies that (1) implement a target information structure and (2) satisfy a constraint on the flow informativeness of the signal. The main result is that a decisive Poisson signal creates the most dispersed decision time distribution (in the mean-preserving-spread order), and pure accumulation of information creates the least dispersed. This implies that for a decision maker with a convex discount function, the decisive Poisson signal is the optimal learning strategy.

Yale University
Information Structure and Price Competition (joint work with Mark Armstrong and Jidong Zhou)
Abstract This paper studies how product information possessed by consumers (e.g., from product reviews, platform recommendations, etc.) affects competition between firms. We consider symmetric firms which supply differentiated products and compete in prices. Before purchase, consumers observe a private signal of their valuations for the various products. For example, the signal might reveal their valuations perfectly or not at all, or only inform them of the ranking of products. We consider a fairly general class of signal structures which induce a symmetric pure-strategy pricing equilibrium, and derive the signal structure within this class which is optimal for firms or for consumers. The key trade-off is that with more detailed product information, consumers are better able to buy their preferred products, but at the same time firms have more market power and charge higher prices. The firm-optimal signal structure induces consumers to view the products as being sufficiently differentiated, while the consumer-optimal information structure induces choosy consumers to buy their preferred product but pools other consumers by providing little information to them in order to intensify price competition. The firm-optimal information structure often does not cause mismatch between consumers and products and so maximizes total welfare, while the consumer-optimal information structure often causes mismatch and does not maximize total welfare. We also derive an upper bound for consumer surplus across all symmetric information structures, which shows that allowing for mixed-strategy pricing equilibria could increase consumer surplus only slightly.

New York University
Early Selections and Affirmative Actions in the High School Admission Reform in China    [pdf] (joint work with Tong Wang)
Abstract In the past decade, high school admissions in China have experienced dramatic changes. One of these changes is the adoption of a Chinese version of affirmative action in the admission procedure. The Chinese affirmative action does not involve a fixed type-specific quota system, but rather a flexible, adjusted priority-based school choice method that has gained much popularity over time. Specifically, in the admission procedure, several designated students receive a privilege (lump-sum extra scores) in addition to their exam scores. Two popular procedures are used to determine who receives this privilege: one involves an early selection before the normal admission procedure, and the other adjusts the priority based on the rank-ordered list submitted by schools in the normal admission procedure. In this project, we show that both mechanisms have flaws and generate undesired results.
We also propose a strategy-proof and stable mechanism that could eliminate the flaws in real-life admission procedures and preserve the flexibility without imposing hard type-specific quotas. Moreover, we combine new administrative data with a preference survey from China to test the existing matching mechanisms. Considerable evidence confirms that several schools take advantage of the existing mechanism and cause significant welfare loss for their students.

Boston University
Dynamic Coordination with Informational Externalities    [pdf]
Abstract I study observational learning in a two-player investment timing game with coordination. Each player is endowed with one opportunity to make a reversible investment, whose value depends on an ex ante unknown state. Each player learns about the return of the investment project by observing a private signal and the actions of the other player. The return of the project is realized at the time when the two players coordinate on joint investment. I characterize the unique symmetric equilibrium of this game. The equilibrium exhibits waves of investment in the initial stage, and delayed investment and disinvestment in the continuation play. As the precision of the signal distributions increases, the equilibrium distributions of players' posterior beliefs about the state when they invest or disinvest are ranked by first-order stochastic dominance, and the speed of learning increases.

NUS
Sign equivalent transformation and network games    [pdf] (joint work with Yves Zenou and Junjie Zhou)
Abstract Many equilibrium models in economics and operations research can be formulated as variational inequalities (VI). In this paper, we introduce an operation called sign equivalent transformation (SET) on VIs, which has the property of preserving the set of solutions on any rectangular domain. As applications, we revisit many classical network games in the economics literature, including games with uni-dimensional or multi-dimensional strategies, games with strategic complements or substitutes, games with linear or nonlinear best-reply functions, etc. For each of these games, by identifying certain sign equivalent transformations (SETs), we are able to transform the original VI problem into a much simpler one. The new VI problem (and not the original one) satisfies an integrability condition, which enables us to reformulate it as a minimization program. As a by-product, we explicitly construct a best-response potential function of the original game, from which various properties of Nash equilibrium, such as existence, uniqueness and stability, can easily be derived. Moreover, we develop and analyze new classes of games played on networks using SET. Lastly, we discuss several applications of SET beyond network games.

The Pennsylvania State University
Creative Contests --- Theory and Experiment    [pdf]
Abstract This paper introduces "creative contests", in which the criterion for ranking contestants is not fully specified in advance. Examples include architecture contests and logo design contests. Both pure-strategy and mixed-strategy equilibria might emerge, and they are characterized by solutions to a system of non-linear and differential equations. I then consider a case where the organizer has private information about his preference and makes strategic decisions about information disclosure. I find that it is beneficial for him to disclose information when the bidding cost is low and to conceal it when the bidding cost is high.
Lastly, I conduct a lab experiment. Results are largely consistent with the model's predictions.

Universitat Pompeu Fabra
Rationalizability, Observability and Common Knowledge    [pdf] (joint work with Antonio Penta)
Abstract We study the strategic impact of players' higher order uncertainty over the observability of actions in general two-player games. More specifically, we consider the space of all belief hierarchies generated by the uncertainty over whether the game will be played as a static game or with perfect information. Over this space, we characterize the correspondence of a solution concept which represents the behavioral implications of Rationality and Common Belief in Rationality (RCBR), where 'rationality' is understood as sequential whenever a player moves second. We show that such a correspondence is generically single-valued, and that its structure supports a robust refinement of rationalizability, which often has very sharp implications. For instance: (i) in a class of games which includes both zero-sum games with a pure equilibrium and coordination games with a unique efficient equilibrium, RCBR generically ensures efficient equilibrium outcomes; (ii) in a class of games which also includes other well-known families of coordination games, RCBR generically selects components of the Stackelberg profiles; (iii) if common knowledge is maintained that player 2's action is not observable (e.g., because player 1 is commonly known to move earlier, etc.), in a class of games which includes all of the above, RCBR generically selects the equilibrium of the static game most favorable to player 1.
# A solution is obtained by mixing two solutions of the same electrolyte with pH = 5 and pH = 3 respectively. The resulting solution has pH

(a) 2.2 (b) 4.0 (c) 8.0 (d) 3.3

Toolbox:
• $pH = -\log [H^+]$

If $pH = 5$, then $[H^+] = 10^{-pH} = 10^{-5}$
If $pH = 3$, then $[H^+] = 10^{-3}$
Assuming equal volumes are mixed, the total $[H^+] = \frac{10^{-5} + 10^{-3}}{2} = \frac{0.01 \times 10^{-3} + 1 \times 10^{-3}}{2} = \frac{(1 + 0.01) \times 10^{-3}}{2}$
$\therefore [H^+] = \frac{1.01 \times 10^{-3}}{2} = 0.505 \times 10^{-3}$
$pH = -\log(0.505 \times 10^{-3}) = 3.3$

The answer is (d).
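For readers who want to verify the arithmetic, here is a quick numerical check in Python (assuming, as the solution does, that equal volumes of the two solutions are mixed):

```python
import math

# Hydrogen ion concentrations of the two solutions
h1 = 10 ** -5  # pH = 5
h2 = 10 ** -3  # pH = 3

# Mixing equal volumes averages the concentrations
h_mix = (h1 + h2) / 2

pH = -math.log10(h_mix)
print(round(pH, 1))  # prints 3.3
```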
# MHacks 11 (and the customary apologies)

I have been kind of terrible about keeping up this blog. Sorry about that. That said, a lot has happened since the last blog post almost a full year ago. For one thing, I am a college kid now! I also had a great time as a tech intern for Capital One over the summer (something I'm definitely going to be writing about very soon). But let's get back to this story.

MHacks 11 was held at the University of Michigan from October 12 - 14. I convinced my friend, David, to fly up to Ann Arbor all the way from Charlottesville for the event. What follows is the story of how David (UVA '19), Renee (also U-M '22), and I did our parts in contributing to the eerie convergence between the Simpsons and reality.

You don't need to watch the whole video (though I highly encourage it). Basically, Lisa and a team of programmers write an artificial intelligence (Conrad) to predict the outcomes of social media posts. Thusly inspired, we made a Chrome extension which predicts the reactions a post on Facebook would get.

# Step 1: Data Collection

Fortunately for us, @minimaxir on GitHub had a relatively large dataset of public posts from large Facebook pages, and their reactions. The dataset is linked here. I only found out later that he also did our exact same project, but whatever. Anyway, that was data collection.

# Step 2: Data Preprocessing

We chose only to look at posts with text content (so we ignored shared links, photos, videos, etc.). We also only considered posts less than 1000 characters long, and which had more than 11 non-like reacts.

# Step 3: Modelling

We spent most of Saturday doing preprocessing, and then eventually realized we still had to actually build a model. We started off by trying a random forest regression with bag of words. Results were not great. After tweaking the hyperparameters around a bit, it became clear results weren't improving. That's when we took the leap to using gensim and doc2vec to generate sentence embeddings. This meant that, instead of just using word frequency, we used a much more complex model to encode posts which took into account word order. This yielded much better results, especially after I also normalized the target output.

# Step 4: Building the Extension

At this point, it was the morning of demos, and while we had a halfway decent model, the Chrome extension was non-existent. So came the mad scramble, in which I was literally writing code as we walked to the IM building. I wrote a small Flask API backend, which the Chrome extension was supposed to call. Initially, I didn't want to deal with the Chrome messaging protocol, so I tried to make the call directly from the content script (which ran as JavaScript code on the client with no extra permissions). We ran into a roadblock, where cross-origin requests were forbidden. I tried to overcome this by editing my /etc/hosts file to redirect some ancillary Facebook domain to localhost. This almost worked, except Facebook also expected HTTPS-only requests, so I was stuck with having to do the correct thing of using Chrome's messaging protocol. By this point, I'm writing the code while we are standing at our demo table. It turned out to actually be super easy to use the messaging protocol, but that didn't stop me from screwing up 11 times before I finally got everything working. This was me when I finally got the Chrome extension to work.
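A minimal sketch of what a Flask backend like this might look like (illustrative only, not the actual hackathon code; it assumes a trained gensim `Doc2Vec` model saved at the hypothetical path `model.d2v`):

```python
# app.py - sketch of a reaction-prediction API
from flask import Flask, jsonify, request
from gensim.models.doc2vec import Doc2Vec

app = Flask(__name__)
model = Doc2Vec.load("model.d2v")  # hypothetical path to the trained model

@app.route("/predict", methods=["POST"])
def predict():
    text = request.get_json()["text"]
    # Embed the post text the same way the training documents were tokenized
    vector = model.infer_vector(text.lower().split())
    # A trained regressor (e.g., the random forest) would map `vector` to
    # predicted reaction counts; this sketch just returns the embedding size.
    return jsonify({"embedding_dim": len(vector)})

if __name__ == "__main__":
    app.run(port=5000)
```

The Chrome extension's background script would then POST the post text to this endpoint and relay the response to the content script over Chrome's messaging protocol.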
# Wrap-Up

Unfortunately, I have no screenshots of the actual app working, but if you want to run the janky code for yourself, it's all open source on GitHub. I learned a lot through this hackathon. It was the first hackathon project that I did with a heavy data science workflow. This hackathon was a blast, and David and Renee were awesome teammates.
# Integrate

• April 28th 2010, 01:01 PM
JohnDoe
Integrate
Hello everyone, while I was studying integrals I came across a question I could not figure out how to approach. Here it is:

$\int \cos(\ln(x))\,dx$

How do I solve this step by step? Thanks
• April 28th 2010, 01:03 PM
Substitute $x = e^u$ and then use integration by parts. Integration by parts is the reverse of the product rule. Consider $\int x e^x dx$. To integrate by parts we need to follow the form $\int u\, dv = uv - \int v\, du$. So let $u=x$ and $dv = e^x dx$. Differentiating $u$ and integrating $dv$ gets us $du=dx$ and $v=e^x$. So now plug in the parts, thus making $\int x e^x dx = x e^x - \int e^x dx$, with the answer being $e^x (x-1) + C$.
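Following the hint through for the original integral: with the substitution $x = e^u$, we have $dx = e^u\,du$, so

$\int \cos(\ln x)\,dx = \int e^u \cos u\,du$

Integrating by parts twice (the original integral reappears and can be solved for) gives

$\int e^u \cos u\,du = \frac{e^u}{2}(\cos u + \sin u) + C$

and substituting back $u = \ln x$ yields

$\int \cos(\ln x)\,dx = \frac{x}{2}\left(\cos(\ln x) + \sin(\ln x)\right) + C$

which can be verified by differentiating.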
# tex input or include another tex file

In a tex file, I need to draw a system diagram, so I put it in a separate tex file, diagram.tex:

\documentclass[tikz]{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
\draw (0,0) -- (1,1);
\end{tikzpicture}
\end{document}

and then in main.tex:

\documentclass[10pt, a4paper]{article}
\usepackage[UTF8]{ctex}
\begin{document}
\section{sectionA}
\include{diagram} % I also tried \input; same error. I wish the diagram.tex figure could be inserted at this place.
\subsection{subsectionA}
\section{sectionB}
\end{document}

Besides: running diagram.tex alone in LaTeX gives:

! LaTeX Error: File `standalone.cls' not found. Type X to quit or <RETURN> to proceed, or enter new name. (Default extension: cls) Enter file name: ! Emergency stop. <read > \usepackage

So the question would be: how do I insert a standalone tex file? And is it possible to compile a standalone tex file on its own? I am also studying tikz, and the demos I found mostly use standalone.

• I think the answer is in this post: tex.stackexchange.com/questions/32127/standalone-tikz-pictures – Stan Feb 7 '18 at 7:18
• it is possible (but unnecessarily complicated) to make that work but it is much simpler to just have the tikzpicture in a separate file (no \documentclass etc) then you can simply \input it. – David Carlisle Feb 7 '18 at 7:48
• @DavidCarlisle make what work but complicated? – Tiina Feb 7 '18 at 8:16

Answer:
• \documentclass[tikz]{standalone} loads tikz, so there is no need to load it again with \usepackage{tikz}.
• in the main document you need to load:
  • the standalone package, for stripping out the preamble of your diagram file,
  • tikz, with the necessary tikz libraries, i.e., all packages used in the included document.

diagram.tex:

\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
\draw (0,0) -- (1,1);
\end{tikzpicture}
\end{document}

and main.tex:

\documentclass[10pt, a4paper]{article}
\usepackage[UTF8]{ctex}
\usepackage{standalone}
\usepackage{tikz}
\begin{document}
\section{sectionA}
\input{diagram}
\subsection{subsectionA}
\section{sectionB}
\end{document}

• are you suggesting using input instead of include – Tiina Feb 7 '18 at 8:38
• @Tiina, yes. I assume that the included file should not start on a new page (this happens with \include). – Zarko Feb 7 '18 at 8:43
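For comparison, the simpler route from David Carlisle's comment: keep only the picture code (no \documentclass, no preamble) in a separate file, say diagram-body.tex (the file name here is just for illustration), and \input it. No standalone package is involved at all:

% diagram-body.tex -- just the picture, no preamble
\begin{tikzpicture}
\draw (0,0) -- (1,1);
\end{tikzpicture}

% main.tex
\documentclass[10pt, a4paper]{article}
\usepackage[UTF8]{ctex}
\usepackage{tikz} % tikz must now be loaded by the main document
\begin{document}
\section{sectionA}
\input{diagram-body} % the picture appears right here
\end{document}

The trade-off is that diagram-body.tex can no longer be compiled on its own, which is exactly the convenience the standalone class adds.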
# Ricci tensor of the orthogonal space

1. May 27, 2013

### PLuz

While reading this article I got stuck at Eq. (54). I've been trying to derive it but I can't get their result. I believe my problem is in understanding their hints. They say that they get the result from the Gauss embedding equation and the Ricci identities for the 4-velocity, $u^a$. Is the Gauss equation they refer to the one in the wiki article?

Looking at the terms that appear in their equation, it looks like the Raychaudhuri equation is to be used in the derivation in order to get the density and the cosmological constant, but even though I realize this I can't really get their result. Can anyone point me in the right direction? Thank you very much.

Note: the reason why I'm trying so hard to prove their result is that I wanted to know if it would still be valid if the orthogonal space were 2-dimensional (aside from some constants). It appears to be the case, but to be sure I needed to be able to prove it.

2. May 28, 2013

### Bill_K

Yes, the Gauss equation that they're referring to is the same Gauss equation mentioned in the Wikipedia article, relating the Riemann tensor of a surface to its second fundamental form. The second fundamental form, in turn, describes the embedding of the surface and can be expressed in terms of the kinematics of the normal congruence. If you haven't already, I suggest you look up the cited articles, refs 5 and 6 by Ehlers and Ellis, where this relationship is proved.

3. May 28, 2013

### George Jones

Staff Emeritus
I agree with Bill. This kind of "legwork" should be almost second nature. Another place to look is section 6.3 "The other Einstein field equations" in the new book "Relativistic Cosmology" by Ellis, Maartens, and MacCallum.
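For reference, the Gauss embedding equation in the 3+1 form relevant to this thread relates the curvature of the space orthogonal to $u^a$ to projections of the spacetime curvature and to the extrinsic curvature $K_{ab}$. In one common convention (e.g., Wald's; sign conventions differ between references, so treat this as indicative rather than definitive):

$${}^{(3)}R_{abc}{}^{d} = h_a{}^{f}\, h_b{}^{g}\, h_c{}^{k}\, h^{d}{}_{j}\, R_{fgk}{}^{j} - K_{ac}K_b{}^{d} + K_{bc}K_a{}^{d},$$

where $h_{ab} = g_{ab} + u_a u_b$ projects onto the orthogonal space. Contracting this and eliminating the spacetime curvature terms via the Einstein equations is what brings in the energy density and the cosmological constant, consistent with the Raychaudhuri-like terms the original poster noticed.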
# Math Help - Discriminants

1. ## Discriminants

Hi

For the question: Find all values of p so that x^2 - 2px + p - 2 = 0 has one positive root and one negative root. What is the condition on the discriminant for a positive and a negative root?

Thanx

2. Originally Posted by xwrathbringerx
Hi
For the question: Find all values of p so that x^2 - 2px + p - 2 = 0 has one positive root and one negative root. What is the condition on the discriminant for a positive and a negative root?
Thanx
You should be able to use the quadratic formula to get $x = p \pm \sqrt{p^2 - p + 2}$. So you require the values of p that satisfy the inequality $p < \sqrt{p^2 - p + 2}$ (why?)

3. Originally Posted by mr fantastic
So you require the values of p that satisfy the inequality $p < \sqrt{p^2 - p + 2}$ (why?)
Hmmmm I have no clue. Why exactly does it have to satisfy that specific inequality?

4. Hi

The product of the roots (real or complex) of the equation $ax^2 + bx + c = 0$ is $\frac{c}{a}$.

Therefore $ax^2 + bx + c = 0$ has 2 real roots, with one positive and one negative, if both conditions are realised:
(i) $b^2 - 4ac \geq 0$
(ii) $\frac{c}{a} < 0$

Condition (ii) implies that a and c have different signs; then their product is negative. It means that condition (ii) implies condition (i), and therefore condition (ii) alone is sufficient.

In your specific case, $x^2 - 2px + p - 2 = 0$ has 2 real roots, with one positive and one negative, if $p-2 < 0$, i.e., $p < 2$.

5. Originally Posted by xwrathbringerx
Hmmmm I have no clue. Why exactly does it have to satisfy that specific inequality?
Look at the solution for x. It has the form $x = p \pm D$, that is, $x = p + D$ and $x = p - D$, where $D = \sqrt{p^2 - p + 2}$.

$x = p + D$ is always positive, so that's the positive solution. The negative solution therefore has to come from $x = p - D$. So you require $p - D < 0 \Rightarrow p < D \Rightarrow p < \sqrt{p^2 - p + 2}$.
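A quick check that condition (i) takes care of itself in this problem: the discriminant of $x^2 - 2px + p - 2$ is

$\Delta = 4p^2 - 4(p-2) = 4(p^2 - p + 2)$

and $p^2 - p + 2 > 0$ for every real $p$ (viewed as a quadratic in $p$, its own discriminant is $1 - 8 = -7 < 0$), so the roots are always real. The only binding requirement is that the product of the roots be negative, $p - 2 < 0$, which gives $p < 2$, in agreement with the inequality $p < \sqrt{p^2 - p + 2}$ above.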
# Tag Info

@DeltaLima Help me out please. This formula is not working. I couldn't resolve this; can you please give a numeric example? I solved it by hand and it is not working. Unfortunately I could not put up an image here, so I am posting what I solved as an answer, where I can upload an image.

All this calculated stuff is impressive. I just figure the air density at 80,000 ft is lower, so it doesn't have enough pressure to force the airspeed indication up, since it takes air pressure in the pitot tube to make the airspeed indicator read higher. Even though it only shows 200 knots, its speed over the ground is much faster. So I'm ...

This is an addendum to @Peter's accepted answer, which is the correct answer as far as the B737 is concerned. Crossover speed is not an industry-standard nomenclature. I suspect this term, as it's defined here, is restricted to the B737 program. Not every aircraft has a "crossover speed" in its current definition. 1. B737 with single rudder PCU: As noted in ...
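For a rough sense of the numbers in the second fragment: true airspeed relates to equivalent airspeed (approximately what the indicator shows, ignoring compressibility and instrument error, a strong simplification at these speeds) via $TAS = EAS\sqrt{\rho_0/\rho}$. A toy calculation with an assumed density value:

```python
import math

rho0 = 1.225  # ISA sea-level air density, kg/m^3
rho = 0.044   # assumed air density near 80,000 ft, kg/m^3 (rough ISA estimate)

eas_knots = 200.0  # indicated (roughly equivalent) airspeed

# Incompressible-flow relation: TAS = EAS * sqrt(rho0 / rho)
tas_knots = eas_knots * math.sqrt(rho0 / rho)
print(f"true airspeed ~ {tas_knots:.0f} knots")  # roughly 5x the indicated value
```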
Chapter 5: Waves and periodic motion (C7869116)

# 1 Periodic motion

Periodic motion is motion repeated in regular intervals known as periods. Frequency is the number of occurrences of periodic motion per unit time.

Formative learning activity: Maps to RK5.B. What are characteristics of waves?

# 2 Wave characteristics

A wave is an oscillation that travels through space over time. Waves are unique because energy is transferred, but there is no permanent displacement of particles. There are three types of waves:

• Mechanical waves, which require a medium to propagate. There are two types of mechanical waves:
• Transverse waves, where the vibration is perpendicular to the direction the wave is propagating. For example, a vibrating string carries a transverse wave
• Longitudinal waves, where the vibration is parallel and antiparallel to the direction the wave is propagating. For example, a sound wave is a longitudinal wave
• Electromagnetic waves, which can propagate through a vacuum
• Matter waves (aka de Broglie waves), the wave associated with any particle, as all particles exhibit wave-particle duality. It is defined by $\lambda=\dfrac{h}{p}$, where $h$ is Planck's constant

Basic waves can be described mathematically by the equation $u(x,t)=A\sin(kx-\omega t+\phi)$, where the wavenumber is $k=\dfrac{2\pi}{\lambda}$ and the angular frequency is $\omega=2\pi f$. Notable characteristics of the wave include:

• Wavelength ($\lambda$), the distance over which the wave repeats [from peak to peak, or trough to trough]; it has the SI unit meters
• Frequency ($f$), the number of wavelengths repeated over one second; it has the units cycles per second, or Hertz
• Period ($T$), the reciprocal of frequency, $T=\dfrac{1}{f}$; it is the time required for an entire wavelength to cycle and has the units seconds
• Amplitude ($A$), the maximum oscillation of the wave
• Phase ($\phi$), the lateral shift. Note that because of the nature of the sine function, $2\pi=360^{\circ}$ represents an entire wavelength. Thus, a shift of half a wavelength is $\pi=180^{\circ}$

Water waves are surface waves, which are mechanical waves propagating along the interface of different media, in this case water and air. In shallow water, where the wavelength is much greater than the depth, the velocity of the wave can be approximated as $v_{shallow}=\sqrt{gd}$, where $g$ is the gravitational constant and $d$ is the depth of the water. Thus, in shallow water, velocity grows with the square root of depth. In deep water, where the wavelength is much smaller than the depth, the velocity of the wave is $v_{deep}=\sqrt{\dfrac{g\lambda}{2\pi}}$. Thus, in deep water, velocity grows with the square root of wavelength. Note that since velocity depends on wavelength [and therefore frequency], deep water is a dispersive medium.

Simple harmonic motion is a type of periodic motion where the restoring force is directly proportional to displacement, $F=-kx$.
Using calculus, it can be found that the acceleration is $a=-\omega^2 x$, meaning that acceleration is directly proportional to displacement and oppositely directed. Mechanical energy in simple harmonic motion is conserved, as it perfectly converts potential energy to kinetic energy, and vice versa. Examples of simple harmonic motion include a mass on a spring and a pendulum (when the angle is small). For a mass on a spring, the period is $T=2\pi \sqrt{\dfrac{m}{k}}$, where $m$ is the mass attached and $k$ is the spring constant. For the pendulum, the period is $T=2\pi \sqrt{\dfrac{l}{g}}$, where $l$ is the length of the string and $g$ is gravitational acceleration. Note that the period of the pendulum is independent of the mass or amplitude (i.e., in this case, how high it swings).

The wave velocity is defined as $v=f\lambda$ in a non-dispersive medium. Dispersion is when velocity depends on frequency, thereby causing waves of different frequencies to travel at different speeds. In contrast, in a non-dispersive medium, all parts of the wave (regardless of frequency) travel at the same speed. An example of dispersion is the separation of white light into its components, as seen in the rainbow. Note therefore that in a non-dispersive medium, $v$ is constant for a given medium. For light, the formula is rewritten as $c=f\lambda$ to emphasize that the velocity of the wave is the constant $c$, the speed of light. Thus, it is $f$ and $\lambda$ that are inversely proportional.

Velocity is thus dependent on the medium's physical properties, namely its elastic and inertial components, in accordance with $v=\sqrt{\dfrac{elastic}{inertial}}$. Elasticity is defined as the property of materials to return to their original shape after deformation. Inertia is defined as an object's tendency to remain in its present state of motion. Whereas the elastic component of a medium acts to increase velocity, the inertial component acts to decrease velocity. For example, in a vibrating string, the velocity is defined as $v=\sqrt{\dfrac{T}{\mu}}$, where $T$ is the tension of the string (the elastic component) and $\mu=\dfrac{m}{L}$ is the linear density of the string (the inertial component).

Formative learning activity: Maps to RK5.A. What is periodic motion?
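As a quick numerical illustration of the period and wave-speed formulas above (the values are arbitrary, chosen only for the example):

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

# Period of a mass on a spring: T = 2*pi*sqrt(m/k)
m, k = 0.5, 200.0          # mass in kg, spring constant in N/m
T_spring = 2 * math.pi * math.sqrt(m / k)

# Period of a simple pendulum (small angles): T = 2*pi*sqrt(l/g)
l = 1.0                    # string length in m
T_pendulum = 2 * math.pi * math.sqrt(l / g)

# Speed of a transverse wave on a string: v = sqrt(T/mu)
tension = 80.0             # N (the elastic component)
mu = 0.01                  # kg/m, linear density (the inertial component)
v_string = math.sqrt(tension / mu)

print(f"spring period  : {T_spring:.3f} s")
print(f"pendulum period: {T_pendulum:.3f} s")
print(f"wave speed     : {v_string:.1f} m/s")
```

Note that doubling the pendulum mass leaves its period unchanged, while quadrupling the string tension doubles the wave speed, consistent with the square-root dependences above.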
### Chocolate

There are three tables in a room with blocks of chocolate on each. Where would be the best place for each child in the class to sit if they came in one at a time?

### Four Triangles Puzzle

Cut four triangles from a square as shown in the picture. How many different shapes can you make by fitting the four triangles back together?

### Cut it Out

Can you dissect an equilateral triangle into 6 smaller ones? What number of smaller equilateral triangles is it NOT possible to dissect a larger equilateral triangle into?

# Let Us Divide!

##### Age 7 to 11 Challenge Level:

Show us how you could answer the questions using
- words?
- pictures?
- numbers?
- objects?
- other ways?

It's Jola's birthday and she is having a party. She has $24$ cupcakes to share equally between $3$ plates for the party. How many cakes will go on each plate?

There are $8$ children coming to the party. They are all going to the cinema. How many cars will they need to take them there? Each car will hold $4$ children, and they will each need a driver too.

Jola is going to give everyone some chocolate eggs to take home at the end of the party. They fit into egg boxes which hold $6$ eggs each. Will $50$ eggs be enough for each of the $8$ visitors to have a box to take home?
# Sufficient statistics, MLE and unbiased estimators of uniform type distribution

Let $X_1, \dots, X_n$ denote a random sample of size n from the probability distribution with pdf:

$$f_X(x|\theta_1, \theta_2) = \frac{1}{\theta_2 - \theta_1} \ I(x)_{[\theta_1,\theta_2]} \ I(\theta_1)_{(-\infty,\theta_2)} \ I(\theta_2)_{(\theta_1,\infty)}\;.$$

(1) Find a pair of sufficient statistics for $(\theta_1, \theta_2)$.

$\bf{My \ thoughts:}$ This wasn't too bad. I got $(X_{(1)}, X_{(n)})$ for this part.

(2) Find the maximum likelihood estimator $(\hat{\theta_1}, \hat{\theta_2})$ for $(\theta_1, \theta_2)$.

$\bf{My \ thoughts:}$ Thinking I need to use monotone functions since it has 2 parameters and the variables are part of the interval. I believe that $\frac{X_{(1)} + X_{(n)}}{2}$ will become one of my estimators.

(3) Show that $\frac{X_{(1)} + X_{(n)}}{2}$ is an unbiased estimator for $\frac{\theta_1 + \theta_2}{2}$.

$\bf{My \ thoughts:}$ I think I will need to use the Cramer-Rao Lower Bound in some form, but I am not quite sure if that is right.

(4) Construct an unbiased estimator for $\theta_2 - \theta_1$.

$\bf{My \ thoughts:}$ Very stuck on this part, but I think I can use some information from previous parts to help me.

Any help is greatly appreciated.

• (2) What does the midrange have to do with either of the interval endpoints? You ought to rethink this one. (3) is trivial because the distribution of the midrange is symmetric under the transformation $x\to \theta_1+\theta_2-x$: this transformation would negate any bias but it does not change the distribution of the midrange, showing its bias is equal to its negative. (4) What is the expectation of the sample range? If it is a function of $n$ and $\theta_2-\theta_1$, you should be able to adjust the sample range to have zero bias. – whuber Mar 11 '13 at 23:32

(2) "I believe that $\frac{X_{(1)} + X_{(n)}}{2}$ will become one of my estimators." Why do you believe that, rather than something more directly related to your answer to (1)? What would you use to estimate just the first parameter? What would you use to estimate just the second?

(3) "My thoughts: I think I will need to use the Cramer-Rao Lower Bound in some form..." The question relates to expectation, rather than variance.

(4) I suggest you use the sufficient statistics to construct an estimator with good properties, and then find its bias. Then figure out what simple modification to that estimator will have bias 0.

Note that for the MLE, the likelihood $(\theta_2-\theta_1)^{-n}$ is always decreasing in $\theta_2$ and increasing in $\theta_1$, but we must have $\theta_1\leq X_{(1)}\leq X_{(n)}\leq\theta_2$.

Also, for (3) you need the distribution of each sufficient statistic. As a hint, if the minimum is greater than $t$ then all values are greater than $t$. Similarly, if the maximum is less than $r$ then all values are less than $r$. This gives you the CDF; differentiate to get the pdf, and then you can work out the expectations and hence the bias.

For (4), if you define $R=\theta_2-\theta_1$, note that $R$ must be at least as big as the range you have observed: $R\geq X_{(n)}-X_{(1)}$. Also, $X_{(n)}-X_{(1)}$ is a sufficient statistic for $R$, so the best unbiased estimator must be a function of $r_n=X_{(n)}-X_{(1)}$. The bias you work out in (3) can be used to find the bias of $r_n$.
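To make the hints concrete, the standard order-statistic expectations for a sample of size $n$ from the uniform distribution on $[\theta_1, \theta_2]$ are

$$E[X_{(1)}] = \theta_1 + \frac{\theta_2-\theta_1}{n+1}, \qquad E[X_{(n)}] = \theta_2 - \frac{\theta_2-\theta_1}{n+1}.$$

Averaging these gives $E\left[\frac{X_{(1)}+X_{(n)}}{2}\right] = \frac{\theta_1+\theta_2}{2}$, which settles (3). Subtracting gives

$$E\left[X_{(n)} - X_{(1)}\right] = \frac{n-1}{n+1}\,(\theta_2-\theta_1),$$

so $\frac{n+1}{n-1}\left(X_{(n)}-X_{(1)}\right)$ is an unbiased estimator of $\theta_2-\theta_1$, answering (4).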
1. ## Algebraic Representation

An engineer measured the dimensions for a rectangular site by using a wooden pole of unknown length x. The length of the rectangular site is 2 pole measures increased by 3 feet, while the width is 1 pole measure decreased by 4 feet. Write an algebraic representation, in terms of x, for the perimeter of the site.

2. Hey there magentarita,

Since the site is rectangular, its perimeter is of the form:

$P = 2(\text{length} + \text{width})$

From the question...

$P = 2((2x + 3) + (x - 4))$
$P = 2(3x - 1)$
$P = 6x - 2 \text{ feet}$

Alternately, the wording of the question could be seen as a little ambiguous for the following reason...

"The length of the rectangular site is 2 pole measures increased by 3 feet..." Does this mean, instead, 2(x + 3)? If this is the case, then the perimeter is:

$P = 2((2(x + 3)) + (x - 4))$
$P = 2((2x + 6) + (x - 4))$
$P = 2(3x + 2)$
$P = 6x + 4 \text{ feet}$

In general math questions, however, I would favour the initial reading.

Trust this helps.

3. ## great

Originally Posted by MakeANote
Since the site is rectangular, its perimeter is of the form $P = 2(\text{length} + \text{width})$, so $P = 2((2x + 3) + (x - 4)) = 2(3x - 1) = 6x - 2 \text{ feet}$ under the first reading, or $6x + 4 \text{ feet}$ under the alternate reading.
Thank you for taking time out to help me.
# Discrete Total Variation of the Normal Vector Field as Shape Prior with Applications in Geometric Inverse Problems

An analogue of the total variation prior for the normal vector field along the boundary of piecewise flat shapes in 3D is introduced. A major class of examples are triangulated surfaces as they occur for instance in finite element computations. The analysis of the functional is based on a differential geometric setting in which the unit normal vector is viewed as an element of the two-dimensional sphere manifold. It is found to agree with the discrete total mean curvature known in discrete differential geometry. A split Bregman iteration is proposed for the solution of discretized shape optimization problems, in which the total variation of the normal appears as a regularizer. Unlike most other priors, such as surface area, the new functional allows for piecewise flat shapes. As two applications, a mesh denoising and a geometric inverse problem of inclusion detection type involving a partial differential equation are considered. Numerical experiments confirm that polyhedral shapes can be identified quite accurately.

## 1. Introduction

The total variation (TV) functional is popular as a regularizer in imaging and inverse problems; see for instance [RudinOsherFatemi1992, ChanGolubMulet1999, BachmayrBurger2009, Langer2017] and [Vogel2002, Chapter 8]. It is most commonly applied to functions with values in $\mathbb{R}$ or $\mathbb{R}^n$. In the companion paper [BergmannHerrmannHerzogSchmidtVidalNunez2019:1_preprint], we introduced the total variation of the normal vector field $\mathbf{n}$ along smooth surfaces $\Gamma$:

$$|\mathbf{n}|_{TV}(\Gamma) \coloneqq \int_\Gamma \Big( \big|(D_\Gamma \mathbf{n})\,\xi_1\big|^2 + \big|(D_\Gamma \mathbf{n})\,\xi_2\big|^2 \Big)^{1/2} \,\mathrm{d}s. \qquad (1)$$

In contrast to the setting of real- or vector-valued functions, the normal vector field is manifold-valued with values in the sphere $S^2$.
In (1), $D_\Gamma \mathbf{n}$ denotes the derivative (push-forward) of $\mathbf{n}$, and $\{\xi_1, \xi_2\}$ is an arbitrary orthonormal basis (w.r.t. the Euclidean inner product in the embedding $\mathbb{R}^3$) of the tangent spaces along $\Gamma$. Finally, $|\cdot|$ denotes the norm induced by a Riemannian metric on $\mathbb{S}^2$. It was shown in [BergmannHerrmannHerzogSchmidtVidalNunez2019:1_preprint] that (1) can be alternatively expressed as

$$|\mathbf{n}|_{TV}(\Gamma) = \int_\Gamma \big( k_1^2 + k_2^2 \big)^{1/2} \, \mathrm{d}s,$$

where $k_1$ and $k_2$ are the principal curvatures of the surface.

In this paper, we discuss a discrete variant of (1) tailored to piecewise flat surfaces $\Gamma_h$, where (1) does not apply. In contrast with the smooth setting, the total variation of the piecewise constant normal vector field is concentrated in jumps across edges between flat facets. We therefore propose the following discrete total variation of the normal,

$$|\mathbf{n}|_{DTV}(\Gamma_h) := \sum_E d(\mathbf{n}_E^+, \mathbf{n}_E^-)\, |E|. \qquad (2)$$

Here $E$ denotes an edge of length $|E|$ between facets, and $d(\mathbf{n}_E^+, \mathbf{n}_E^-)$ is the geodesic distance between the two neighboring normal vectors. We investigate (2) in Section 2. It turns out to coincide with the discrete total mean curvature known in discrete differential geometry. Subsequently, we discuss the utility of this functional as a prior in shape optimization problems cast in the form

$$\text{Minimize} \quad \ell(u(\Omega_h), \Omega_h) + \beta\, |\mathbf{n}|_{DTV}(\Gamma_h) \qquad (3)$$

w.r.t. the vertex positions of the discrete shape $\Omega_h$ with boundary $\Gamma_h$. Here $u(\Omega_h)$ denotes the solution of the problem-specific partial differential equation (PDE), which depends on the unknown domain $\Omega_h$. Moreover, $\ell$ represents a loss function, such as a least-squares function. In particular, (3) includes geometric inverse problems, where one seeks to recover a shape representing, e.g., the location of a source or inclusion inside a given, larger domain, or the geometry of an inclusion or a scatterer. Numerical experiments confirm that $|\mathbf{n}|_{DTV}$, as a shape prior, can help to identify polyhedral shapes.

Similarly as for the case of smooth surfaces discussed in [BergmannHerrmannHerzogSchmidtVidalNunez2019:1_preprint], solving discrete shape optimization problems (3) is challenging due to the non-trivial dependency of the normal vector field on the vertex positions of the discrete surface $\Gamma_h$, as well as the non-smoothness of $|\mathbf{n}|_{DTV}$. We therefore propose in Section 3 a version of the split Bregman method proposed in [GoldsteinOsher2009], an algorithm from the alternating direction method of multipliers (ADMM) class in which the jumps in the normal vector are treated as a separate variable. The particularity here is that the normal vector has values in $\mathbb{S}^2$ and thus the jump, termed $\mathbf{d}_E$, is represented by a logarithmic map in the appropriate tangent space. An outstanding feature of the proposed splitting is that the two subproblems, the minimization w.r.t. the vertex coordinates representing the discrete surface and w.r.t. $\mathbf{d}$, are directly amenable to numerical algorithms. Although many optimization algorithms have been recently generalized to Riemannian manifolds, see, e.g., [Bacak2014, BergmannPerschSteidl2016, BergmannHerzogTenbrinckVidal-Nunez2019_preprint], the Riemannian split Bregman method for manifolds proposed in this and the companion paper [BergmannHerrmannHerzogSchmidtVidalNunez2019:1_preprint] is new to the best of our knowledge. Its detailed investigation will be postponed to future work. For a general overview of optimization on manifolds, we refer the reader to [AbsilMahonySepulchre2008]. We anticipate that our method can be applied to other non-smooth problems involving manifold-valued total variation functionals as well.
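To make (2) concrete, here is a minimal numpy sketch that evaluates the discrete total variation of the normal on a triangle mesh, using the arccos formula for the geodesic distance on the sphere (stated as (5) below). The function name and the vertex/face array format are our own conventions, not the authors' code:

```python
import numpy as np

def discrete_tv_of_normal(verts, faces):
    """Evaluate (2): sum over interior edges of the geodesic distance
    between neighbouring facet normals, weighted by edge length.
    verts: (nv, 3) float array; faces: (nf, 3) int array of
    consistently oriented triangles. A sketch, not reference code."""
    # unit normal of each triangle
    v0, v1, v2 = (verts[faces[:, k]] for k in range(3))
    n = np.cross(v1 - v0, v2 - v0)
    n /= np.linalg.norm(n, axis=1, keepdims=True)

    # map each undirected edge to the facets sharing it
    edge_faces = {}
    for f, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault(frozenset(e), []).append(f)

    tv = 0.0
    for e, fs in edge_faces.items():
        if len(fs) != 2:          # boundary or non-manifold edge: skip
            continue
        i, j = tuple(e)
        length = np.linalg.norm(verts[i] - verts[j])
        cos = np.clip(np.dot(n[fs[0]], n[fs[1]]), -1.0, 1.0)
        tv += np.arccos(cos) * length   # geodesic distance on S^2, cf. (5)
    return tv
```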
Examples falling into this class have been introduced for instance in [LellmannStrekalovskiyKoetterCremers2013, BergmannTenbrinck2018]. An alternative splitting scheme, the so-called half-quadratic minimization, was introduced by [BergmannChanHielscherPerschSteidl2016].

The structure of the paper is as follows. In the following section we provide an analysis of the discrete total variation of the normal (2) and its properties. We also compare it to geometric functionals appearing elsewhere in the literature. In particular, we provide a numerical comparison between (2) and surface area regularization for a mesh denoising problem. Section 3 is devoted to the formulation of an ADMM method which generalizes the split Bregman algorithm to the manifold-valued problem (3). In Section 4, we describe an inclusion detection problem of type (3), motivated by geophysical applications. We also provide implementation details in the finite element framework FEniCS. Corresponding numerical results are presented in Section 5.

## 2. Discrete Total Variation of the Normal

From this section onwards we assume that $\Gamma_h$ is a piecewise flat, compact, orientable surface without boundary, which consists of a finite number of flat facets with straight-sided edges between facets. Consequently, $\Gamma_h$ can be thought of as a mesh consisting of polyhedral cells with a consistently oriented outer unit normal. We also assume this mesh to be geometrically conforming, i.e., there are no hanging nodes. A frequent situation is that $\Gamma_h$ is the boundary mesh of a geometrically conforming volume mesh with polyhedral cells, representing a volume domain $\Omega_h$. In our numerical example in Section 5, we will utilize a volume mesh consisting of tetrahedra, whose surface mesh consists of triangles; see Figure 1.

Since the surface $\Gamma_h$ is non-smooth, the definition (1) of the total variation of the normal proposed in the companion paper [BergmannHerrmannHerzogSchmidtVidalNunez2019:1_preprint] for smooth surfaces does not apply. Since the normal vector field is piecewise constant here, its variation is concentrated in spontaneous changes across edges between facets, rather than gradual changes expressed by the derivative $D_\Gamma \mathbf{n}$. We therefore propose to replace (1) by

$$|\mathbf{n}|_{DTV}(\Gamma_h) := \sum_E d(\mathbf{n}_E^+, \mathbf{n}_E^-)\, |E|, \qquad (4)$$

where $E$ denotes an edge of Euclidean length $|E|$ between facets. Each edge has an arbitrary but fixed orientation, so that its two neighboring facets can be addressed as $F_E^+$ and $F_E^-$. The normal vectors, constant on each facet, are denoted by $\mathbf{n}_E^+$ and $\mathbf{n}_E^-$. Moreover,

$$d(\mathbf{n}_E^+, \mathbf{n}_E^-) = \arccos\big( (\mathbf{n}_E^+)^\top \mathbf{n}_E^- \big) = \sphericalangle\big( \mathbf{n}_E^+, \mathbf{n}_E^- \big) \qquad (5)$$

denotes the geodesic distance on $\mathbb{S}^2$, i.e., the angle between the two unit vectors $\mathbf{n}_E^+$ and $\mathbf{n}_E^-$; see also Figure 6.

To motivate the definition (4), consider a family $\Gamma_\varepsilon$ of smooth approximations of the piecewise flat surface $\Gamma_h$. The approximations are supposed to be smooth in such a way that the flat facets are preserved up to a collar of order $\varepsilon$, and smoothing occurs in bands of width of order $\varepsilon$ around the edges. Such an approximation can be constructed, for instance, by a level-set representation of $\Gamma_h$ by means of a signed distance function. A family of smooth approximations can then be obtained as zero level sets of mollifications of that function for sufficiently small $\varepsilon$, using the standard Friedrichs mollifier in 3D and convolution. A construction of this type is used, for instance, in [GomezHernandezLopez2005, BonitoDemlowNochetto2019_preprint].
An alternative to this procedure is the so-called Steiner smoothing, where $\Gamma_\varepsilon$ is taken to be the boundary of the Minkowski sum of $\Omega_h$ with the ball $B_\varepsilon(0)$; see for instance [Sullivan2008, Section 4.4].

Let $\Gamma_\varepsilon$ denote a family of smooth approximations of $\Gamma_h$ obtained by mollification, with normal vector fields $\mathbf{n}_\varepsilon$. Then

$$|\mathbf{n}_\varepsilon|_{TV}(\Gamma_\varepsilon) \to |\mathbf{n}|_{DTV}(\Gamma_h) \quad \text{as } \varepsilon \searrow 0. \qquad (6)$$

###### Proof.

Let us denote the vertices in $\Gamma_h$ by $V$ and its edges by $E$. Since mollification is local, the normal vector is constant in the interior of each facet minus its collar, which is of order $\varepsilon$. Consequently, changes in the normal vector are confined to a neighborhood of the skeleton. We decompose this area into the disjoint union of regions $I_{E,\varepsilon}$ and $I_{V,\varepsilon}$. Here $I_{E,\varepsilon}$ are the transition regions around edge $E$ where the normal vector is modified due to mollification, and $I_{V,\varepsilon}$ are the regions around vertex $V$. On $I_{E,\varepsilon}$, we can arrange the basis $\{\xi_1, \xi_2\}$ to be aligned and orthogonal to $E$ so that

$$\int_{I_{E,\varepsilon}} \Big( |(D_{\Gamma_\varepsilon} \mathbf{n}_\varepsilon)\,\xi_1|^2 + |(D_{\Gamma_\varepsilon} \mathbf{n}_\varepsilon)\,\xi_2|^2 \Big)^{1/2} \mathrm{d}s = \int_{I_{E,\varepsilon}} |(D_{\Gamma_\varepsilon} \mathbf{n}_\varepsilon)\,\xi_1| \, \mathrm{d}s$$

holds, which can be easily evaluated as an iterated integral. In each stripe in $I_{E,\varepsilon}$ perpendicular to $E$, the normal changes monotonically along the geodesic path between $\mathbf{n}_E^+$ and $\mathbf{n}_E^-$, so that the integral along this stripe yields the constant $d(\mathbf{n}_E^+, \mathbf{n}_E^-)$. Since the length of $I_{E,\varepsilon}$ parallel to $E$ is $|E|$ up to terms of order $\varepsilon$, we obtain

$$\int_{I_{E,\varepsilon}} \Big( |(D_{\Gamma_\varepsilon} \mathbf{n}_\varepsilon)\,\xi_1|^2 + |(D_{\Gamma_\varepsilon} \mathbf{n}_\varepsilon)\,\xi_2|^2 \Big)^{1/2} \mathrm{d}s = d(\mathbf{n}_E^+, \mathbf{n}_E^-)\, \big[ |E| + \mathcal{O}(\varepsilon) \big].$$

The contributions from integration over $I_{V,\varepsilon}$ are of order $\varepsilon$ since $D_{\Gamma_\varepsilon} \mathbf{n}_\varepsilon$ is of order $\varepsilon^{-1}$ and the area of $I_{V,\varepsilon}$ is of order $\varepsilon^2$. This yields the claim. ∎

### 2.1. Comparison with Prior Work for Discrete Surfaces

The functional (4) has been used previously in the literature. We mention that it fits into the framework of total variation of manifold-valued functions defined in [GiaquintaMucci2007, LellmannStrekalovskiyKoetterCremers2013]. Specifically in the context of discrete surfaces, we mention [Sullivan2006], where the term $\theta_E\, |E|$ appears as the total mean curvature of the edge $E$. Here $\theta_E$ is the exterior dihedral angle, which agrees with $d(\mathbf{n}_E^+, \mathbf{n}_E^-)$; see (5). Consequently, (4) can be written as $\sum_E \theta_E\, |E|$. Moreover, (4) appears as a regularizer in [WuZhengCaiFu2015] within a variational model for mesh denoising, but the geodesic distances are approximated for the purpose of numerical solution. We also mention the recent [PellisKilianDellingerWallnerPottmann2019], where (4) appears as a measure of visual smoothness of discrete surfaces. Particular emphasis is given to the impact of the mesh connectivity. In our study, the mesh connectivity will remain fixed, and only triangular surface meshes are considered in the numerical experiments. In addition, we are aware of [ZhangWuZhangDeng2015, ZhongXieWangLiuLiu2018], where

$$\sum_E |\mathbf{n}_E^+ - \mathbf{n}_E^-|_2 \, |E| \qquad (7)$$

was proposed in the context of variational mesh denoising. Notice that in contrast to (4), (7) utilizes the Euclidean as opposed to the geodesic distance between neighboring normals and is therefore an underestimator for (4). Once again, we are not aware of any work in which (4) or its continuous counterpart (1) were used as a prior in shape optimization or geometric inverse problems involving partial differential equations.

### 2.2. Properties of the Discrete Total Variation of the Normal

In this section we investigate some properties of the discrete total variation of the normal. As can be seen directly from (4), a scaling in which $\Gamma_h$ is replaced by $c\,\Gamma_h$ for some $c > 0$ yields

$$|\mathbf{n}|_{DTV}(c\,\Gamma_h) = c\, |\mathbf{n}|_{DTV}(\Gamma_h).$$

This is the same behavior observed, e.g., for the total variation of scalar functions defined on two-dimensional domains.
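Regarding the comparison of (7) and (4): for unit vectors at angle θ, the Euclidean (chord) distance is 2 sin(θ/2), which never exceeds the geodesic distance θ. A two-line numeric illustration of this underestimation (not from the paper):

```python
import numpy as np

# chord vs. arc length between unit vectors at angle theta
for theta in (0.1, 0.5, 1.0, np.pi / 2, np.pi):
    chord = 2 * np.sin(theta / 2)
    print(f"theta={theta:.3f}  geodesic={theta:.3f}  euclidean={chord:.3f}")
```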
Consequently, when studying optimization problems involving (4), we need to take precautions to avoid that $\Gamma_h$ degenerates to a point. This can be achieved either by imposing a constraint, e.g., on the surface area, or by considering tracking problems in which an additional loss term appears.

#### 2.2.1. Simple Minimizers of the Discrete Total Variation of the Normal

In this section, we investigate minimizers of (4) subject to an area constraint. More precisely, we consider the following problem. Given a triangulated surface mesh consisting of vertices $V$, edges $E$ and facets $F$, find the mesh with the same connectivity which

$$\text{minimizes} \quad \sum_E d(\mathbf{n}_E^+, \mathbf{n}_E^-)\, |E| \quad \text{subject to} \quad \sum_F |F| = A_0. \qquad (8)$$

To the best of our knowledge, a precise characterization of the minimizers of (8) is an open problem, and the solution depends on the connectivity; compare the observations in [PellisKilianDellingerWallnerPottmann2019, Section 4]. That is, different triangulations of the same (initial) mesh, e.g., a cube, may yield different minimizers. We also refer the reader to [AlexaWardetzky2011] for a related observation in discrete mean curvature flow. We do have, however, the following partial result. For the proof, we exploit that (4) coincides with the discrete total mean curvature and utilize results from discrete differential geometry. The reader may wish to consult [MeyerDesbrunSchroederBarr2003, Polthier2005, Wardetzky2006, BobenkoSpringborn2007, CraneDeGoesDesbrungSchroeder2013].

The icosahedron and the cube with crossed diagonals are stationary for (8) within the class of triangulated surfaces of constant area and identical connectivity.

###### Proof.

Let us consider the Lagrangian associated with (8),

$$\mathcal{L}(\mathbf{x}_1, \dots, \mathbf{x}_N, \mu) := \sum_E d(\mathbf{n}_E^+, \mathbf{n}_E^-)\, |E| + \mu \Big( \sum_F |F| - A_0 \Big). \qquad (9)$$

Here $\mathbf{x}_i$ denote the coordinates of vertex $i$, and $N$ is the total number of vertices of the triangular surface mesh. Notice that the normal vectors, edge lengths and facet areas depend on these coordinates. The gradient of (9) w.r.t. $\mathbf{x}_i$ can be represented as

$$\nabla_{\mathbf{x}_i} \mathcal{L}(\mathbf{x}_1, \dots, \mathbf{x}_N, \mu) = \sum_{j \in \mathcal{N}(i)} \Big[ \frac{d(\mathbf{n}_{E_{ij}}^+, \mathbf{n}_{E_{ij}}^-)}{|E_{ij}|} + \frac{\mu}{2} \big( \cot \alpha_{ij} + \cot \beta_{ij} \big) \Big] (\mathbf{x}_i - \mathbf{x}_j), \qquad (10)$$

see for instance [CraneDeGoesDesbrungSchroeder2013]. Here $\mathcal{N}(i)$ denotes the index set of vertices adjacent to vertex $i$. For any $j \in \mathcal{N}(i)$, $E_{ij}$ denotes the edge between vertices $i$ and $j$. Moreover, $\alpha_{ij}$ and $\beta_{ij}$ are the angles as illustrated in Figure 3.

For the icosahedron with surface area $A_0$, all edges have the same length. Moreover, since all facets are equilateral triangles, all cotangent weights $\cot \alpha_{ij} + \cot \beta_{ij}$ agree. Finally, the exterior dihedral angles are all equal. Consequently, the Lagrangian is stationary for an appropriate choice of the Lagrange multiplier $\mu$.

We remark that (4) and thus (10) is not differentiable when one or more of the dihedral angles $d(\mathbf{n}_{E_{ij}}^+, \mathbf{n}_{E_{ij}}^-)$ are zero. This is the case for the cube with crossed diagonals, see Figure 3. However, the right hand side in (10) still provides a generalized derivative of $\mathcal{L}$ in the sense of Clarke. In contrast to the icosahedron, the cube has two types of vertices. When $\mathbf{x}_i$ is the center vertex of one of the lateral surfaces, then $d(\mathbf{n}_{E_{ij}}^+, \mathbf{n}_{E_{ij}}^-) = 0$ for all $j \in \mathcal{N}(i)$. Moreover, since the area gradient at such a vertex vanishes by symmetry, $\mathbf{0}$ is an element of the generalized (partial) differential of $\mathcal{L}$ at $\mathbf{x}_i$ w.r.t. $\mathbf{x}_i$, independently of the value of the Lagrange multiplier $\mu$. Now when $\mathbf{x}_i$ is a vertex of corner type, we need to distinguish two types of edges. Along the three edges leading to neighbors of the same type (the edges of the cube), the exterior dihedral angle is $\pi/2$. Along the three remaining edges leading to surface centers (the crossed diagonals), the dihedral angle is zero.
Thus for vertices of corner type, it is straightforward to verify that $\mathbf{0}$ belongs to the generalized (partial) differential of $\mathcal{L}$ at $\mathbf{x}_i$ w.r.t. $\mathbf{x}_i$ if

$$\Big( \frac{\pi \sqrt{2}}{2} \Big( \frac{A_0}{6} \Big)^{1/2} + 2\mu \Big) \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \mathbf{0}$$

holds, which is true for the obvious choice of $\mu$. ∎

Numerical experiments indicate that the icosahedron as well as the cube are not only stationary points, but also local minimizers of (8). We can thus conclude that the discrete objective (4) exhibits different minimizers than its continuous counterpart (1) for smooth surfaces. In particular, (4) admits and promotes piecewise flat minimizers such as the cube. This is in accordance with observations made in [PellisKilianDellingerWallnerPottmann2019, Section 3.2] that optimal meshes typically exhibit a number of zero dihedral angles. This property sets our functional apart from other functionals previously used as priors in shape optimization and geometric inverse problems. For instance, the popular surface area prior is well known to produce smooth shapes; see the numerical experiments in Section 2.2.3 below.

#### 2.2.2. Comparison of Discrete and Continuous Total Variation of the Normal

In this section we compare the values of (1) and (4) for a sphere $\Gamma$ and a sequence of discretized spheres $\Gamma_h$. For comparison, we choose $\Gamma$ to have the same surface area $A_0$ as the cube in the previous section, i.e., we use $r = (A_0/4\pi)^{1/2}$ as the radius. Since the principal curvatures of a sphere $\Gamma$ of radius $r$ are $k_1 = k_2 = 1/r$, (1) becomes

$$|\mathbf{n}|_{TV}(\Gamma) = \int_\Gamma \big( k_1^2 + k_2^2 \big)^{1/2} \mathrm{d}s = 4\pi r^2 \, \frac{\sqrt{2}}{r} = 4\sqrt{2}\,\pi r.$$

To compare this to the discrete total variation of the normal, we created a sequence of triangular meshes $\Gamma_h$ of this sphere with various resolutions and evaluated (4) numerically. The results are shown in Table 2. They reveal a factor of approximately $\sqrt{2}$ between the discrete and continuous functionals for the sphere.

To explain this discrepancy, recall that the principal curvatures of the sphere are $k_1 = k_2 = 1/r$. This implies that the derivative map $D_\Gamma \mathbf{n}$ has rank two everywhere. Discretized surfaces behave fundamentally differently in the following respect. Their curvature is concentrated on the edges, and one of the principal curvatures (the one in the direction along the edge) is always zero. So even for successively refined meshes, e.g., of the sphere, one is still measuring only one principal curvature at a time. We are thus led to the conjecture that the limit of (4) for successively refined meshes is the anisotropic, yet still intrinsic measure $\int_\Gamma (|k_1| + |k_2|)\,\mathrm{d}s$, whose value for the sphere in Table 1 is $8\pi r$. The factor $\sqrt{2}$ can thus be attributed to the ratio between the $\ell^1$- and $\ell^2$-norms of the vector $(k_1, k_2)$. This observation is in accordance with the findings in [PellisKilianDellingerWallnerPottmann2019, Section 1.2]. One could consider an isotropic version of (4) in which the dihedral angles across all edges meeting at any given vertex are measured jointly. These alternatives will be considered elsewhere.

#### 2.2.3. Discrete Total Variation Compared to Surface Area Regularization

In this section we consider a specific instance of the general problem (3) and compare our discrete TV functional with the surface area regularizer. We begin with a triangular surface mesh of a box and add normally distributed noise with zero mean to the coordinate vector of each vertex, in the average normal direction of the adjacent triangles, with standard deviation equal to a fixed multiple of the average edge length.
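The noise model just described can be sketched in a few lines of numpy. This is our own reading of the setup (perturbation along vertex-averaged facet normals, scaled by the mean edge length), not the authors' code:

```python
import numpy as np
rng = np.random.default_rng(0)

def add_normal_noise(verts, faces, sigma_rel, rng=rng):
    """Perturb each vertex along the averaged unit normal of its
    adjacent triangles; the standard deviation is sigma_rel times
    the mean edge length of the mesh."""
    v0, v1, v2 = (verts[faces[:, k]] for k in range(3))
    fn = np.cross(v1 - v0, v2 - v0)
    fn /= np.linalg.norm(fn, axis=1, keepdims=True)

    # accumulate facet normals onto their three vertices, then normalize
    vn = np.zeros_like(verts)
    for k in range(3):
        np.add.at(vn, faces[:, k], fn)
    vn /= np.linalg.norm(vn, axis=1, keepdims=True)

    # mean edge length over the three edge sets of the triangles
    e = np.concatenate([v1 - v0, v2 - v1, v0 - v2])
    h = np.linalg.norm(e, axis=1).mean()

    return verts + sigma_rel * h * rng.standard_normal((len(verts), 1)) * vn
```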
We denote the noisy vertex positions as $\tilde{\mathbf{x}}_V$, utilize a simple least-squares functional as our loss function, and consider the following mesh denoising problem,

$$\text{Minimize} \quad \frac{1}{2} \sum_V |\mathbf{x}_V - \tilde{\mathbf{x}}_V|_2^2 + \beta\, |\mathbf{n}|_{DTV}(\Gamma_h) \qquad (11)$$

w.r.t. the vertex positions $\mathbf{x}_V$ of the discrete surface $\Gamma_h$. Here the sum runs over the vertices $V$ of $\Gamma_h$. For comparison, we also consider a variant

$$\text{Minimize} \quad \frac{1}{2} \sum_V |\mathbf{x}_V - \tilde{\mathbf{x}}_V|_2^2 + \gamma \sum_F |F| \qquad (12)$$

w.r.t. the vertex positions $\mathbf{x}_V$ of the discrete surface $\Gamma_h$, where we utilize the total surface area as prior. A numerical approach to solve the non-smooth problem (11) will be discussed in Section 3. By contrast, problem (12) is a fairly standard smooth discrete shape optimization problem, and we solve it using a simple shape gradient descent scheme. The details of how to obtain the shape derivative and shape gradient are the same as described in Section 4.2 for problem (11).

Figure 4 shows the numerical solutions of (11) and (12) for various choices of the regularization parameters $\beta$ and $\gamma$, respectively. The initial guess for both problems is a sphere with the same connectivity as $\Gamma_h$. We can clearly see that our functional (4) achieves a very good reconstruction of the original shape for a proper choice of $\beta$. By contrast, the surface area regularization requires a relatively large choice of $\gamma$ in order to reasonably reduce the noise, which in turn leads to a significant shrinkage of the surface and a rounding of the sharp features.

## 3. Discrete Split Bregman Iteration

In this section, we develop an optimization scheme to solve the non-smooth problem (3). To this end, we adapt the well-known split Bregman method to our setting. This leads to a discrete realization of the approach presented in [BergmannHerrmannHerzogSchmidtVidalNunez2019:1_preprint, Section 4]. Recall that combining (2) with (3) results in the problem

$$\text{Minimize} \quad \ell(u(\Omega_h), \Omega_h) + \beta \sum_E d(\mathbf{n}_E^+, \mathbf{n}_E^-)\, |E| \qquad (13)$$

w.r.t. the vertex positions of $\Omega_h$, where $E$ runs over the edges of the unknown part of the boundary $\Gamma_h$. We will consider a concrete example in Section 4.1. Notice that the second term in the objective in (13) is non-differentiable whenever $\mathbf{n}_E^+ = \mathbf{n}_E^-$ occurs on at least one edge. Following the classical split Bregman approach, we introduce a splitting in which the variation of the normal vector becomes an independent variable. Since this variation is confined to edges, where the normal vector jumps (without loss of generality) from $\mathbf{n}_E^+$ to $\mathbf{n}_E^-$, this new variable becomes

$$\mathbf{d}_E = \log_{\mathbf{n}_E^+} \mathbf{n}_E^- \in T_{\mathbf{n}_E^+} \mathbb{S}^2. \qquad (14)$$

Here $\log_p q$ denotes the logarithmic map, which specifies the unique tangent vector at the point $p$ such that the geodesic departing from $p$ in that direction will reach $q$ at unit time. The logarithmic map is well-defined whenever $q \neq -p$. Moreover, $|\log_{\mathbf{n}_E^+} \mathbf{n}_E^-| = d(\mathbf{n}_E^+, \mathbf{n}_E^-)$ holds; see (27) for more details. Together with the set of Lagrange multipliers $\mathbf{b}_E$, we define the augmented Lagrangian pertaining to (13) and (14) as

$$\mathcal{L}(\Omega_h, \mathbf{d}, \mathbf{b}) := \ell(u(\Omega_h), \Omega_h) + \beta \sum_E |\mathbf{d}_E|\, |E| + \frac{\lambda}{2} \sum_E |E| \, \big| \mathbf{d}_E - \log_{\mathbf{n}_E^+} \mathbf{n}_E^- - \mathbf{b}_E \big|^2. \qquad (15)$$

The vectors $\mathbf{d}$ and $\mathbf{b}$ are simply the collections of their entries $\mathbf{d}_E$ and $\mathbf{b}_E$, three components per edge $E$. Since the tangent space changes between shape updates, the respective quantities have to be parallel transported, which is a major difference from ADMM methods in Euclidean or Hilbert spaces. We state the split Bregman iteration below.
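Two manifold-specific ingredients recur in the iteration stated next: the logarithmic map on $\mathbb{S}^2$ and the vectorial shrinkage used in the $\mathbf{d}$-update. The following Python sketch implements both; the closed-form shrinkage with threshold $\beta/\lambda$ is our reading of the standard split Bregman update, since the paper's own formula (16) is not reproduced in this excerpt:

```python
import numpy as np

def sphere_log(p, q):
    """Logarithmic map on S^2: tangent vector at p pointing to q,
    with length equal to the geodesic distance, cf. (14).
    Undefined for antipodal points q = -p."""
    cos = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(cos)
    if theta < 1e-12:
        return np.zeros(3)
    v = q - cos * p                  # component of q orthogonal to p
    return theta * v / np.linalg.norm(v)

def shrink(v, kappa):
    """Vectorial soft-thresholding (assumed form of the d-update,
    with threshold kappa = beta / lambda)."""
    norm = np.linalg.norm(v)
    if norm <= kappa:
        return np.zeros_like(v)
    return (1 - kappa / norm) * v

# d-update for one edge, given normals n_plus, n_minus and multiplier b_E:
# d_E = shrink(sphere_log(n_plus, n_minus) + b_E, beta / lam)
```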
**Split Bregman method for (13)**

Require: initial domain $\Omega_h^{(0)}$
Ensure: approximate solution of (13)

1. Set $k \leftarrow 0$
2. Set initial values $\mathbf{d}^{(0)}$ and $\mathbf{b}^{(0)}$
3. while not converged do
4. Perform several gradient steps for $\mathcal{L}(\cdot, \mathbf{d}^{(k)}, \mathbf{b}^{(k)})$ at $\Omega_h^{(k)}$ to obtain $\Omega_h^{(k+1)}$
5. Parallel transport the multiplier estimate $\mathbf{b}_E^{(k)}$ on each edge from $T_{\mathbf{n}_E^{+,(k)}} \mathbb{S}^2$ to $T_{\mathbf{n}_E^{+,(k+1)}} \mathbb{S}^2$ along the geodesic from $\mathbf{n}_E^{+,(k)}$ to $\mathbf{n}_E^{+,(k+1)}$
6. Set $\mathbf{d}^{(k+1)}$, see (16)
7. Update the Lagrange multipliers, i.e., set $\mathbf{b}_E^{(k+1)} = \mathbf{b}_E^{(k)} + \log_{\mathbf{n}_E^{+,(k+1)}} \mathbf{n}_E^{-,(k+1)} - \mathbf{d}_E^{(k+1)}$ for all edges $E$
8. Set $k \leftarrow k + 1$
9. end while

We now address the individual steps of the algorithm in more detail, i.e., the successive minimization with respect to the unknown vertices of $\Omega_h$ and with respect to $\mathbf{d}$, followed by an explicit update for the multiplier $\mathbf{b}$.

Step 4 is the minimization of (15) with respect to the unknown vertex positions of $\Omega_h$. To this end, we employ a gradient descent scheme, where we compute the sensitivities with respect to those node positions discretely; see Section 4.2 for more details. Following [GoldsteinOsher2009], an approximate minimization suffices, and thus only a certain number of steepest descent steps are performed. After $\Omega_h^{(k)}$ has been updated to $\Omega_h^{(k+1)}$, the quantity $\mathbf{b}_E^{(k)}$ has to be parallel transported into the new tangent space $T_{\mathbf{n}_E^{+,(k+1)}} \mathbb{S}^2$, see step 5; this is detailed in (28).

Step 6 is the optimization of (15) with respect to $\mathbf{d}$, which is a non-smooth problem. It can be solved explicitly by one vectorial shrinkage operation per edge $E$. Given the updated domain and associated normal field, as well as the multiplier parallel transported into the new tangent space, the minimizer of (15) is given by (16) for each edge $E$. Notice that (16) is independent of the previous value $\mathbf{d}_E^{(k)}$, and thus a parallel transport of $\mathbf{d}_E^{(k)}$ into the updated tangent space is not necessary.

Step 7 is the multiplier update for $\mathbf{b}$, which is done explicitly via

$$\mathbf{b}_E^{(k+1)} = \mathbf{b}_E^{(k)} + \log_{\mathbf{n}_E^{+,(k+1)}} \mathbf{n}_E^{-,(k+1)} - \mathbf{d}_E^{(k+1)}$$

for each edge $E$.

## 4. An EIT Model Problem and its Implementation in FEniCS

In this section we address some details concerning the implementation of the algorithm of Section 3 in the finite element framework FEniCS. For concreteness, we elaborate on a particular reduced loss function where the state arises from a PDE modeling a geological electrical impedance tomography (EIT) problem with Robin-type far field boundary conditions. We introduce the problem under consideration first and discuss implementation details and derivative computations later on.

### 4.1. EIT Model Problem

Electrical impedance tomography (EIT) problems are a prototypical class of inverse problems. Common to these problems is the task of reconstructing the internal conductivity inside a volume from boundary measurements of electric potentials or currents. These problems are both nonlinear and severely ill-posed and require appropriate regularization; see for instance [SantosaVogelius1990, CheneyIsaacsonNewell1999, ChungChanTai2005]. Traditionally, EIT problems are modeled with Neumann (current) boundary conditions, and the internal conductivity is an unknown function across the entire domain. In order to focus on the demonstration of the utility of (2) as a regularizer in geometric inverse problems, we consider a simplified situation in which we seek to reconstruct a perfect conductor inside a domain of otherwise homogeneous electrical properties. Consequently, the unknowns are the vertex positions of the interface of the inclusion. As a perfect conductor shields its interior from the electric field, there is no necessity to mesh and simulate the interior of the inclusion.
However, we mention that our methodology can be extended also to interface problems, non-perfect conductors and other geometric inverse problems. The perfect conductor is modeled via a homogeneous Neumann condition on the unknown interior boundary $\Gamma_1$ of the domain $\Omega_h$. To overcome the non-uniqueness of the electric potential, we employ Robin boundary conditions on the exterior boundary $\Gamma_2$. The use of homogeneous Robin boundary conditions to model the far field is well-established for geological EIT problems; see, e.g., [Helfrich-Schkarbanenko2011]. We use them here also for current injection. The geometry of our model is shown in Figure 5, where $\Gamma_1$ is the unknown boundary of the perfect conductor and $\Gamma_2$ is a fixed boundary where currents are injected and measurements are taken.

We assume that $r$ experiments are conducted, each resulting in a measured electric potential $z_i$, represented in the finite element space consisting of piecewise linear, globally continuous functions on the outer boundary $\Gamma_2$. Experiment #$i$ is conducted by applying the right hand side source $f_i$, which is the characteristic function of one of the colored regions shown in Figure 5 and belongs to the space of piecewise constant functions. We then seek to reconstruct the interface $\Gamma_1$ of the inclusion by solving the following regularized least-squares problem of type (3),

$$\text{Minimize} \quad \frac{1}{2} \sum_{i=1}^r \int_{\Gamma_2} |u_i - z_i|^2 \, \mathrm{d}s + \beta\, |\mathbf{n}|_{DTV}(\Gamma_1) \qquad (17)$$

$$\text{s.t.} \quad \begin{cases} -\Delta u_i = 0 & \text{in } \Omega_h, \\ \partial u_i / \partial \mathbf{n} = 0 & \text{on } \Gamma_1, \\ \partial u_i / \partial \mathbf{n} + \alpha u_i = f_i & \text{on } \Gamma_2 \end{cases}$$

with respect to the vertex positions of $\Gamma_1$. Here $u_i$ is the computed electric potential for source $f_i$. Hence, the problem features $r$ PDE constraints with identical operator but different right hand sides. As detailed in Section 4.2, we compute the shape derivative of the least-squares objective and the PDE constraint separately from the shape derivative of the regularization term. To evaluate the former, we utilize a classical adjoint approach. To this end, we consider the Lagrangian

$$F(u_1, \dots, u_r, p_1, \dots, p_r, \Omega_h) := \sum_{i=1}^r \Big[ \int_{\Gamma_2} \frac{1}{2} |u_i - z_i|^2 \, \mathrm{d}s + \int_{\Omega_h} \nabla p_i \cdot \nabla u_i \, \mathrm{d}\mathbf{x} + \int_{\Gamma_2} p_i (\alpha u_i - f_i) \, \mathrm{d}s \Big] \qquad (18)$$

with adjoint states $p_i$, $i = 1, \dots, r$. Differentiation w.r.t. $u_i$ leads to the following adjoint problem for $p_i$:

$$\begin{cases} -\Delta p_i = 0 & \text{in } \Omega_h, \\ \partial p_i / \partial \mathbf{n} = 0 & \text{on } \Gamma_1, \\ \partial p_i / \partial \mathbf{n} + \alpha p_i = -(u_i - z_i) & \text{on } \Gamma_2. \end{cases} \qquad (19)$$

The above adjoint PDE was implemented by hand. Since all forward and adjoint problems are governed by the same differential operator, we assemble the associated stiffness matrix once and solve the state and adjoint equations via an ILU-preconditioned conjugate gradient method. Provided that $u_i$ and $p_i$ solve the respective state and adjoint equations, the directional derivative of $\ell$ coincides with the partial directional derivative of $F$, both with respect to the vertex positions. In practice, we evaluate the latter using the coordinate derivative functionality of FEniCS as described in the following subsection.

### 4.2. Discrete Shape Derivative

We now focus on computing the sensitivity of finite element functionals when mesh vertices of $\Omega_h$ are moved. As discussed in [HamMitchellPaganiniWechsung2018_preprint], a convenient way to compute this within the finite element world is by tapping into the transformation of the reference element to the physical one. We obtain these sensitivities using the coordinate derivative functionality, first introduced in FEniCS release 2018.2.dev0.
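As an aside to the solver remark above (assemble the stiffness matrix once, then reuse an ILU-preconditioned CG for all right hand sides), here is a minimal scipy sketch of that pattern. The matrix A below is a stand-in for illustration, not the actual FEM matrix from (17):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in symmetric positive definite "stiffness" matrix.
N = 1000
A = sp.diags([-1, 2.2, -1], [-1, 0, 1], shape=(N, N), format='csc')

# Factor the ILU preconditioner once ...
ilu = spla.spilu(A)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

# ... then reuse it for every right-hand side (states and adjoints).
rhs = [np.random.rand(N) for _ in range(5)]      # r sources f_i
solutions = [spla.cg(A, b, M=M, atol=1e-10)[0] for b in rhs]
```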
Our split Bregman scheme requires the shape derivative of (15), which is given by

$$\mathrm{d}\mathcal{L}(\Omega_h, \mathbf{d}, \mathbf{b})[P_{\Omega_h}(\mathbf{V}_{\Gamma_1})] = \mathrm{d}\ell(u(\Omega_h), \Omega_h)[P_{\Omega_h}(\mathbf{V}_{\Gamma_1})] + \mathrm{d}m(\Gamma_1)[\mathbf{V}_{\Gamma_1}], \qquad (20)$$

where

$$m(\Gamma_1) := \beta \sum_E |\mathbf{d}_E|\, |E| + \frac{\lambda}{2} \sum_E |E| \, \big| \mathbf{d}_E - \log_{\mathbf{n}_E^+} \mathbf{n}_E^- - \mathbf{b}_E \big|^2 \qquad (21)$$

originates from the splitting approach (15). Because our design variable is $\Gamma_1$ only, we introduce the extension $P_{\Omega_h}(\mathbf{V}_{\Gamma_1})$ of $\mathbf{V}_{\Gamma_1}$ to the volume by padding with zeros. Furthermore, a reduction to boundary-only sensitivities can also be motivated from considering shape derivatives in the continuous setting, see [BergmannHerrmannHerzogSchmidtVidalNunez2019:1_preprint, Section 3]. The term $\mathrm{d}\ell$ is computed via the adjoint approach as explained above,

$$\mathrm{d}\ell(u(\Omega_h), \Omega_h)[P_{\Omega_h}(\mathbf{V}_{\Gamma_1})] = \partial_{\Omega_h} F(u_1, \dots, u_r, p_1, \dots, p_r, \Omega_h)[P_{\Omega_h}(\mathbf{V}_{\Gamma_1})].$$

In order to employ this AD functionality, (21) needs to be given as a UFL form, a domain specific language based on Python which forms the native language of the FEniCS framework, see [AlnaesLoggOlgaardRognesWells2014]. Such a UFL representation is easy to achieve if all mathematical expressions are finite element functions. Notice that $\mathbf{d}_E$ and $\mathbf{b}_E$ in (21) are constant functions on the edges of the boundary mesh representing $\Gamma_1$. We can thus represent them in the so-called HDivTrace space of lowest order.

From the directional derivatives (20), we pass to a shape gradient on the surface w.r.t. a scaled scalar product by solving a variational problem. This problem involves the weak form of a Laplace–Beltrami operator with potential term, and it finds $\mathbf{W}_{\Gamma_1}$ such that

$$\int_{\Gamma_1} 10^{-4} \, (\nabla \mathbf{W}_{\Gamma_1}, \nabla \mathbf{V}_{\Gamma_1})_2 + (\mathbf{W}_{\Gamma_1}, \mathbf{V}_{\Gamma_1})_2 \, \mathrm{d}s = \mathrm{d}\ell(u(\Omega_h), \Omega_h)[P_{\Omega_h}(\mathbf{V}_{\Gamma_1})] + \mathrm{d}m(\Gamma_1)[\mathbf{V}_{\Gamma_1}] \qquad (22)$$

holds for all test functions $\mathbf{V}_{\Gamma_1}$. The previous procedure provides us with a shape gradient on the surface $\Gamma_1$ alone. In order to propagate this information into the volume $\Omega_h$, we solve the following mesh deformation equation: find $\mathbf{W}_{\Omega_h}$ such that

$$\int_{\Omega_h} (\nabla \mathbf{W}_{\Omega_h}, \nabla \mathbf{V}_{\Omega_h})_2 + (\mathbf{W}_{\Omega_h}, \mathbf{V}_{\Omega_h})_2 \, \mathrm{d}\mathbf{x} = 0 \qquad (23)$$

for all test functions $\mathbf{V}_{\Omega_h}$ with zero Dirichlet boundary conditions, where $\mathbf{W}_{\Omega_h}$ is subject to the Dirichlet boundary conditions $\mathbf{W}_{\Omega_h} = \mathbf{W}_{\Gamma_1}$ on $\Gamma_1$ and $\mathbf{W}_{\Omega_h} = \mathbf{0}$ on $\Gamma_2$. Subsequently, the vertices of the mesh are moved in the direction of $\mathbf{W}_{\Omega_h}$.

### 4.3. Intrinsic Formulation Using Co-Normal Vectors

We recall that our functional of interest (2) is formulated in terms of the unit outer normal of the oriented surface $\Gamma_1$. This leads to the term (21) inside the augmented Lagrangian (15). In order to utilize the differentiation capability of FEniCS w.r.t. vertex coordinates, we need to represent (21) in terms of an integral. Since the edges are the interior facets of the surface mesh for $\Gamma_1$, and $\mathbf{d}_E$ and $\mathbf{b}_E$ can be represented as constant on edges as explained above, (21) can indeed be written as an integral w.r.t. the interior facet measure dS on $\Gamma_1$. Then, however, the outer normal vectors appearing in the term $\log_{\mathbf{n}_E^+} \mathbf{n}_E^-$ are not available. We remedy the situation by observing that the geodesic distance between the two normal vectors $\mathbf{n}_E^+$ and $\mathbf{n}_E^-$ on the two triangles sharing the edge $E$ can also be expressed via the co-normal (or in-plane normal) vectors $\boldsymbol{\mu}_E^+$, $\boldsymbol{\mu}_E^-$, as is shown in Figure 6. Indeed, one has

$$\big| \log_{\mathbf{n}_E^+} \mathbf{n}_E^- \big|^2 = \big| \log_{\boldsymbol{\mu}_E^+} (-\boldsymbol{\mu}_E^-) \big|^2.$$

Since the co-normal vectors are intrinsic to the surface $\Gamma_1$, they are available on $\Gamma_1$ while $\mathbf{n}_E^+$ and $\mathbf{n}_E^-$ are not.

## 5. Numerical Results

In this section we present numerical results obtained with the algorithm of Section 3 for the geological impedance tomography model problem described in the previous section. The data of the problem are given in Table 3, and the initial guess of the inclusion, as well as the true inclusion, are shown in Figure 7.
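Before turning to the results, here is a quick numerical sanity check of the co-normal identity from Section 4.3, for two concrete triangles sharing an edge. Since both logarithm magnitudes equal angles between unit vectors, we simply compare the two arccos values; the geometry and all names below are our own illustration:

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

# two triangles sharing the edge from a to b, folded out of plane
a, b = np.array([0., 0., 0.]), np.array([1., 0., 0.])
c_plus = np.array([0.5, 1.0, 0.0])           # apex of triangle T+
c_minus = np.array([0.5, -0.8, 0.6])         # apex of triangle T-

n_plus = unit(np.cross(b - a, c_plus - a))
n_minus = unit(np.cross(c_minus - a, b - a))   # consistent orientation

e = unit(b - a)
# outward co-normals: in the facet plane, orthogonal to the edge
mu_plus = unit(np.cross(e, n_plus))
mu_minus = unit(np.cross(n_minus, e))

angle_normals = np.arccos(np.clip(np.dot(n_plus, n_minus), -1, 1))
angle_conormals = np.arccos(np.clip(np.dot(mu_plus, -mu_minus), -1, 1))
print(angle_normals, angle_conormals)   # both ~0.6435: the identity holds
```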
The state and adjoint state were discretized using piecewise linear, globally continuous finite elements on a tetrahedral grid of $\Omega_h$ minus the volume enclosed by $\Gamma_1$. Regarding the shape optimization subproblem of step 4, we perform several gradient steps per split Bregman iteration, combined with an Armijo line search. We stop the whole algorithm, i.e., the outer Bregman iteration, when the gradient of the above mentioned shape optimization problem has a norm below a prescribed tolerance in the sense of (22).

In Figure 8, we show the results obtained in the noise-free setting (top row) and with noise (bottom row). In the latter case, normally distributed random noise with zero mean is added per degree of freedom of $z_i$ on $\Gamma_2$ for each of the $r$ simulations of the forward model (17). The amount of noise is considerable when put in relation to the average range of values for the simulated states, which is

$$\frac{1}{r} \sum_{i=1}^r \Big( \max_{\mathbf{s} \in \Gamma_2} z_i(\mathbf{s}) - \min_{\mathbf{s} \in \Gamma_2} z_i(\mathbf{s}) \Big) \approx 0.34.$$

Due to mesh corruption, we have to remesh at some point in the cases with noise. Afterwards, we restart the algorithm of Section 3 with the remeshed domain as new initial guess.

For comparison, we also provide results obtained for a related problem in Figure 8, using the popular surface area regularization with the same data otherwise. For the surface area regularization, $|\mathbf{n}|_{DTV}(\Gamma_1)$ is replaced by $\gamma \sum_F |F|$, where $F$ are the facets of $\Gamma_1$. Because the problem is smooth in this case, we apply a shape gradient scheme directly rather than a split Bregman scheme and terminate as soon as the norm of the gradient falls below a prescribed tolerance. The regularization parameters $\beta$ and $\gamma$ are selected by hand in each case. Automatic parameter selection strategies can clearly be applied here as well, but this is out of the scope of the present paper.

As is expected and well known, the use of surface area regularization leads to results in which the identified inclusion is smoothed out. This can be explained by the observation that the gradient-based minimization of the surface area yields a mean curvature flow. By contrast, our novel prior (4) allows for piecewise flat shapes, and thus the interface is closely reconstructed in the noise-free situation. Even in the presence of noise, the reconstruction is remarkably good. In particular, the flat lateral surfaces and sharp edges can be identified quite well.

## 6. Conclusions

In this paper we introduced a discrete analogue of the total variation prior for the normal vector field proposed in [BergmannHerrmannHerzogSchmidtVidalNunez2019:1_preprint].
# Simple English

A horizon can be loosely defined as follows: it is a frontier between things observable and things unobservable.

Particle horizon. If the Universe has a finite age, then light travels only a finite distance in that time, and the volume of space from which we can receive information at a given moment of time is limited. The boundary of this volume is called the particle horizon.

Event horizon. The event horizon is the complement of the particle horizon. The event horizon encloses the set of points from which signals sent at a given moment of time will never be received by an observer in the future.

A space-time diagram is a representation of space-time on a two-dimensional plane, with one timelike and one spacelike coordinate. It is typically used for spherically symmetric spacetimes (such as all homogeneous cosmological models), in which angular coordinates are suppressed.

References: [1]

### Problem 1

Draw a space-time diagram that shows the behaviour of worldlines of comoving observers in a

1. stationary universe with a beginning
2. expanding universe in comoving coordinates
3. expanding universe in proper coordinates

### Problem 2

Suppose there is a static universe with homogeneously distributed galaxies, which came into being at some finite moment of time. Draw graphically the particle horizon for some static observer.

### Problem 3

How does the horizon for the given observer change with time?

### Problem 4

Is there an event horizon in the static Universe? What if the Universe ends at some finite time?

### Problem 5

The horizon riddle. Consider two widely separated observers, A and B (see Figure). Suppose they have overlapping horizons, but each can apparently see things that the other cannot. We ask: Can B communicate to A information that extends A's knowledge of things beyond his horizon? If so, then a third observer C may communicate to B information that extends her horizon, which can then be communicated to A. Hence, an unlimited sequence of observers B, C, D, E, ... may extend A's knowledge of the Universe to indefinite limits. According to this argument A has no true horizon. This is the horizon riddle. Try to resolve it for the static Universe.

The horizon riddle: can two observers with overlapping horizons pass information to each other regarding things outside of the other's horizon?

### Problem 6

Suppose observer O in a stationary universe with a beginning sees A in some direction at distance $L$ and B in the opposite direction, also at distance $L$. How large must $L$ be in order for A and B to be unaware of each other's existence at the time when they are seen by O?

### Problem 7

Draw spacetime diagrams in terms of comoving coordinate and conformal time and determine whether event or particle horizons exist for:

• the universe which has a beginning and an end in conformal time. The closed Friedman universe that begins with a Big Bang and ends with a Big Crunch belongs to this class.
• the universe which has a beginning but no end in conformal time. The Einstein–de Sitter universe and the Friedman universe of negative curvature, which begin with a Big Bang and expand forever, belong to this class.
• the universe which has an end but no beginning in conformal time. The de Sitter and steady-state universes belong to this class.
• the universe which has no beginning and no ending in conformal time. The Einstein static and the Milne universes are members of this class.
Conformal time is the altered time coordinate $\eta=\eta(t)$, defined in such a way that light cones on the spacetime diagram in terms of $\eta$ and the comoving spatial coordinate are always straight diagonal lines, even when the universe is not stationary.

### Problem 8

Draw the spacetime diagram in terms of comoving space and ordinary time for the universe with an end but no beginning in conformal time.

### Problem 9

Formulate the necessary conditions in terms of conformal time for a universe to provide a comoving observer with

• a particle horizon
• an event horizon

### Problem 10

Consider two galaxies observable at present time, $A$ and $B$. Suppose that at the moment of detection of light signals from them (now) the distances to them are such that $L_{det}^{A}<L^{B}_{det}$. In other words, if those galaxies had equal absolute luminosities, galaxy $B$ would seem dimmer. Is it possible for galaxy $B$ (the dimmer one) to be closer to us at the moment of its signal's emission than galaxy $A$ (the brighter one) at the moment of $A$'s signal's emission?

### Problem 11

Show on a spacetime diagram the difference in geometry of light cones in universes with and without particle horizons.

## References

1. E. Harrison. Cosmology: the science of the Universe. CUP (1981)
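For reference, an explicit form of the conformal time defined above (assuming an FLRW metric with scale factor $a(t)$, and fixing the integration constant at the origin of time) is

$$\eta(t) = \int_0^{t} \frac{dt'}{a(t')},$$

so that radial light rays satisfy $\Delta\chi = \pm\Delta\eta$ in the comoving coordinate $\chi$, i.e., they are straight 45° lines on the $(\eta,\chi)$ diagram.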
# Vector potential and constant magnetic flux density

1. ### skyboarder2

Hi,
I would like to verify analytically that a vector potential of the form A=1/2(-yB0, xB0, 0) produces a constant magnetic flux density of magnitude B0 in the z direction. (I guess I'd have to use the relation B=$$\forall$$$$\wedge$$A...)

2. ### marcusl

That is correct (assuming you meant to write the symbol $$\nabla$$). Do you have a question?

3. ### skyboarder2

Nope, I'm just looking for a method to prove it mathematically.

4. ### marcusl

Write out the components of $$\vec{\nabla} \times \vec{A}$$ in Cartesian coordinates.
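For completeness, marcusl's suggestion can also be carried out symbolically. A small sympy sketch (the coordinate system name N is arbitrary):

```python
# Check that B = curl A = (0, 0, B0) for A = (1/2)(-y*B0, x*B0, 0).
from sympy import symbols, Rational
from sympy.vector import CoordSys3D, curl

B0 = symbols('B_0')
N = CoordSys3D('N')

A = Rational(1, 2) * B0 * (-N.y * N.i + N.x * N.j)
print(curl(A))   # B_0*N.k, i.e. a constant field of magnitude B0 along z
```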
pgfplotstable and longtable: caption is shown multiple times in the list of tables

I'm visualizing a csv file using pgfplotstable and longtable as follows:

\newcommand{\powerTable}[3]{ \pgfplotstabletypeset[ string type, col sep = comma, skip rows between index={0}{#2}, skip rows between index={#3}{1000000000}, begin table=\begin{longtable}, end table=\end{longtable}, every last row/.style={after row=\bottomrule}, column type={l}, %default columns/{N}/.style={column type={c}}, columns/{Difference}/.style={column name={Vergroting}}, columns/{Increase}/.style={column type={c}, column name={Vergroting (\%)}}, columns/{Time}/.style={column name={Tijd}}, ]{csv/Power Packomania Problems Comparison.csv} }

I added a caption that shows up on every page. However, it also shows up multiple times in my \listoftables. I'm assuming this is because pgfplotstable actually generates a duplicate of the caption for every page. How can I make it show up only once in the list of tables? Thanks!

After a lot of trying out different things, I found this to work:

\newcommand{\powerTable}[3]{ \pgfplotstabletypeset[ string type, col sep = comma, skip rows between index={0}{#2}, skip rows between index={#3}{1000000000}, begin table=\begin{longtable}, end table=\end{longtable}, every last row/.style={after row=\bottomrule}, every first row/.style={before row=\captionsetup{labelformat=empty}\caption[#1]{}\\[-23pt]}, column type={l}, %default columns/{N}/.style={column type={c}}, columns/{Difference}/.style={column name={Vergroting}}, columns/{Increase}/.style={column type={c}, column name={Vergroting (\%)}}, columns/{Time}/.style={column name={Tijd}}, ]{csv/Power Packomania Problems Comparison.csv} }

Notice the use of \\[-23pt] in the style of every first row.

\newcommand{\powerTable}[3]{ \pgfplotstabletypeset[ string type, col sep = comma, skip rows between index={0}{#2}, skip rows between index={#3}{1000000000}, begin table=\begin{longtable}, end table=\end{longtable}, every last row/.style={after row=\caption{#1}\bottomrule}, column type={l}, %default columns/{N}/.style={column type={c}}, columns/{Difference}/.style={column name={Vergroting}}, columns/{Increase}/.style={column type={c}, column name={Vergroting (\%)}}, columns/{Time}/.style={column name={Tijd}}, ]{csv/Power Packomania Problems Comparison.csv} }

If you use the caption command on every page as \caption[]{#1}, no entry in the list of tables is made; the entry is only created at the last row. Therefore the list of tables shows the table on the last page where it is located. I do not know for sure whether this solves the problem, because you did not provide a Minimal Working Example; it is a solution from knowledge, untested.

• @egreg you are right. The answer was edited just a few minutes ago. – Peter Ebelsberger May 5 '16 at 22:04
• This almost worked! I had to change this line to every last row/.style={after row=\caption{#1}\\\bottomrule}, (note the additional \\ before bottomrule). However, is it possible to get the page number to be the first page of the table? – The Oddler May 5 '16 at 22:06
• I tried using every head row/.style={before row=\caption[]{#1}\\\toprule, after row=\midrule\endhead}, every last row/.style={after row=\bottomrule}, every first row/.style={before row=\captionsetup{labelformat=empty}\caption[#1]{}\\} which almost works, only now I get an extra white space before the first row. But if I remove the \\ it won't build.
– The Oddler May 5 '16 at 22:13
• @TheOddler try \newtabularline instead of \\ (maybe this helps) – Peter Ebelsberger May 5 '16 at 23:47
• It is supposed to be \tabularnewline, but it gives the same problem. And using \kill does hide the row, but then the table doesn't show in the LoT anymore... – The Oddler May 6 '16 at 8:11
### Browse by

The Graduate School student collection includes three series: 1. ETD (electronic theses and dissertations), 2. Conference papers presented annually at the Graduate Research and Scholarly Projects Symposium (GRASP), and 3. Abstracts of the works presented at the annual Capitol Graduate Research Summit at Topeka, Kansas.

### Sub-communities within this community

• #### ETD: Electronic Theses and Dissertations

The collection of digital copies of Ph.D. and Master's theses (fall 2005 -- )

• #### GRASP: Graduate Research and Scholarly Projects Annual Symposium

### Recent Submissions

• #### A tragedy within: An evaluation of the legal realities concerning the Wichita State football team crash of October 2, 1970  (Wichita State University, 12/01/2001) This thesis attempts to evaluate the National Transportation Safety Board (NTSB) hearing held in regard to the Wichita State football team crash of October 2, 1970. From the exterior, it appeared like a normal investigative ...

• #### A survey of Kansas speech-language pathologists' knowledge and confidence regarding literacy intervention  (Wichita State University, 2021-02-18) The connection between spoken and written language has been well established in the research literature. Spoken language is a crucial component in supporting the development of reading and writing. For the past 19 years, ...

• #### Understanding the physics of droplet electrocoalescence in a microtrap  (Wichita State University, 2021-02-18) This work details a parametric study for merging microscale water droplets, using an electric field, in a microfluidic device. This device, titled TAP (Trapping and Assisted Pairing), is a cell handling platform for conducting ...

• #### Computerized sentence building as a treatment for aphasia  (Wichita State University, 2021-02-18) Acute cerebrovascular disease (stroke) is one of the leading causes of death in the United States, and those who survive are often left with significant long-term disabilities. According to Kansas Health Matters, between ...

• #### An energy consumption model under time-of-use rates for scheduling of manufacturing shops  (Wichita State University, 2021-02-18) One of the most important contributors supporting economic prosperity in Kansas is attributed to manufacturing. It accounted for 16.30% of the output and 11.69% of the workforce (2020 Kansas Manufacturing Facts). In addition, ...

• #### Wheat protein-based bio-scaffold for neural regeneration  (Wichita State University, 2021-02-18) Spinal and peripheral nerve injuries are common in both civil and military environments and are primarily the result of transection injuries or burns. In the majority of nerve injuries, the nerve ends cannot be directly ...

• #### What factors have an effect on the life expectancy of Kansas citizens?  (Wichita State University, 2021-02-18) Life expectancy is a common measure of public health and has been positively correlated with economic growth. This research focuses on how access to basic needs and socio-economic factors, often the focus of state-level ...

• #### Optimizing the thermal performance of phase-change thermal management systems for utility-scale applications in Kansas  (Wichita State University, 2021-02-18) Unlike conventional cooling systems, phase-change cooling systems using wicks offer reliable, high and effective heat flux cooling capability. However, the thermal performance of these novel thermal management systems ...
• #### Intralaminar fracture toughness of 3D printed unidirectional composites  (Wichita State University, 2020-12) The purpose of this research was to identify the fracture toughness $(G_{Ic})$ of intralaminar specimens of 3D printed composites. Two types of polylactic acid (PLA) were tested, PLA Neat (with no reinforcement) and PLA ...

• #### Acoustic absorption properties of granular and 3D printed aerogels  (Wichita State University, 2020-12) Aerogel is a diverse class of nano-porous ultralight solid material. This research examines the acoustic properties of aerogel-based structures and evaluates their suitability for aircraft noise-reduction applications. ...

• #### Neuroadaptive observer design for spacecraft attitude control and formation attitude synchronization  (Wichita State University, 2020-12) Spacecraft attitude-tracking control problems require uncertainty- and disturbance-rejecting controllers to perform well in practical scenarios. The objective of this research is to design an adaptive controller that ...

• #### Peripheral nerve-derived adult pluripotent stem (NEDAPS) cells for induction to osteoblast as a cell therapy in segmental bone defect fractures  (Wichita State University, 2020-12) Segmental defect fractures in bone can result from various causes such as primary injury, fracture, developmental deformities, debridement of bone in osteomyelitis, or resection of a bone tumor. These ...

• #### Implementation and evaluation of curved layer fused deposition modeling  (Wichita State University, 2020-12) Fused Deposition Modeling (FDM) is an extrusion based additive manufacturing (AM) process in which thermoplastic material is extruded through a nozzle. The nozzle deposits material along a two-dimensional path to create ...

• #### Acoustic evaluation of the bell 699 rotor on the tiltrotor test rig in the national full-scale aerodynamics complex 40- by 80- foot wind tunnel  (Wichita State University, 2020-12) Aircraft noise is a growing problem in the air travel industry [1]. Urban air mobility (UAM) is a new movement to use rotorcraft to fly in and between large cities to alleviate ground traffic congestion [2]. Aircraft noise ...

• #### Initiating object handover in human-robot collaboration using a multi-modal wearable and deep learning visual system  (Wichita State University, 2020-12) Hybrid robotic systems involving humans and machines working together are becoming a fundamental part of human life: from home automation and cell phones to grocery store pickups, humans are working with machines ...

• #### Occupant injury assessment in an emergency landing condition for a vertical take-off and landing aircraft  (Wichita State University, 2020-12) In the next few years, the aeronautical industry is poised to change in a way it has not for decades. There has been a rising interest in deploying Vertical Take-Off and Landing (VTOL) aircraft to serve on-demand ...

• #### Writing self-efficacy and linguistic diversity of first-year composition students: An exploratory study  (Wichita State University, 2020-12) This study investigates the potential relationship between student writing self-efficacy and marginalized linguistic identities. A total of sixty-nine first-year composition students across two semesters responded to ...
• #### Characterization of the in-plane homogenized mechanical properties of a hexagonal honeycomb core  (Wichita State University, 2020-12) The in-plane homogenized elasto-plastic behavior of a hexagonal aluminum honeycomb core has been investigated using experiments and finite element analysis of an idealized representative volume element. In-plane uniaxial ... • #### Short term forecasting of solar power with machine learning and time series techniques  (Wichita State University, 2020-12) Solar electric generation is the fastest-growing and lowest-cost form of electric generation today. Since solar power generation is variable, nonlinear, and unpredictable, it is posing technical and economic challenges to ... • #### Distortion of refill friction stir spot welding  (Wichita State University, 2020-12) Refill Friction Stir Spot Welding (RFSSW) produces a solid-state lap joint between sheet metals, preferably aluminum alloys, without leaving any exit holes in the materials. This process was derived from Friction Stir Spot ...
# Fourier transform of $e^{-\frac{x^2}{2}}\frac{1}{\mathsf{sinc}( x )}$

I have to find the inverse Fourier transform of
\begin{align*} F(\omega)=e^{-\frac{\omega^2}{2}}\frac{1}{\mathsf{sinc}( \omega )} \end{align*}
I was wondering whether it exists, maybe in the sense of distributions.

Using the duality between the Fourier transform and the inverse Fourier transform, we can ask what the Fourier transform of
\begin{align*} f(x)=e^{-\frac{x^2}{2}}\frac{1}{\mathsf{sinc}( x )} \end{align*}
is, and whether it exists.

Note that the function $f(x)$ has vertical asymptotes at $x=n \pi$ for $n \in \mathbb{Z} \setminus \{0\}$, since
\begin{align} \mathsf{sinc}( x )=\frac{\sin(x)}{x} \end{align}
equals $0$ at nonzero integer multiples of $\pi$. Is this a problem?

One way to look at the problem is to find the convolution of $\mathcal{F}(e^{-\frac{x^2}{2}})$ and $\mathcal{F}(\frac{1}{\mathsf{sinc}( x )})$. However, this direction can be problematic, since $\mathcal{F}(\frac{1}{\mathsf{sinc}( x )})$ might not exist.

• The Fourier transform of a convolution is a product. So what you are looking for is a convolution of the inverse Fourier transform of the first function (which will be a Gaussian also) and the inverse Fourier transform of the reciprocal sinc. – Paul Jan 28 '15 at 21:43
• Yes, good. But what is the inverse Fourier transform of $\frac{1}{\mathsf{sinc}}$? – Boby Jan 28 '15 at 21:45
• Since 1/sinc is unbounded, I can't imagine it has one. Is there some bound to address this issue? – Paul Jan 28 '15 at 21:51
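For reference (and hedging on conventions): with the unitary transform $\hat f(\omega) = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} f(x)\, e^{-i\omega x}\,\mathrm{d}x$, the Gaussian factor is self-reciprocal,

$$\mathcal{F}\big(e^{-x^2/2}\big)(\omega) = e^{-\omega^2/2},$$

so, in line with Paul's comment, the question reduces to whether $1/\mathsf{sinc}$ has a (distributional) Fourier transform that can then be convolved with this Gaussian.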
# Human-Kin

Human-Kin are diverse and adaptable. They thrive both in civilized lands and at the fringes of the world. They also mix well with other races, and though together they may not be as wholly dominant as the Kalic, or have the influence of the dwarven fortresses, they can be found anywhere and everywhere.

## Vincal

Vincal are skilled communicators, diplomats, and team players. They are commonly found in positions of political or religious leadership, and are good with people. They commonly worship the gods, especially Wordix and Hollbirth.

Vincal Mods: +1 CNT, +1 RES

Body Type: Medium

Racial: Once Per Day, you can give one nearby target Circumstance +10.

Vincal are average and fair of build. Males average 68kg and 170cm. Females 64kg and 160cm. Their hair ranges from blond to black, and their skin from light to dark. Their eyes are usually blue or a deep brown. They can live for 100 years, and become an adult at 16.

## Goran

The survivors, the wanderers: Goran live where living is hard and life can end with the fall of a monster's claw or a frigid wind's breath. They live tribally, and don't mix much with other races. Skill in combat is essential in their barbarian society, and everything from personal disputes to politics can be settled through conflict. They usually worship Grondshok, though a few follow Wimble. Some of the more brutal tribes follow darker entities still.

Goran Mods: +2 VIG, +1 RES, -1 EDU

Body Type: Medium

Languages: Goran

Racial: Once Per Day, if an attack would inflict a wound on you, do not take the wound, and instead regain toughness equal to your Tenacity.

Goran are huge, tall, and imposing. Males average 91kg and 200cm. Females 77kg and 170cm. Their hair is usually black or brown. Their skin ranges from pale to a red tan. Their eyes can be blue, brown or black. They can live for 90 years, and become an adult at 10.

## Harenite

Originally desert nomads, the Harenite center their whole culture around the worship of higher powers. And not just a single god or patron: Harenite are famously multi-theistic, worshipping multiple higher powers equally, even those with conflicting views. Some people are put off by this practice, but many small communities welcome them as general priests for all the divines. Regardless of their other devotions, almost all Harenite worship Hollbirth.

Harenite Mods: +2 RES, +1 CNT, -1 EDU

Body Type: Medium

Racial: Once Per Day, you may pray to a Higher Power, and create an effect from their constitution. The rating of this effect is $$4*Level + RES$$
# Publications database

2019-11-13 09:49 [PUBDB-2019-04190] Book/Report/Dissertation / PhD Thesis
Marchetti, B.
Characterization of Ultrashort Electron Bunches at the SINBAD-ARES Linac [DESY-THESIS-2019-026]
Hamburg : Verlag Deutsches Elektronen-Synchrotron, DESY-THESIS, 176 pp. (2019) [10.3204/PUBDB-2019-04190] = Dissertation, Universität Hamburg, 2019
The generation of ultrashort electron bunches is an active area of research in accelerator physics. A key application of such bunches is the injection into novel accelerators with high-frequency accelerating fields, such as laser-wakefield plasma accelerators or dielectric laser accelerators. [...]

2019-11-12 15:10 [PUBDB-2019-04168] Report/Journal Article
et al
Modelling the coincident observation of a high-energy neutrino and a bright blazar flare [arXiv:1807.04275]
Nature astronomy 3(1), 88 - 92 (2019) [10.1038/s41550-018-0610-1]
In September 2017, the IceCube Neutrino Observatory recorded a very-high-energy neutrino in directional coincidence with a blazar in an unusually bright gamma-ray state, TXS 0506+056 (refs 1, 2). Blazars are prominent photon sources in the Universe because they harbour a relativistic jet whose radiation is strongly collimated and amplified. [...]

2019-11-11 17:20 [PUBDB-2019-04134] Journal Article
et al
Effect of Solvent Additives on the Morphology and Device Performance of Printed Nonfullerene Acceptor Based Organic Solar Cells
ACS applied materials & interfaces 45(11), 42313-42321 (2019) [10.1021/acsami.9b16784]
Printing of active layers of high-efficiency organic solar cells and morphology control by processing with varying solvent additive concentrations are important to realize real-world use of bulk-heterojunction photovoltaics, as it enables both up-scaling and optimization of the device performance. In this work, active layers of the conjugated polymer with benzodithiophene units PBDB-T-SF and the nonfullerene small molecule acceptor IT-4F are printed using meniscus guided slot-die coating. [...]

2019-11-11 16:11 [PUBDB-2019-04124] Journal Article
et al
Relationship between structure and molecular interactions in monolayers of specially designed aminolipids
Nanoscale advances 1(9), 3529 - 3536 (2019) [10.1039/C9NA00355J]
Artificial cationic lipids are already recognized as highly efficient gene therapy tools. Here, we focus on another potential use of aminolipids, in their electrically-uncharged state, for the formation of covalently cross-linked, one-molecule-thin films at interfaces. [...]

2019-11-11 15:59 [PUBDB-2019-04123] Journal Article
et al
Low-temperature luminescence spectrum of forbidden $4f^{13}5d$–$4f^{14}$ transitions in $CaF_2:Lu^{3+}$ crystal
Magnetic resonance in solids 21(4), 1-7 (2019) [10.26907/mrsej-19413]
$Lu^{3+}$ $4f^{13}5d$–$4f^{14}$ luminescence in $CaF_2:Lu^{3+}$ crystal at 8 K was studied with a high spectral resolution using synchrotron radiation excitation. Absence of a zero-phonon line in the recorded spectrum was explained, and features in the recorded spectrum were reproduced by simulation based on the microscopic model of electron-phonon interaction and the developed theory of non-Condon spectra.
2019-11-11 15:59 [PUBDB-2019-04122] Journal Article
et al.
Characterizing transmissive diamond gratings as beam splitters for the hard X-ray single-shot spectrometer of the European XFEL
Journal of synchrotron radiation 26(3), 708 - 713 (2019) [10.1107/S1600577519003382]
The European X-ray Free Electron Laser (EuXFEL) offers intense, coherent femtosecond pulses, resulting in characteristic peak brilliance values a billion times higher than those of conventional synchrotron facilities. Such pulses result in extreme peak radiation levels of the order of terawatts cm$^{-2}$ for any optical component in the beam and can exceed the ablation threshold of many materials. [...]

2019-11-11 15:51 [PUBDB-2019-04119] Journal Article
et al.
Sensitization of luminescence from $Sm^{3+}$ ions in fluoride hosts $K_2YF_5$ and $K_2GdF_5$ by doping with $Tb^{3+}$ ions
Journal of luminescence 209, 340 - 345 (2019) [10.1016/j.jlumin.2018.12.057]
Spectroscopic properties and energy transfer mechanisms for isostructural fluoride $K_2YF_5$ and $K_2GdF_5$ crystals singly and doubly doped with different concentrations of $Tb^{3+}$ and $Sm^{3+}$ ions have been investigated under excitation in the deep ultraviolet (DUV) and vacuum UV (VUV) spectral regions. In these hosts, luminescence of $Sm^{3+}$ ions is enhanced under DUV excitation by doping with $Tb^{3+}$ ions; namely, the $Sm^{3+}$ excitation spectra show additional intense excitation in the region of $Tb^{3+}$ spin-allowed 4f-5d transitions at 200-220 nm due to energy transfer from $Tb^{3+}$ to $Sm^{3+}$. [...]

2019-11-11 15:01 [PUBDB-2019-04112] Journal Article
et al.
Interfacial premelting of ice in nano composite materials
Physical chemistry, chemical physics 21(7), 3734 - 3741 (2019) [10.1039/C8CP05604H]
The interfacial premelting in ice/clay nano composites was studied by high-energy X-ray diffraction. Below the melting point of bulk water, the formation of liquid water was observed for the ice/vermiculite and ice/kaolin systems. [...]

2019-11-11 11:55 [PUBDB-2019-04080] Journal Article
et al.
Trends in Synthesis, Crystal Structure, and Thermal and Magnetic Properties of Rare-Earth Metal Borohydrides
Inorganic chemistry 58(9), 5503 - 5517 (2019) [10.1021/acs.inorgchem.8b03258]
Synthesis, crystal structures, and thermal and magnetic properties of the complete series of halide-free rare-earth (RE) metal borohydrides are presented. A new synthesis method provides high-yield and high-purity products. [...]

2019-11-11 11:53 [PUBDB-2019-04079] Journal Article
et al.
Cation ordering, ferrimagnetism and ferroelectric relaxor behavior in $Pb(Fe_{1-x}Sc_x)_{2/3}W_{1/3}O_3$ solid solutions
Ceramic samples of the multiferroic perovskite $Pb(Fe_{1-x}Sc_x)_{2/3}W_{1/3}O_3$ with $0 \le x \le 0.4$ have been synthesized using a conventional solid-state reaction method, and investigated experimentally and theoretically using first-principles calculations. Rietveld analyses of joint synchrotron X-ray and neutron diffraction patterns show the formation of a pure crystalline phase with cubic ($Fm\bar{3}m$) structure with partial ordering in the B-sites. [...]
Dataset Open Access

# The "Last.fm" data set used in the article "Cumulative effects of triadic closure and homophily in social networks"

Mikko Kivelä

### Citation Style Language JSON Export

{
  "publisher": "Zenodo",
  "DOI": "10.5281/zenodo.3726824",
  "author": [
    {
      "family": "Mikko Kivel\u00e4"
    }
  ],
  "issued": {
    "date-parts": [
      [
        2020,
        3,
        25
      ]
    ]
  },
  "abstract": "<p>This is the &quot;Last.fm&quot; network used in the article:</p>\n\n<p>A. Asikainen, G. I&ntilde;iguez, J. Ure&ntilde;a-Carri&oacute;n, K. Kaski, M. Kivel&auml;. Cumulative effects of triadic closure and homophily in social networks. Science Advances (in press)</p>\n\n<p>https://doi.org/10.1126/sciadv.aax7310</p>\n\n<p>The data set is described in the article. Please cite the original article when using this data set.</p>\n\n<p>The original data on which this network is based was downloaded from audioscrobbler.net, where it was licensed under the &quot;Creative Commons Attribution-NonCommercial-ShareAlike 2.0 UK: England &amp; Wales&quot; license, and accordingly this data set uses the same license.</p>\n\n<p>The data contains two files:</p>\n\n<p><strong>lastfm.edg</strong><br>\nThis is the network formatted as an edge list, where each row in the file is an edge connecting the two nodes indicated by the two numbers separated by a whitespace. Each node number corresponds to a single account on the website.</p>\n\n<p><strong>lastfm_genders.txt</strong><br>\nThis is the list of genders of the nodes. Each row corresponds to one node. The first number is the node id (matching the one in the edge list) and the second number indicates the gender such that 0=male and 1=female.</p>",
  "title": "The \"Last.fm\" data set used in the article \"Cumulative effects of triadic closure and homophily in social networks\"",
  "type": "dataset",
  "id": "3726824"
}
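Both files are plain whitespace-separated text and are straightforward to parse. Below is a minimal loading sketch in Python; the use of networkx is an assumption for illustration, while the file names and formats come from the dataset description above:

```python
import networkx as nx

# Load the edge list: each row is "node_a node_b", whitespace-separated,
# where each node number corresponds to a single account.
G = nx.read_edgelist("lastfm.edg", nodetype=int)

# Load the gender labels: each row is "node_id gender", with 0=male, 1=female.
genders = {}
with open("lastfm_genders.txt") as f:
    for line in f:
        node_id, gender = line.split()
        genders[int(node_id)] = int(gender)

# Attach the gender labels as node attributes on the graph.
nx.set_node_attributes(G, genders, name="gender")

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```

Reading the edge list with `nodetype=int` keeps the node ids numeric so they match the ids in the gender file directly.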